Sample records for optimal task performance

  1. Increasing Optimism Protects Against Pain-Induced Impairment in Task-Shifting Performance.

    PubMed

    Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L

    2017-04-01

    Persistent pain can lead to difficulties in executive task performance. Three core executive functions that are often postulated are inhibition, updating, and shifting. Optimism, the tendency to expect that good things happen in the future, has been shown to protect against pain-induced performance deterioration in executive function updating. This study tested whether this protective effect of a temporary optimistic state, induced by means of a writing and visualization exercise, extended to the executive function shifting. A 2 (optimism: optimism vs no optimism) × 2 (pain: pain vs no pain) mixed factorial design was used. Participants (N = 61) completed a shifting task once with and once without concurrent painful heat stimulation after an optimism or neutral manipulation. Results showed that shifting performance was impaired when experimental heat pain was applied during task execution, and that optimism counteracted pain-induced deterioration in task-shifting performance. Experimentally induced heat pain impaired shifting task performance, and induced optimism counteracted this pain-induced performance deterioration. Identifying psychological factors that may diminish the negative effect of persistent pain on the ability to function in daily life is imperative. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.

  2. Testing the Limits of Optimizing Dual-Task Performance in Younger and Older Adults

    PubMed Central

    Strobach, Tilo; Frensch, Peter; Müller, Hermann Josef; Schubert, Torsten

    2012-01-01

    Impaired dual-task performance in younger and older adults can be improved with practice. Optimal conditions even allow for a (near) elimination of this impairment in younger adults. However, it is unknown whether such (near) elimination is the limit of performance improvements in older adults. The present study tests this limit in older adults under conditions of (a) a high amount of dual-task training and (b) training with simplified component tasks in dual-task situations. The data showed that a high amount of dual-task training in older adults provided no evidence for an improvement of dual-task performance to the optimal dual-task performance level achieved by younger adults. However, training with simplified component tasks in dual-task situations exclusively in older adults provided a similar level of optimal dual-task performance in both age groups. Therefore, by applying a testing-the-limits approach, we demonstrated that older adults improved dual-task performance to the same level as younger adults at the end of training under very specific conditions. PMID:22408613

  3. Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)

    2016-01-01

    The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
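
    The proportionality at the heart of the claim can be sketched in a few lines. The snippet below only illustrates the stated relationship (the target signal, measured signal, and scaling constant are all hypothetical), not the patented implementation:

```python
def physiological_difference(optimal_signal, measured_signal):
    """Absolute difference between the optimal and the measured physiological signal."""
    return abs(optimal_signal - measured_signal)

def environment_gain(difference, k=1.0, eps=1e-6):
    """Scale a physical attribute of the task environment in inverse proportion
    to the physiological difference value (larger difference -> less favorable task)."""
    return k / (difference + eps)

# Hypothetical example: target heart rate 70 bpm, trainee currently at 82 bpm.
diff = physiological_difference(70.0, 82.0)
print(f"difference = {diff:.1f}, environment gain = {environment_gain(diff):.3f}")
```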

  4. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment

    PubMed Central

    Zhang, Rubo; Yang, Yu

    2017-01-01

    This paper studies a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize a numerical optimization in a narrowed time range. Rolling time domain control is one of the better task planning techniques, which can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution. PMID:29186166
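
    For readers unfamiliar with the underlying optimizer, the sketch below implements a plain artificial bee colony (ABC) loop for a generic continuous cost function. It deliberately omits the quantum encoding and the scroll (rolling) time domain decomposition described in the abstract, and the quadratic cost is only a stand-in for a real task-planning objective:

```python
import numpy as np

def abc_minimize(cost, dim, bounds, n_food=20, limit=30, max_iter=200, seed=0):
    """Minimal artificial bee colony (ABC) optimizer for a continuous cost function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))   # candidate solutions (food sources)
    costs = np.array([cost(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)              # stagnation counters

    def neighbour(i):
        """Perturb one dimension of food source i relative to a random partner."""
        j = rng.integers(dim)
        k = rng.choice([p for p in range(n_food) if p != i])
        x = foods[i].copy()
        x[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(x, lo, hi)

    for _ in range(max_iter):
        # Employed bee phase: local search around every food source.
        for i in range(n_food):
            x = neighbour(i)
            c = cost(x)
            if c < costs[i]:
                foods[i], costs[i], trials[i] = x, c, 0
            else:
                trials[i] += 1
        # Onlooker bee phase: fitness-proportional re-exploration.
        fitness = 1.0 / (1.0 + costs - costs.min())
        probs = fitness / fitness.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):
            x = neighbour(i)
            c = cost(x)
            if c < costs[i]:
                foods[i], costs[i], trials[i] = x, c, 0
            else:
                trials[i] += 1
        # Scout bee phase: abandon sources that stopped improving.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(lo, hi, size=dim)
            costs[i] = cost(foods[i])
            trials[i] = 0

    best = costs.argmin()
    return foods[best], costs[best]

# Purely illustrative cost: distance of a 4-D decision vector from a "goal".
best_x, best_c = abc_minimize(lambda x: np.sum((x - 3.0) ** 2), dim=4, bounds=(-10, 10))
print(best_x, best_c)
```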

  5. Multi-AUV autonomous task planning based on the scroll time domain quantum bee colony optimization algorithm in uncertain environment.

    PubMed

    Li, Jianjun; Zhang, Rubo; Yang, Yu

    2017-01-01

    This paper studies a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize a numerical optimization in a narrowed time range. Rolling time domain control is one of the better task planning techniques, which can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal and obtain an approximately optimal solution.

  6. The effects of experimental pain and induced optimism on working memory task performance.

    PubMed

    Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L

    2016-07-01

    Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., operation span task). In two successive experiments we sought further evidence for the protective role of optimism on pain-induced working memory impairments. We used another working memory task (i.e., 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain)×2 (pre-score vs. post-score) mixed factorial design. In half of the participants, optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a life in the future where everything turned out for the best. In the control condition, participants wrote about and visualized a typical day in their life (TD). Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus that was presented 2 stimuli ago. Study 2 had a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counter-balanced order). Continuous heat stimulation was used with temperatures oscillating between 1°C above and 1°C below the individual pain threshold. In study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found that optimism protected against this pain-induced performance deterioration. Experimentally induced pain impairs concurrent but not subsequent working memory task performance. Manipulated optimism did not counteract pain-induced deterioration of 2-back performance. It is important to explore factors that may diminish the negative impact of pain on the ability to function in daily life, as pain itself often cannot be remediated. We are planning to conduct future studies that should shed further light on the conditions, contexts and executive operations for which optimism can act as a protective factor. Copyright © 2016 Scandinavian Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  7. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective designs of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d') throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d' by at least 17.8%, but yielded higher d' over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d'. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
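
    The maxi-min selection step described above can be illustrated with a toy example: given predicted detectability at several locations for each candidate combination of FFM and regularization parameters, pick the combination whose worst-case (minimum) d' is largest. The d' values below are random placeholders rather than the model-based predictions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows: candidate (FFM, regularization) settings; columns: image locations.
# In the paper d' comes from resolution/noise predictors; here it is a random stand-in.
d_prime = rng.uniform(0.5, 3.0, size=(50, 8))

worst_case = d_prime.min(axis=1)          # minimum d' over locations for each setting
best_setting = int(worst_case.argmax())   # maxi-min: maximize the worst-case detectability

print("chosen setting:", best_setting, "min d' =", worst_case[best_setting])
```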

  8. Strategic Adaptation to Task Characteristics, Incentives, and Individual Differences in Dual-Tasking

    PubMed Central

    Janssen, Christian P.; Brumby, Duncan P.

    2015-01-01

    We investigate how good people are at multitasking by comparing behavior to a prediction of the optimal strategy for dividing attention between two concurrent tasks. In our experiment, 24 participants had to interleave entering digits on a keyboard with controlling a randomly moving cursor with a joystick. The difficulty of the tracking task was systematically varied as a within-subjects factor. Participants were also exposed to different explicit reward functions that varied the importance of the tracking task relative to the typing task (between-subjects). Results demonstrate that these changes in task characteristics and monetary incentives, together with individual differences in typing ability, influenced how participants chose to interleave tasks. This change in strategy then affected their performance on each task. A computational cognitive model was used to predict performance for a wide set of alternative strategies for how participants might have interleaved tasks. This allowed for predictions of optimal performance to be derived, given the constraints placed on performance by the task and cognition. A comparison of human behavior with the predicted optimal strategy shows that participants behaved near optimally. Our findings have implications for the design and evaluation of technology for multitasking situations, as consideration should be given to the characteristics of the task, but also to how different users might use technology depending on their individual characteristics and their priorities. PMID:26161851

  9. Adjustable control station with movable monitors and cameras for viewing systems in robotics and teleoperations

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1994-01-01

    Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.

  10. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
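
    As a rough illustration of the parameterization mentioned above, the sketch below builds a tube current profile as a nonnegative linear combination of Gaussian basis functions over view angle and rescales it to a fixed total fluence; the number of views, number of basis functions, and coefficients are arbitrary choices for the example, not values from the study:

```python
import numpy as np

def tcm_profile(coeffs, n_views=360, width=None):
    """Tube current modulation as a linear combination of Gaussian basis functions
    spaced evenly over view angle."""
    n_basis = len(coeffs)
    width = width or n_views / (2 * n_basis)
    views = np.arange(n_views)
    centers = np.linspace(0, n_views, n_basis, endpoint=False)
    basis = np.exp(-0.5 * ((views[:, None] - centers[None, :]) / width) ** 2)
    profile = basis @ np.asarray(coeffs, dtype=float)
    return np.clip(profile, 1e-6, None)           # keep the tube current positive

def normalize_to_dose(profile, total_fluence=1.0):
    """Rescale so the summed fluence over all views matches a fixed dose budget."""
    return profile * (total_fluence / profile.sum())

# Illustrative coefficients for 8 basis functions; in a task-driven design this
# coefficient vector is what an optimizer such as CMA-ES would adjust.
profile = normalize_to_dose(tcm_profile([1, 2, 3, 2, 1, 2, 3, 2]))
print(profile.shape, profile.sum())
```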

  11. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

    Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.

  12. Dividing Attention Between Tasks: Testing Whether Explicit Payoff Functions Elicit Optimal Dual-Task Performance.

    PubMed

    Farmer, George D; Janssen, Christian P; Nguyen, Anh T; Brumby, Duncan P

    2018-04-01

    We test people's ability to optimize performance across two concurrent tasks. Participants performed a number entry task while controlling a randomly moving cursor with a joystick. Participants received explicit feedback on their performance on these tasks in the form of a single combined score. This payoff function was varied between conditions to change the value of one task relative to the other. We found that participants adapted their strategy for interleaving the two tasks, by varying how long they spent on one task before switching to the other, in order to achieve the near maximum payoff available in each condition. In a second experiment, we show that this behavior is learned quickly (within 2-3 min over several discrete trials) and remained stable for as long as the payoff function did not change. The results of this work show that people are adaptive and flexible in how they prioritize and allocate attention in a dual-task setting. However, it also demonstrates some of the limits regarding people's ability to optimize payoff functions. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  13. Low-dose cone-beam CT via raw counts domain low-signal correction schemes: Performance assessment and task-based parameter optimization (Part II. Task-based parameter optimization).

    PubMed

    Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong

    2018-05-01

    Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI, located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
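
    Because d' has no closed-form dependence on the LSC parameters, the optimization reduces to picking the maximizer over a densely sampled parameter grid. A generic sketch of that step is shown below; the measured d' surface is replaced by an arbitrary analytic stand-in, and the grid ranges are illustrative only:

```python
import numpy as np

def measure_d_prime(p_low, p_high):
    """Stand-in for the experimentally measured detectability index at one
    (imaging task, location) for a given pair of LSC filter parameters."""
    return np.exp(-((p_low - 0.2) ** 2 + (p_high - 0.8) ** 2))  # illustrative surface

p_low_grid = np.linspace(0.0, 0.5, 26)
p_high_grid = np.linspace(0.5, 1.0, 26)

# Densely sampled d' map over the parameter space, then pick the argmax.
d_map = np.array([[measure_d_prime(pl, ph) for ph in p_high_grid] for pl in p_low_grid])
i, j = np.unravel_index(d_map.argmax(), d_map.shape)
print("optimal operating point:", p_low_grid[i], p_high_grid[j], "d' =", d_map[i, j])
```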

  14. The impact of crosstalk on three-dimensional laparoscopic performance and workload.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Watson, Marcus O; Stevenson, Andrew R L

    2017-10-01

    This is the first study to explore the effects of crosstalk from 3D laparoscopic displays on technical performance and workload. We studied crosstalk at magnitudes that may have been tolerated during laparoscopic surgery. Participants were 36 voluntary doctors. To minimize floor effects, participants completed their surgery rotations, and a laparoscopic suturing course for surgical trainees. We used a counterbalanced, within-subjects design in which participants were randomly assigned to complete laparoscopic tasks in one of six unique testing sequences. In a simulation laboratory, participants were randomly assigned to complete laparoscopic 'navigation in space' and suturing tasks in three viewing conditions: 2D, 3D without ghosting and 3D with ghosting. Participants calibrated their exposure to crosstalk as the maximum level of ghosting that they could tolerate without discomfort. The Randot® Stereotest was used to verify stereoacuity. The study performance metric was time to completion. The NASA TLX was used to measure workload. Normal threshold stereoacuity (40-20 second of arc) was verified in all participants. Comparing optimal 3D with 2D viewing conditions, mean performance times were 2.8 and 1.6 times faster in laparoscopic navigation in space and suturing tasks respectively (p< .001). Comparing optimal 3D with suboptimal 3D viewing conditions, mean performance times were 2.9 times faster in both tasks (p< .001). Mean workload in 2D was 1.5 and 1.3 times greater than in optimal 3D viewing, for navigation in space and suturing tasks respectively (p< .001). Mean workload associated with suboptimal 3D was 1.3 times greater than optimal 3D in both laparoscopic tasks (p< .001). There was no significant relationship between the magnitude of ghosting score, laparoscopic performance and workload. Our findings highlight the advantages of 3D displays when used optimally, and their shortcomings when used sub-optimally, on both laparoscopic performance and workload.

  15. The Vigilance Decrement in Executive Function Is Attenuated When Individual Chronotypes Perform at Their Optimal Time of Day

    PubMed Central

    Lara, Tania; Madrid, Juan Antonio; Correa, Ángel

    2014-01-01

    Time of day modulates our cognitive functions, especially those related to executive control, such as the ability to inhibit inappropriate responses. However, the impact of individual differences in time of day preferences (i.e. morning vs. evening chronotype) had not been considered by most studies. It was also unclear whether the vigilance decrement (impaired performance with time on task) depends on both time of day and chronotype. In this study, morning-type and evening-type participants performed a task measuring vigilance and response inhibition (the Sustained Attention to Response Task, SART) in morning and evening sessions. The results showed that the vigilance decrement in inhibitory performance was accentuated at non-optimal as compared to optimal times of day. In the morning-type group, inhibition performance decreased linearly with time on task only in the evening session, whereas in the morning session it remained more accurate and stable over time. In contrast, inhibition performance in the evening-type group showed a linear vigilance decrement in the morning session, whereas in the evening session the vigilance decrement was attenuated, following a quadratic trend. Our findings imply that the negative effects of time on task in executive control can be prevented by scheduling cognitive tasks at the optimal time of day according to specific circadian profiles of individuals. Therefore, time of day and chronotype influences should be considered in research and clinical studies as well as real-world situations demanding executive control for response inhibition. PMID:24586404

  16. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE PAGES

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
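
    As a rough sketch of the approach (not the actual implementation used on Titan), the snippet below evolves a task-to-node-slot permutation with a simple genetic algorithm, using tournament selection, order crossover and swap mutation to minimize a toy communication cost that weights message volume by hop distance. The communication matrix and node coordinates are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 32

# Toy inputs: pairwise communication volume between tasks, and hop distance
# between the node slots handed to us by the scheduler (random 3D coordinates).
comm = rng.integers(0, 10, size=(n_tasks, n_tasks))
comm = (comm + comm.T) // 2
coords = rng.integers(0, 8, size=(n_tasks, 3))
hops = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1)

def cost(perm):
    """Communication volume weighted by hop distance when task i runs on slot perm[i]."""
    return float((comm * hops[np.ix_(perm, perm)]).sum())

def order_crossover(p1, p2):
    """OX crossover: keep a slice of p1, fill the remaining positions in p2's order."""
    a, b = sorted(rng.choice(n_tasks, size=2, replace=False))
    child = -np.ones(n_tasks, dtype=int)
    child[a:b] = p1[a:b]
    taken = set(p1[a:b].tolist())
    child[child < 0] = [g for g in p2 if g not in taken]
    return child

def swap_mutate(perm, rate=0.3):
    perm = perm.copy()
    if rng.random() < rate:
        i, j = rng.choice(n_tasks, size=2, replace=False)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [rng.permutation(n_tasks) for _ in range(60)]
for _ in range(200):
    scores = np.array([cost(p) for p in pop])
    elite = pop[int(scores.argmin())]            # keep the best placement found so far

    def pick():                                  # binary tournament selection
        i, j = rng.choice(len(pop), size=2, replace=False)
        return pop[i] if scores[i] < scores[j] else pop[j]

    pop = [elite] + [swap_mutate(order_crossover(pick(), pick()))
                     for _ in range(len(pop) - 1)]

print("best placement cost:", cost(min(pop, key=cost)))
```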

  17. Identifying optimum performance trade-offs using a cognitively bounded rational analysis model of discretionary task interleaving.

    PubMed

    Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew

    2011-01-01

    We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed. Copyright © 2011 Cognitive Science Society, Inc.

  18. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of the cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
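
    A minimal sketch of the branched topology described above is given below in PyTorch; the layer sizes and class counts are invented for illustration and the model is untrained, so it only demonstrates shared early processing followed by separate speech and music heads:

```python
import torch
import torch.nn as nn

class BranchedAudioNet(nn.Module):
    """Shared early processing followed by separate speech and music pathways."""
    def __init__(self, n_words=500, n_genres=40):
        super().__init__()
        self.shared = nn.Sequential(               # early layers shared by both tasks
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        def head(n_out):                           # task-specific pathway
            return nn.Sequential(
                nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_out),
            )
        self.speech_head = head(n_words)           # word-recognition branch
        self.music_head = head(n_genres)           # genre-recognition branch

    def forward(self, cochleagram):
        z = self.shared(cochleagram)
        return self.speech_head(z), self.music_head(z)

# A dummy "cochleagram" batch just to check the output shapes.
speech_logits, music_logits = BranchedAudioNet()(torch.randn(2, 1, 128, 128))
print(speech_logits.shape, music_logits.shape)
```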

  19. Identification of Swallowing Tasks from a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    ERIC Educational Resources Information Center

    Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…

  20. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected as the target of the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm is simple and feasible, has strong optimization ability and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
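
    For background on the earliest-finish-time criterion mentioned above, the sketch below performs plain list scheduling on a toy heterogeneous DAG: tasks are taken in a fixed priority order and each is assigned to the processor yielding the earliest finish time, accounting for inter-processor communication. The priority order here is a simple topological order, not the CQPSO-derived ordering of the paper, and all costs are made up:

```python
import numpy as np

# Toy heterogeneous DAG: edges (task -> task) with communication costs, and the
# execution time of each task on each of 3 processors (illustrative numbers).
edges = {(0, 1): 4, (0, 2): 3, (1, 3): 2, (2, 3): 5}
exec_time = np.array([[3, 4, 5],
                      [6, 5, 4],
                      [4, 3, 6],
                      [5, 6, 4]], dtype=float)   # rows: tasks, cols: processors
n_tasks, n_procs = exec_time.shape
priority = [0, 1, 2, 3]                          # topological order (stand-in priority list)

proc_free = np.zeros(n_procs)                    # time each processor becomes free
finish = np.zeros(n_tasks)
placed = {}

for t in priority:
    best = (np.inf, None)
    for p in range(n_procs):
        # Data from a predecessor arrives instantly on the same processor,
        # otherwise after the edge's communication cost.
        ready = max([finish[u] + (0 if placed[u] == p else c)
                     for (u, v), c in edges.items() if v == t] or [0.0])
        eft = max(ready, proc_free[p]) + exec_time[t, p]
        if eft < best[0]:
            best = (eft, p)
    finish[t], placed[t] = best
    proc_free[best[1]] = best[0]

print("assignment:", placed, "makespan:", finish.max())
```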

  1. Optimization of medical imaging display systems: using the channelized Hotelling observer for detecting lung nodules: experimental study

    NASA Astrophysics Data System (ADS)

    Platisa, Ljiljana; Vansteenkiste, Ewout; Goossens, Bart; Marchessoux, Cédric; Kimpe, Tom; Philips, Wilfried

    2009-02-01

    Medical-imaging systems are designed to aid medical specialists in a specific task. Therefore, the physical parameters of a system need to be chosen to optimize the task performance of a human observer. This requires measurements of human performance in a given task during the system optimization, and psychophysical studies are typically conducted for this purpose. Numerical observer models have been successfully used to predict human performance in several detection tasks. In particular, the task of signal detection using a channelized Hotelling observer (CHO) in simulated images has been widely explored. However, few studies have been done for clinically acquired images that also contain anatomic noise. In this paper, we investigate the performance of a CHO in the task of detecting lung nodules in real radiographic images of the chest. To evaluate variability introduced by the limited available data, we employ a commonly used multi-reader multi-case (MRMC) study design, which accounts for both case and reader variability. Finally, we use the "one-shot" method to estimate the MRMC variance of the area under the ROC curve (AUC). The obtained AUC compares well to those reported for a human observer study on a similar data set. Furthermore, the "one-shot" analysis implies a fairly consistent performance of the CHO, with the variance of the AUC below 0.002. This indicates promising potential for numerical observers in the optimization of medical imaging displays and encourages further investigation on the subject.
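
    The Hotelling computation underlying the CHO can be written compactly: apply the channels, estimate the class means and the channel covariance, then form the template and detectability index. In the sketch below the images and channel matrix are random placeholders standing in for the chest ROIs and the channel set used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan, n_img = 64 * 64, 10, 200

# Placeholder channels and images (in practice: real chest ROIs and a structured
# channel set such as Gabor or Laguerre-Gauss channels).
U = rng.standard_normal((n_pix, n_chan))
signal = 0.05 * rng.standard_normal(n_pix)
absent = rng.standard_normal((n_img, n_pix))
present = rng.standard_normal((n_img, n_pix)) + signal

v0, v1 = absent @ U, present @ U                  # channel outputs, shape (n_img, n_chan)
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
delta = v1.mean(axis=0) - v0.mean(axis=0)
w = np.linalg.solve(S, delta)                     # Hotelling template in channel space
d_prime = np.sqrt(delta @ np.linalg.solve(S, delta))

scores0, scores1 = v0 @ w, v1 @ w                 # observer test statistics per image
auc = (scores1[:, None] > scores0[None, :]).mean()
print("CHO d' =", d_prime, " empirical AUC =", auc)
```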

  2. Turbine Performance Optimization Task Status

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Turner, James E. (Technical Monitor)

    2001-01-01

    Capability to optimize for turbine performance and accurately predict unsteady loads will allow for increased reliability, Isp, and thrust-to-weight. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.

  3. When more of the same is better

    NASA Astrophysics Data System (ADS)

    Fontanari, José F.

    2016-01-01

    Problem solving (e.g., drug design, traffic engineering, software development) by task forces represents a substantial portion of the economy of developed countries. Here we use an agent-based model of cooperative problem-solving systems to study the influence of diversity on the performance of a task force. We assume that agents cooperate by exchanging information on their partial success and use that information to imitate the most successful agent in the system (the model). The agents differ only in their propensities to copy the model. We find that, for easy tasks, the optimal organization is a homogeneous system composed of agents with the highest possible copy propensities. For difficult tasks, we find that diversity can prevent the system from being trapped in sub-optimal solutions. However, when the system size is adjusted to maximize performance, the homogeneous systems outperform the heterogeneous systems, i.e., for optimal performance, sameness should be preferred to diversity.

  4. Decision Making in Concurrent Multitasking: Do People Adapt to Task Interference?

    PubMed Central

    Nijboer, Menno; Taatgen, Niels A.; Brands, Annelies; Borst, Jelmer P.; van Rijn, Hedderik

    2013-01-01

    While multitasking has received a great deal of attention from researchers, we still know little about how well people adapt their behavior to multitasking demands. In three experiments, participants were presented with a multicolumn subtraction task, which required working memory in half of the trials. This primary task had to be combined with a secondary task requiring either working memory or visual attention, resulting in different types of interference. Before each trial, participants were asked to choose which secondary task they wanted to perform concurrently with the primary task. We predicted that if people seek to maximize performance or minimize effort required to perform the dual task, they choose task combinations that minimize interference. While performance data showed that the predicted optimal task combinations indeed resulted in minimal interference between tasks, the preferential choice data showed that a third of participants did not show any adaptation, and for the remainder it took a considerable number of trials before the optimal task combinations were chosen consistently. On the basis of these results we argue that, while in principle people are able to adapt their behavior according to multitasking demands, selection of the most efficient combination of strategies is not an automatic process. PMID:24244527

  5. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
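
    The sensory-plus-prior computation can be made concrete with a one-dimensional toy version of the task: combine a bimodal Gaussian-mixture prior (two modes with different variances) with a Gaussian likelihood for a noisy measurement and take the posterior mean as the Bayes-optimal estimate. All numbers below are placeholders; the actual task used target locations on a 2D touch screen:

```python
import numpy as np

x = np.linspace(-20, 20, 4001)                          # candidate target positions
dx = x[1] - x[0]

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Bimodal generative model: mixture of two Gaussians with different variances.
prior = 0.5 * gauss(x, -8.0, 1.5) + 0.5 * gauss(x, 8.0, 4.0)

measurement, sensory_sd = 5.0, 3.0                      # noisy sensory cue on one trial
likelihood = gauss(x, measurement, sensory_sd)          # likelihood of the cue per position

posterior = prior * likelihood
posterior /= posterior.sum() * dx                       # normalize on the grid

bayes_estimate = (x * posterior).sum() * dx             # posterior mean (optimal under squared error)
print("Bayes-optimal estimate:", round(bayes_estimate, 2))
```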

  6. Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.

    PubMed

    Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O

    2016-03-01

    An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
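
    The outer-loop design above reduces to a standard LQR problem. As a reminder of that machinery only, the sketch below solves a model-based continuous-time LQR via the algebraic Riccati equation for an illustrative double-integrator system; the paper itself avoids needing the model by solving the LQR problem with integral reinforcement learning, which is not shown here:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator dynamics (a stand-in for the prescribed impedance model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])       # penalize tracking error and velocity
R = np.array([[0.1]])          # penalize human/robot effort

P = solve_continuous_are(A, B, Q, R)       # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)            # optimal state-feedback gain, u = -K x

print("LQR gain K =", K)
```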

  7. CAMS as a tool for human factors research in spaceflight

    NASA Astrophysics Data System (ADS)

    Sauer, Juergen

    2004-01-01

    The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.

  8. PAVENET OS: A Compact Hard Real-Time Operating System for Precise Sampling in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki

    The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that the hybrid multithreading realizes compactness and low overheads comparable to those of TinyOS. The evaluation results show PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Siewerdsen, J; Stayman, J

    Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d' with an 8–48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.

  10. Filling the glass: Effects of a positive psychology intervention on executive task performance in chronic pain patients.

    PubMed

    Boselie, J J L M; Vancleef, L M G; Peters, M L

    2018-03-24

    Chronic pain is associated with emotional problems as well as difficulties in cognitive functioning. Prior experimental studies have shown that optimism, the tendency to expect that good things happen in the future, and positive emotions can counteract pain-induced task performance deficits in healthy participants. More specifically, induced optimism was found to buffer against the negative effects of experimental pain on executive functioning. This clinical experiment examined whether this beneficial effect can be extended to a chronic pain population. Patients (N = 122) were randomized to a positive psychology Internet-based intervention (PPI; n = 74) or a waiting list control condition (WLC; n = 48). The PPI consisted of positive psychology exercises that particularly target optimism, positive emotions and self-compassion. Results demonstrated that patients in the PPI condition scored higher on happiness, optimism, positive future expectancies, positive affect, self-compassion and ability to live a desired life despite pain, and scored lower on pain catastrophizing, depression and anxiety compared to patients in the WLC condition. However, executive task performance did not improve following completion of the PPI, compared to the WLC condition. Despite the lack of evidence that positive emotions and optimism can improve executive task performance in chronic pain patients, this study did convincingly demonstrate that it is possible to increase positive emotions and optimism in chronic pain patients with an online positive psychology intervention. It is imperative to further explore amendable psychological factors that may reduce the negative impact of pain on executive functioning. We demonstrated that an Internet-based positive psychology intervention strengthens optimism and positive emotions in chronic pain patients. These emotional improvements are not associated with improved executive task performance. As pain itself often cannot be relieved, it is imperative to have techniques to reduce the burden of living with chronic pain. © 2018 The Authors. European Journal of Pain published by John Wiley & Sons Ltd on behalf of European Pain Federation -EFIC®.

  11. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.

  12. Task-driven imaging in cone-beam computed tomography.

    PubMed

    Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H

    Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefits for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in views contributing the most to the signal power associated with the imaging task. For example, detectability of a line pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.

  13. Task constraints and minimization of muscle effort result in a small number of muscle synergies during gait.

    PubMed

    De Groote, Friedl; Jonkers, Ilse; Duysens, Jacques

    2014-01-01

    Finding muscle activity generating a given motion is a redundant problem, since there are many more muscles than degrees of freedom. The control strategies determining muscle recruitment from a redundant set are still poorly understood. One theory of motor control suggests that motion is produced through activating a small number of muscle synergies, i.e., muscle groups that are activated in a fixed ratio by a single input signal. Because of the reduced number of input signals, synergy-based control is low dimensional. But a major criticism of the theory of synergy-based control of muscles is that muscle synergies might reflect task constraints rather than a neural control strategy. Another theory of motor control suggests that muscles are recruited by optimizing performance. Optimization of performance has been widely used to calculate muscle recruitment underlying a given motion while assuming independent recruitment of muscles. If synergies indeed determine muscle recruitment underlying a given motion, optimization approaches that do not model synergy-based control could result in muscle activations that do not show the synergistic muscle action observed through electromyography (EMG). If, however, synergistic muscle action results from performance optimization and task constraints (joint kinematics and external forces), such optimization approaches are expected to result in low-dimensional synergistic muscle activations that are similar to EMG-based synergies. We calculated muscle recruitment underlying experimentally measured gait patterns by optimizing performance assuming independent recruitment of muscles. We found that the muscle activations calculated without any reference to synergies can be accurately explained by on average four synergies. These synergies are similar to EMG-based synergies. We therefore conclude that task constraints and performance optimization explain synergistic muscle recruitment from a redundant set of muscles.
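
    Synergies are commonly extracted from EMG or computed activations with non-negative matrix factorization; the paper compares optimization-based activations to EMG-based synergies rather than prescribing a particular factorization pipeline, so the sketch below is only a generic illustration of that decomposition on synthetic data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Stand-in activation matrix: 16 muscles x 200 time samples of one gait cycle,
# built from 4 underlying synergies so the factorization has something to find.
true_W = rng.random((16, 4))
true_H = np.abs(np.sin(np.linspace(0, np.pi, 200))[None, :] * rng.random((4, 1)))
activations = true_W @ true_H + 0.01 * rng.random((16, 200))

nmf = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
W = nmf.fit_transform(activations)   # muscle weightings of each synergy (16 x 4)
H = nmf.components_                  # synergy activation profiles over time (4 x 200)

reconstruction = W @ H
vaf = 1 - np.sum((activations - reconstruction) ** 2) / np.sum(activations ** 2)
print("variance accounted for by 4 synergies:", round(vaf, 3))
```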

  14. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm reduces cloud computing task execution time and user cost, achieving effective optimal scheduling of cloud computing tasks.
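
    The abstract gives no pseudocode; the sketch below is a generic DE/rand/1/bin loop applied to a task-to-VM assignment problem with makespan as the fitness, written only to illustrate the basic mutation/crossover/selection cycle. The continuous-key encoding, population size, and F/CR settings are assumptions, not the paper's improved variant:

```python
import numpy as np

def de_schedule(exec_time, n_iters=200, pop_size=30, F=0.5, CR=0.9, seed=0):
    """exec_time[i, j]: runtime of task i on VM j. Minimizes makespan."""
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = exec_time.shape

    def makespan(x):
        vm = np.clip(x.astype(int), 0, n_vms - 1)            # decode continuous genes to VM indices
        loads = np.zeros(n_vms)
        np.add.at(loads, vm, exec_time[np.arange(n_tasks), vm])
        return loads.max()

    pop = rng.uniform(0, n_vms, size=(pop_size, n_tasks))
    fitness = np.array([makespan(ind) for ind in pop])
    for _ in range(n_iters):
        for i in range(pop_size):
            a, b, c = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])           # DE/rand/1 mutation
            cross = rng.random(n_tasks) < CR
            cross[rng.integers(n_tasks)] = True               # guarantee at least one gene crosses over
            trial = np.clip(np.where(cross, mutant, pop[i]), 0, n_vms - 1e-9)
            f = makespan(trial)
            if f <= fitness[i]:                               # greedy selection
                pop[i], fitness[i] = trial, f
    return pop[fitness.argmin()].astype(int), fitness.min()

# Example: 20 tasks, 4 VMs with random execution times.
assignment, best = de_schedule(np.random.default_rng(1).uniform(1, 10, (20, 4)))
print(best)
```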

  15. Impedance learning for robotic contact tasks using natural actor-critic algorithm.

    PubMed

    Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul

    2010-04-01

    Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) techno-economic studies that will supplement those that are presently being carried out by MITRE; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 1, 3, 4, and 5.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalyst developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 2 through 5. 232 figs., 19 tabs.

  18. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
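
    For reference, the reward-rate criterion from which these threshold and starting-point predictions derive is commonly written (following Bogacz et al., 2006; the exact delay and penalty terms vary with task design) as:

$$
\mathrm{RR} \;=\; \frac{1-\mathrm{ER}}{\mathrm{DT} + T_{0} + D_{\mathrm{RSI}} + \mathrm{ER}\cdot D_{\mathrm{pen}}}
$$

    where ER is the error rate, DT the mean decision time, T_0 the non-decision latency, D_RSI the response-stimulus interval, and D_pen any additional post-error delay. Maximizing RR over the DDM threshold (and over the starting point when stimulus probabilities or reward magnitudes are asymmetric) yields the optimal speed-accuracy tradeoffs tested in this study.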

  19. Optimizing the number of steps in learning tasks for complex skills.

    PubMed

    Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G

    2005-06-01

    Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.

  20. Prediction of pilot opinion ratings using an optimal pilot model. [of aircraft handling qualities in multiaxis tasks

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1977-01-01

    A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.

  1. Video game practice optimizes executive control skills in dual-task and task switching situations.

    PubMed

    Strobach, Tilo; Frensch, Peter A; Schubert, Torsten

    2012-05-01

    We examined the relation between action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real-life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). For two types of experimental paradigms, dual-task and task switching, we obtained performance advantages for experienced video gamers compared with non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice, when compared to non-gamers with practice on a puzzle game, clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration

    PubMed Central

    Cole, Michael W.

    2016-01-01

    The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904

  3. Sensory modality, temperament, and the development of sustained attention: a vigilance study in children and adults.

    PubMed

    Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah

    2007-05-01

    Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory. Copyright (c) 2007 APA, all rights reserved.

  4. A control-theory model for human decision-making

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Tanner, R. B.

    1971-01-01

    A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.

  5. Efficient probabilistic inference in generic neural networks trained with non-probabilistic feedback.

    PubMed

    Orhan, A Emin; Ma, Wei Ji

    2017-07-26

    Animals perform near-optimal probabilistic inference in a wide range of psychophysical tasks. Probabilistic inference requires trial-to-trial representation of the uncertainties associated with task variables and subsequent use of this representation. Previous work has implemented such computations using neural networks with hand-crafted and task-dependent operations. We show that generic neural networks trained with a simple error-based learning rule perform near-optimal probabilistic inference in nine common psychophysical tasks. In a probabilistic categorization task, error-based learning in a generic network simultaneously explains a monkey's learning curve and the evolution of qualitative aspects of its choice behavior. In all tasks, the number of neurons required for a given level of performance grows sublinearly with the input population size, a substantial improvement on previous implementations of probabilistic inference. The trained networks develop a novel sparsity-based probabilistic population code. Our results suggest that probabilistic inference emerges naturally in generic neural networks trained with error-based learning rules. Behavioural tasks often require probability distributions to be inferred about task-specific variables. Here, the authors demonstrate that generic neural networks can be trained using a simple error-based learning rule to perform such probabilistic computations efficiently without any need for task-specific operations.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Accomplishments are reported for Tasks 2 through 5.

  7. Cortical membrane potential signature of optimal states for sensory signal detection

    PubMed Central

    McGinley, Matthew J.; David, Stephen V.; McCormick, David A.

    2015-01-01

    The neural correlates of optimal states for signal detection task performance are largely unknown. One hypothesis holds that optimal states exhibit tonically depolarized cortical neurons with enhanced spiking activity, such as occur during movement. We recorded membrane potentials of auditory cortical neurons in mice trained on a challenging tone-in-noise detection task while assessing arousal with simultaneous pupillometry and hippocampal recordings. Arousal measures accurately predicted multiple modes of membrane potential activity, including: rhythmic slow oscillations at low arousal, stable hyperpolarization at intermediate arousal, and depolarization during phasic or tonic periods of hyper-arousal. Walking always occurred during hyper-arousal. Optimal signal detection behavior and sound-evoked responses, at both sub-threshold and spiking levels, occurred at intermediate arousal when pre-decision membrane potentials were stably hyperpolarized. These results reveal a cortical physiological signature of the classically-observed inverted-U relationship between task performance and arousal, and that optimal detection exhibits enhanced sensory-evoked responses and reduced background synaptic activity. PMID:26074005

  8. Affective and cognitive decision-making in adolescents.

    PubMed

    van Duijvenvoorde, Anna C K; Jansen, Brenda R J; Visser, Ingmar; Huizenga, Hilde M

    2010-01-01

    Adolescents demonstrate impaired decision-making in emotionally arousing situations, yet they appear to exhibit relatively mature decision-making skills in predominantly cognitive, low-arousal situations. In this study we compared adolescents' (13-15 years) performance on matched affective and cognitive decision-making tasks, in order to determine (1) their performance level on each task and (2) whether performance on the cognitive task was associated with performance on the affective task. Both tasks required a comparison of choice dimensions characterized by frequency of loss, amount of loss, and constant gain. Results indicated that in the affective task, adolescents performed sub-optimally by considering only the frequency of loss, whereas in the cognitive task adolescents used relatively mature decision rules by considering two or all three choice dimensions. Performance on the affective task was not related to performance on the cognitive task. These results are discussed in light of neural developmental trajectories observed in adolescence.

  9. Use of EEG workload indices for diagnostic monitoring of vigilance decrement.

    PubMed

    Kamzanova, Altyngul T; Kustubayeva, Almira M; Matthews, Gerald

    2014-09-01

    A study was run to test which of five electroencephalographic (EEG) indices was most diagnostic of loss of vigilance at two levels of workload. EEG indices of alertness include conventional spectral power measures as well as indices combining measures from multiple frequency bands, such as the Task Load Index (TLI) and the Engagement Index (EI). However, it is unclear which indices are optimal for early detection of loss of vigilance. Ninety-two participants were assigned to one of two experimental conditions, cued (lower workload) and uncued (higher workload), and then performed a 40-min visual vigilance task. Performance on this task is believed to be limited by attentional resource availability. EEG was recorded continuously. Performance, subjective state, and workload were also assessed. The task showed a vigilance decrement in performance; cuing improved performance and reduced subjective workload. Lower-frequency alpha (8 to 10.9 Hz) and TLI were most sensitive to the task parameters. The magnitude of temporal change was larger for lower-frequency alpha. Surprisingly, higher TLI was associated with superior performance. Frontal theta and EI were influenced by task workload only in the final period of work. Correlational data also suggested that the indices are distinct from one another. Lower-frequency alpha appears to be the optimal index for monitoring vigilance on the task used here, but further work is needed to test how diagnosticity of EEG indices varies with task demands. Lower-frequency alpha may be used to diagnose loss of operator alertness on tasks requiring vigilance.
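
    For reference, the combined indices named above are usually computed from band powers; a minimal sketch under the commonly cited definitions (TLI as frontal theta over parietal alpha, EI as beta over alpha plus theta) is given below. The channel choices, band edges, and sampling rate are assumptions, since the abstract does not specify them:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power of a 1-D signal x in the [lo, hi] Hz band via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def workload_indices(frontal, parietal, fs=256.0):
    """frontal/parietal: 1-D EEG traces (e.g., Fz and Pz). Returns (TLI, EI)-style indices."""
    theta_f = band_power(frontal, fs, 4.0, 7.9)
    alpha_p = band_power(parietal, fs, 8.0, 12.9)
    alpha_f = band_power(frontal, fs, 8.0, 12.9)
    beta_f = band_power(frontal, fs, 13.0, 30.0)
    tli = theta_f / alpha_p            # Task Load Index: frontal theta over parietal alpha
    ei = beta_f / (alpha_f + theta_f)  # Engagement Index: beta over (alpha + theta)
    return tli, ei

# Example on synthetic data (one minute at 256 Hz).
rng = np.random.default_rng(0)
print(workload_indices(rng.standard_normal(256 * 60), rng.standard_normal(256 * 60)))
```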

  10. Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.

    PubMed

    Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K

    2016-08-01

    The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
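
    The stochastic optimization referred to above is not given in code here; a minimal global-best particle swarm loop of the general kind used for such design searches is sketched below. The cost function (a stand-in for, e.g., peak cable tension over the task trajectory), the bounds, and the hyperparameters are illustrative assumptions:

```python
import numpy as np

def pso(cost, bounds, n_particles=40, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize cost(x) over box bounds [(lo, hi), ...] with a standard global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: recover the minimum of a shifted quadratic standing in for a tension-based cost.
best_x, best_f = pso(lambda p: np.sum((p - 0.3) ** 2), bounds=[(-1, 1)] * 6)
print(best_x.round(3), round(best_f, 6))
```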

  11. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations, with performance (8–23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and directly used AR model parameters. An optimal MPC has significant implications for high-performance BCI applications. PMID:21267657

  12. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, nonmodel-based filter applications. The parameters in designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive model that has well-established applications in BCI task discriminations. It was shown that the parameters generated for the controller design can as well be used for motor imagery task discriminations, with performance (8-23% task discrimination errors) comparable to the discrimination performance of commonly used features such as frequency-specific band powers and directly used AR model parameters. An optimal MPC has significant implications for high-performance BCI applications.
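
    As background for the AR-model features mentioned in both records above, a minimal sketch that fits per-channel autoregressive coefficients by least squares and concatenates them into a feature vector is given below; the model order and data shapes are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def ar_coefficients(x, order=6):
    """Least-squares fit of an AR(order) model to a 1-D signal; returns the coefficient vector."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_features(trial, order=6):
    """trial: (n_channels, n_samples) EEG segment -> concatenated AR coefficients per channel."""
    return np.concatenate([ar_coefficients(ch, order) for ch in trial])

# Example: features for a 4-channel, 2-second segment sampled at 250 Hz.
rng = np.random.default_rng(0)
print(ar_features(rng.standard_normal((4, 500))).shape)   # (4 * order,) = (24,)
```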

  13. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Stayman, J; Ouadah, S

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm where tube current was updated analytically, followed by a gradient-based optimization of reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)

  14. Enhancing motor performance improvement by personalizing non-invasive cortical stimulation with concurrent functional near-infrared spectroscopy and multi-modal motor measurements

    NASA Astrophysics Data System (ADS)

    Khan, Bilal; Hodics, Timea; Hervey, Nathan; Kondraske, George; Stowe, Ann; Alexandrakis, George

    2015-03-01

    Transcranial direct current stimulation (tDCS) is a non-invasive cortical stimulation technique that can facilitate task specific plasticity that can improve motor performance. Current tDCS interventions uniformly apply a chosen electrode montage to a subject population without personalizing electrode placement for optimal motor gains. We propose a novel perturbation tDCS (ptDCS) paradigm for determining a personalized electrode montage in which tDCS intervention yields maximal motor performance improvements during stimulation. PtDCS was applied to ten healthy adults and five stroke patients with upper hemiparesis as they performed an isometric wrist flexion task with their non-dominant arm. Simultaneous recordings of torque applied to a stationary handle, muscle activity by electromyography (EMG), and cortical activity by functional near-infrared spectroscopy (fNIRS) during ptDCS helped interpret how cortical activity perturbations by any given electrode montage related to changes in muscle activity and task performance quantified by a Reaction Time (RT) X Error product. PtDCS enabled quantifying the effect on task performance of 20 different electrode pair montages placed over the sensorimotor cortex. Interestingly, the electrode montage maximizing performance in all healthy adults did not match any of the ones being explored in current literature as a means of improving the motor performance of stroke patients. Furthermore, the optimal montage was found to be different in each stroke patient and the resulting motor gains were very significant during stimulation. This study supports the notion that task-specific ptDCS optimization can lend itself to personalizing the rehabilitation of patients with brain injury.

  15. Using Performance Task Data to Improve Instruction

    ERIC Educational Resources Information Center

    Abbott, Amy L.; Wren, Douglas G.

    2016-01-01

    Two well-accepted ideas among educators are (a) performance assessment is an effective means of assessing higher-order thinking skills and (b) data-driven instruction planning is a valuable tool for optimizing student learning. This article describes a locally developed performance task (LDPT) designed to measure critical thinking, problem…

  16. Preliminary Work for Examining the Scalability of Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Clouse, Jeff

    1998-01-01

    Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsonn, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state-spaces. There has been a little theoretical work that proves that convergence to optimal solutions can be obtained when using generalization structures, but the structures are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
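
    For readers unfamiliar with the table-lookup setting described above, a minimal tabular Q-learning loop on a toy chain task is sketched below; the environment, reward, and hyperparameters are illustrative assumptions and have nothing to do with the OTHELLO testbed itself:

```python
import numpy as np

def q_learning(n_states=10, n_actions=2, episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain: move left/right, reward 1 at the right end."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(50):                                  # cap episode length
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # One-step Q-learning update toward the greedy bootstrap target.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if r == 1.0:
                break
    return Q

Q = q_learning()
print(Q.argmax(axis=1))   # greedy policy: prefers action 1 (move right) in the non-terminal states
```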

  17. A novel channel selection method for optimal classification in different motor imagery BCI paradigms.

    PubMed

    Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin

    2015-10-21

    For sensorimotor rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI task paradigms with real-time feedback (two-class control and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment, a two-class control experiment, and a four-class control experiment, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed based on the Relief algorithm but enhanced in two respects: a changed target-sample selection strategy and the adoption of iterative computation, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of performance improvement among using optimal channels, all the channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results show that the numbers of optimal channels in the three different motor imagery BCI paradigms are distinct. From an MI task paradigm, to a two-class control paradigm, and to a four-class control paradigm, the number of channels required to optimize classification accuracy increased. These findings may provide useful information to optimize EEG-based BCI systems and further improve the performance of noninvasive BCI.
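
    IterRelCen itself is not specified in detail in this abstract; for orientation, a minimal version of the basic two-class Relief weighting it builds on might look as follows (the Manhattan distance, uniform sampling, and synthetic example are all illustrative assumptions):

```python
import numpy as np

def relief_weights(X, y, n_samples=200, seed=0):
    """Basic two-class Relief: reward features that separate classes, penalize ones that do not.
    X: (n_instances, n_features), ideally scaled to comparable ranges; y: binary labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_samples):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)           # Manhattan distance to every instance
        dists[i] = np.inf                               # exclude the instance itself
        same = y == y[i]
        hit = np.where(same, dists, np.inf).argmin()    # nearest neighbour of the same class
        miss = np.where(~same, dists, np.inf).argmin()  # nearest neighbour of the other class
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_samples
    return w

# Example: feature 0 carries class information, feature 1 is noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = np.column_stack([y + 0.1 * rng.standard_normal(300), rng.standard_normal(300)])
print(relief_weights(X, y))   # the weight of feature 0 should clearly exceed that of feature 1
```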

  18. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
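
    The normative weighting scheme referred to above, and the modification the abstract argues is needed for categorical dimensions, can be summarized as follows (the notation is ours, not the paper's):

$$
\hat{s} \;=\; w_A s_A + w_V s_V,\qquad w_A \;=\; \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}},\qquad w_V \;=\; 1 - w_A
$$

    For a categorical task, each sensory variance is replaced by an effective variance that also includes the within-category (environmental) variability of the task-relevant category, e.g. an effective variance of the form sigma_sensory^2 + sigma_category^2, so that a cue pointing to a broad category is down-weighted even when its sensory noise is low.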

  19. Telemanipulator design and optimization software

    NASA Astrophysics Data System (ADS)

    Cote, Jean; Pelletier, Michel

    1995-12-01

    For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only has to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually found empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better-designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.

  20. Optimal control of a hybrid rhythmic-discrete task: the bouncing ball revisited.

    PubMed

    Ronsse, Renaud; Wei, Kunlin; Sternad, Dagmar

    2010-05-01

    Rhythmically bouncing a ball with a racket is a hybrid task that combines continuous rhythmic actuation of the racket with the control of discrete impact events between racket and ball. This study presents experimental data and a two-layered modeling framework that explicitly addresses the hybrid nature of control: a first discrete layer calculates the state to reach at impact and the second continuous layer smoothly drives the racket to this desired state, based on optimality principles. The testbed for this hybrid model is task performance at a range of increasingly slower tempos. When slowing the rhythm of the bouncing actions, the continuous cycles become separated into a sequence of discrete movements interspersed by dwell times and directed to achieve the desired impact. Analyses of human performance show increasing variability of performance measures with slower tempi, associated with a change in racket trajectories from approximately sinusoidal to less symmetrical velocity profiles. Matching results of model simulations give support to a hybrid control model based on optimality, and therefore suggest that optimality principles are applicable to the sensorimotor control of complex movements such as ball bouncing.

  1. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    PubMed

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems. It is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks, because cooperative tasks introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
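
    For concreteness, permutation-encoded versions of the four mutation operators named above might look as follows; the plain task-sequence encoding is an illustrative assumption, since the paper's chromosomes also carry the robot assignments:

```python
import random

def swap(tour):
    """Exchange two randomly chosen positions."""
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def inversion(tour):
    """Reverse the segment between two randomly chosen cut points."""
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

def insertion(tour):
    """Remove one element and reinsert it at a random position."""
    t = tour[:]
    gene = t.pop(random.randrange(len(t)))
    t.insert(random.randrange(len(t) + 1), gene)
    return t

def displacement(tour):
    """Cut out a random sub-sequence and reinsert it elsewhere."""
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    segment, rest = t[i:j + 1], t[:i] + t[j + 1:]
    k = random.randrange(len(rest) + 1)
    return rest[:k] + segment + rest[k:]

print(inversion(list(range(10))))
```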

  2. Earth resources data analysis program, phase 3

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Also, work was performed on optimal feature extraction and optimal feature extraction for proportion estimation.

  3. Use of the Occupational Therapy Task-Oriented Approach to optimize the motor performance of a client with cognitive limitations.

    PubMed

    Preissner, Katharine

    2010-01-01

    This case report describes the use of the Occupational Therapy Task-Oriented Approach with a client with occupational performance limitations after a cerebral vascular accident. The Occupational Therapy Task-Oriented Approach is often suggested as a preferred neurorehabilitation intervention to improve occupational performance by optimizing motor behavior. One common critique of this approach, however, is that it may seem inappropriate or have limited application for clients with cognitive deficits. This case report demonstrates how an occupational therapist working in an inpatient rehabilitation setting used the occupational therapy task-oriented evaluation framework and treatment principles described by Mathiowetz (2004) with a person with significant cognitive limitations. This approach was effective in assisting the client in meeting her long-term goals, maximizing her participation in meaningful occupations, and successfully transitioning to home with her daughter.

  4. Working-memory load and temporal myopia in dynamic decision making.

    PubMed

    Worthy, Darrell A; Otto, A Ross; Maddox, W Todd

    2012-11-01

    We examined the role of working memory (WM) in dynamic decision making by having participants perform decision-making tasks under single-task or dual-task conditions. In 2 experiments participants performed dynamic decision-making tasks in which they chose 1 of 2 options on each trial. The decreasing option always gave a larger immediate reward but caused future rewards for both options to decrease. The increasing option always gave a smaller immediate reward but caused future rewards for both options to increase. In each experiment we manipulated the reward structure such that the decreasing option was the optimal choice in 1 condition and the increasing option was the optimal choice in the other condition. Behavioral results indicated that dual-task participants selected the immediately rewarding decreasing option more often, and single-task participants selected the increasing option more often, regardless of which option was optimal. Thus, dual-task participants performed worse on 1 type of task but better on the other type. Modeling results showed that single-task participants' data were most often best fit by a win-stay, lose-shift (WSLS) rule-based model that tracked differences across trials, and dual-task participants' data were most often best fit by a Softmax reinforcement learning model that tracked recency-weighted average rewards for each option. This suggests that manipulating WM load affects the degree to which participants focus on the immediate versus delayed consequences of their actions and whether they employ a rule-based WSLS strategy, but it does not necessarily affect how well people weigh the immediate versus delayed benefits when determining the long-term utility of each option.

  5. A Comparison of Heuristic and Human Performance on Open Versions of the Traveling Salesperson Problem

    ERIC Educational Resources Information Center

    MacGregor, James N.; Chronicle, Edward P.; Ormerod, Thomas C.

    2006-01-01

    We compared the performance of three heuristics with that of subjects on variants of a well-known combinatorial optimization task, the Traveling Salesperson Problem (TSP). The present task consisted of finding the shortest path through an array of points from one side of the array to the other. Like the standard TSP, the task is computationally…
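
    As a concrete illustration of the kind of constructive path heuristic such comparisons rely on (not necessarily one of the three heuristics evaluated in the paper), a nearest-neighbour rule for building an open path through a set of points is sketched below:

```python
import math

def nearest_neighbour_path(points, start=0):
    """Greedy open path: repeatedly move to the closest unvisited point."""
    unvisited = set(range(len(points)))
    path = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[path[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        path.append(nxt)
        unvisited.remove(nxt)
    return path

pts = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]
print(nearest_neighbour_path(pts))
```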

  6. An optimal control model approach to the design of compensators for simulator delay

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Caglayan, A.

    1982-01-01

    The effects of display delay on pilot performance and workload, and the design of filters to ameliorate these effects, were investigated. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects, even in multi-input, multi-output problems.

  7. To branch out or stay focused? Affective shifts differentially predict organizational citizenship behavior and task performance.

    PubMed

    Yang, Liu-Qin; Simon, Lauren S; Wang, Lei; Zheng, Xiaoming

    2016-06-01

    We draw from personality systems interaction (PSI) theory (Kuhl, 2000) and regulatory focus theory (Higgins, 1997) to examine how dynamic positive and negative affective processes interact to predict both task and contextual performance. Using a twice-daily diary design over the course of a 3-week period, results from multilevel regression analysis revealed that distinct patterns of change in positive and negative affect optimally predicted contextual and task performance among a sample of 71 employees at a medium-sized technology company. Specifically, within persons, increases (upshifts) in positive affect over the course of a workday better predicted the subsequent day's organizational citizenship behavior (OCB) when such increases were coupled with decreases (downshifts) in negative affect. The optimal pattern of change in positive and negative affect differed, however, in predicting task performance. That is, upshifts in positive affect over the course of the workday better predicted the subsequent day's task performance when such upshifts were accompanied by upshifts in negative affect. The contribution of our findings to PSI theory and the broader affective and motivation regulation literatures, along with practical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. The effect of haptic guidance and visual feedback on learning a complex tennis task.

    PubMed

    Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert

    2013-11-01

    While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback condition, visual or haptic guidance, optimizes learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task, learning to start a tennis stroke, and (2) a tracking task, learning to follow a velocity profile. The effect that task difficulty and a subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and for especially difficult discrete tasks, while visual feedback seems to benefit more-skilled subjects. Additionally, haptic guidance seemed to promote learning in a time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and the subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.

  9. Statistical model based iterative reconstruction in clinical CT systems. Part III. Task-based kV/mAs optimization for radiation dose reduction

    PubMed Central

    Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-01-01

    Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem to minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value of d℞′ for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination to achieve the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700) with a sampling interval of 50 mA at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d℞′=16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. Both an in vivo animal and human subject study were used as demonstrations of how the developed framework can be applied to the clinical work flow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the use of d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR map were significantly different in MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems such as those using the MBIR method. 
As expected, the developed framework can be reduced to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
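
    Once Dose(kV,mAs) and d′(kV,mAs) have been measured on a grid, the selection rule described in the Methods reduces to a constrained lookup: keep only the kV–mAs combinations whose detectability meets the prescription and take the one with the lowest dose. A minimal sketch of that step is shown below; the dose and d′ grids are random placeholders standing in for the phantom measurements described in the abstract.

```python
# Illustrative sketch (not the authors' code): pick the lowest-dose kV/mA
# combination that still meets a prescribed detectability d'_Rx, given measured
# Dose(kV, mA) and d'(kV, mA) grids.
import numpy as np

kv_levels = np.array([80, 100, 120, 140])                    # tube potentials (kV)
ma_levels = np.concatenate(([25], np.arange(50, 701, 50)))   # mA at 0.5 s rotation

# Hypothetical measurement grids, shape (n_kv, n_ma); in practice these come
# from phantom measurements of CTDIvol and task-based detectability.
rng = np.random.default_rng(0)
dose = rng.uniform(0.6, 70, size=(len(kv_levels), len(ma_levels)))
dprime = rng.uniform(0, 30, size=(len(kv_levels), len(ma_levels)))

d_rx = 16.0                                 # prescribed detectability
feasible = dprime >= d_rx                   # constraint d'(kV, mA) >= d'_Rx
dose_masked = np.where(feasible, dose, np.inf)   # infeasible settings excluded
i, j = np.unravel_index(np.argmin(dose_masked), dose.shape)
print(f"optimal setting: {kv_levels[i]} kV, {ma_levels[j]} mA, "
      f"dose = {dose[i, j]:.1f} mGy, d' = {dprime[i, j]:.1f}")
```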

  10. A Report on Applying EEGnet to Discriminate Human State Effects on Task Performance

    DTIC Science & Technology

    2018-01-01

    whether we could identify what task the participant was performing from differences in the recorded brain time series. We modeled the relationship... between input data (brain time series) and output labels (task A and task B) as an unknown function, and we found an optimal approximation of that...

  11. In Search of the Optimal Path: How Learners at Task Use an Online Dictionary

    ERIC Educational Resources Information Center

    Hamel, Marie-Josee

    2012-01-01

    We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…

  12. Solving the optimal attention allocation problem in manual control

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1976-01-01

    Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. The human modeling techniques are applied to study the hover control task for a CH-46 VTOL aircraft flight-tested by NASA.
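
    As a rough illustration of gradient-based attention allocation, the sketch below minimizes a simple surrogate cost (observation noise on each display inflated by the reciprocal of the attention fraction) over the simplex of attention fractions. The surrogate cost and its weights are assumptions for illustration only, not Kleinman's closed-loop performance metric.

```python
# Minimal sketch, assuming a surrogate cost J(f) = sum_i w_i / f_i for attention
# fractions f_i summing to 1 (attention to indicator i reduces its effective
# observation noise). Gradient-based constrained optimization via SLSQP.
import numpy as np
from scipy.optimize import minimize

w = np.array([4.0, 2.0, 1.0, 0.5])           # hypothetical task weights per display

def cost(f):
    return np.sum(w / f)

def grad(f):
    return -w / f**2                          # analytic gradient of the surrogate

n = len(w)
res = minimize(cost, x0=np.full(n, 1.0 / n), jac=grad, method="SLSQP",
               bounds=[(1e-3, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0}])
print("optimal attention fractions:", np.round(res.x, 3))
# For this surrogate the optimum follows a square-root law: f_i proportional to sqrt(w_i).
```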

  13. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
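
    A much-simplified sketch of the maxi-min idea follows: a fluence profile built from Gaussian basis functions is optimized so that the worst-case detectability over a set of sample locations is maximized. The detectability model here is a crude placeholder, and SciPy's differential evolution stands in for the CMA-ES optimizer used in the paper.

```python
# Rough sketch of the maxi-min FFM idea (not the authors' implementation).
import numpy as np
from scipy.optimize import differential_evolution

views = np.linspace(0, np.pi, 60)            # projection view angles
centers = np.linspace(0, np.pi, 5)           # Gaussian basis centers over views
locations = np.linspace(-1, 1, 9)            # sample locations in the volume

def fluence(coeffs):
    # FFM profile over views as a nonnegative sum of Gaussian bumps
    basis = np.exp(-0.5 * ((views[:, None] - centers[None, :]) / 0.4) ** 2)
    return np.clip(basis @ coeffs, 1e-3, None)

def detectability(coeffs):
    # Placeholder d' model: each location "sees" views with different attenuation,
    # so more fluence through its informative views helps more.
    f = fluence(coeffs)
    atten = np.exp(-np.abs(np.sin(views)[:, None] * locations[None, :]))
    return (f[:, None] * atten).sum(axis=0) / f.sum()

def neg_min_dprime(coeffs):
    return -detectability(coeffs).min()      # maxi-min objective (negated for a minimizer)

res = differential_evolution(neg_min_dprime, bounds=[(0.0, 1.0)] * len(centers),
                             seed=0, maxiter=50)
print("minimum d' across locations:", -res.fun)
```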

  14. How work-self conflict/facilitation influences exhaustion and task performance: A three-wave study on the role of personal resources.

    PubMed

    Demerouti, Evangelia; Sanz-Vergel, Ana Isabel; Petrou, Paraskevas; van den Heuvel, Machteld

    2016-10-01

    Although work and family are undoubtedly important life domains, individuals are also active in other life roles which are also important to them (like pursuing personal interests). Building on identity theory and the resource perspective on work-home interface, we examined whether there is an indirect effect of work-self conflict/facilitation on exhaustion and task performance over time through personal resources (i.e., self-efficacy and optimism). The sample was composed of 368 Dutch police officers. Results of the 3-wave longitudinal study confirmed that work-self conflict was related to lower levels of self-efficacy, whereas work-self facilitation was related to improved optimism over time. In turn, self-efficacy was related to higher task performance, whereas optimism was related to diminished levels of exhaustion over time. Further analysis supported the negative, indirect effect of work-self facilitation on exhaustion through optimism over time, and only a few reversed causal effects emerged. The study contributes to the literature on interrole management by showing the role of personal resources in the process of conflict or facilitation over time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance-enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s² plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  16. Identification of Swallowing Tasks From a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    PubMed Central

    Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results: Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions: The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846

  17. Identification of Swallowing Tasks From a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment.

    PubMed

    Hazelwood, R Jordan; Armeson, Kent E; Hill, Elizabeth G; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-07-12

    The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS.

  18. Young children do not succeed in choice tasks that imply evaluating chances.

    PubMed

    Girotto, Vittorio; Fontanari, Laura; Gonzalez, Michel; Vallortigara, Giorgio; Blaye, Agnès

    2016-07-01

    Preverbal infants manifest probabilistic intuitions in their reactions to the outcomes of simple physical processes and in their choices. Their ability conflicts with the evidence that, before the age of about 5 years, children's verbal judgments do not reveal probability understanding. To assess these conflicting results, three studies tested 3-5-year-olds on choice tasks on which infants perform successfully. The results showed that children of all age groups made optimal choices in tasks that did not require forming probabilistic expectations. In probabilistic tasks, however, only 5-year-olds made optimal choices. Younger children performed at random and/or were guided by superficial heuristics. These results suggest caution in interpreting infants' ability to evaluate chance, and indicate that the development of this ability may not follow a linear trajectory. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. How age affects memory task performance in clinically normal hearing persons.

    PubMed

    Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-05-01

    The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment (MoCA)). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.

  20. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Using a power-transformation-based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
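
    To make the optimized criterion concrete, the sketch below evaluates kernel target alignment, the Frobenius-normalized inner product between the Gaussian kernel matrix and the ideal target kernel yy^T, over a simple grid of kernel widths on synthetic two-class data. The grid search is only for illustration; the paper's contribution is the global d.c./SSGO solution of this problem.

```python
# Small illustrative sketch (assumed, not the paper's algorithm): kernel target
# alignment KTA(K, y) = <K, y y^T>_F / (||K||_F * ||y y^T||_F), evaluated for a
# few Gaussian kernel widths sigma on synthetic two-class data.
import numpy as np

def gaussian_kernel(X, sigma):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))

def kernel_target_alignment(K, y):
    Y = np.outer(y, y)                                  # ideal target kernel y y^T
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(+1, 1, (30, 2))])
y = np.r_[-np.ones(30), np.ones(30)]

for sigma in [0.1, 0.5, 1.0, 2.0, 5.0]:
    K = gaussian_kernel(X, sigma)
    print(f"sigma={sigma:>4}: alignment={kernel_target_alignment(K, y):.3f}")
```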

  1. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    PubMed

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
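
    The speed-accuracy trade-off at issue can be illustrated with the standard drift-diffusion reward-rate expressions (e.g., Bogacz et al., 2006); the sketch below, which is not code from this study, sweeps the decision threshold and reports the value that maximizes reward rate for assumed task parameters.

```python
# Hedged illustration using textbook drift-diffusion formulas (assumed parameters).
import numpy as np

def reward_rate(z, a=0.1, c=0.1, t0=0.3, d_iti=1.0, d_penalty=0.0):
    """Reward rate for a symmetric drift-diffusion decision.
    z: threshold, a: drift, c: noise, t0: non-decision time,
    d_iti: inter-trial interval, d_penalty: extra delay after errors."""
    er = 1.0 / (1.0 + np.exp(2 * a * z / c**2))          # error rate
    dt = (z / a) * np.tanh(a * z / c**2)                  # mean decision time
    return (1 - er) / (dt + t0 + d_iti + er * d_penalty)

thresholds = np.linspace(0.01, 0.5, 200)
rr = reward_rate(thresholds)
best = thresholds[np.argmax(rr)]
er_best = 1.0 / (1.0 + np.exp(2 * 0.1 * best / 0.1**2))
print(f"reward-rate-optimal threshold ≈ {best:.3f}, accuracy there ≈ {1 - er_best:.3f}")
```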

  2. ASTROS: A multidisciplinary automated structural design tool

    NASA Technical Reports Server (NTRS)

    Neill, D. J.

    1989-01-01

    ASTROS (Automated Structural Optimization System) is a finite-element-based multidisciplinary structural optimization procedure developed under Air Force sponsorship to perform automated preliminary structural design. The design task is the determination of the structural sizes that provide an optimal structure while satisfying numerous constraints from many disciplines. In addition to its automated design features, ASTROS provides a general transient and frequency response capability, as well as a special feature to perform a transient analysis of a vehicle subjected to a nuclear blast. The motivation for the development of a single multidisciplinary design tool is that such a tool can provide improved structural designs in less time than is currently needed. The role of such a tool is even more apparent as modern materials come into widespread use. Balancing conflicting requirements for the structure's strength and stiffness while exploiting the benefits of material anisotropy is perhaps an impossible task without assistance from an automated design tool. Finally, the use of a single tool can bring the design task into better focus among design team members, thereby improving their insight into the overall task.

  3. Optimizing spectral CT parameters for material classification tasks

    NASA Astrophysics Data System (ADS)

    Rigie, D. S.; La Rivière, P. J.

    2016-06-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
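
    The Hotelling-observer quantity underlying such figures of merit has the generic form d'^2 = Δs^T K^{-1} Δs, where Δs is the mean class difference and K the data covariance. The sketch below computes this quantity for hypothetical inputs; it is a textbook illustration, not the authors' task-specific implementation.

```python
# Sketch of the core Hotelling-observer computation (generic form, hypothetical data).
import numpy as np

rng = np.random.default_rng(1)
n = 16                                      # measurement dimension (e.g., pixels or energy bins)
delta_s = rng.normal(0, 1, n) * 0.2         # hypothetical mean difference between the two classes
A = rng.normal(0, 1, (n, n))
K = A @ A.T + np.eye(n)                     # hypothetical symmetric positive-definite covariance

d_prime = np.sqrt(delta_s @ np.linalg.solve(K, delta_s))   # d'^2 = ds^T K^-1 ds
print(f"Hotelling detectability d' = {d_prime:.3f}")
```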

  4. Optimizing Spectral CT Parameters for Material Classification Tasks

    PubMed Central

    Rigie, D. S.; La Rivière, P. J.

    2017-01-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430

  5. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  6. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  7. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  8. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
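
    For context on point (2), the sketch below shows a bare-bones real-coded genetic algorithm (truncation selection, blend crossover, Gaussian mutation) minimizing a noisy, discontinuous test function of the sort that defeats gradient-based search. It is a generic illustration under stated assumptions, not the encoding or constraint-handling scheme developed in this work.

```python
# Minimal real-coded genetic-algorithm sketch on a noisy, discontinuous objective.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # discontinuous (step penalty) plus mild noise
    return np.sum(np.floor(np.abs(x) * 4)) + rng.normal(0, 0.05)

def ga(dim=4, pop_size=40, gens=60, mut_sigma=0.2):
    pop = rng.uniform(-2, 2, (pop_size, dim))
    for _ in range(gens):
        fitness = np.array([objective(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random(dim)
            child = alpha * a + (1 - alpha) * b                # blend crossover
            child += rng.normal(0, mut_sigma, dim)             # Gaussian mutation
            kids.append(child)
        pop = np.vstack([parents, kids])
    return pop[np.argmin([objective(ind) for ind in pop])]

print("best design variables:", np.round(ga(), 3))
```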

  9. The effects of display-control I/O, compatibility, and integrality on dual-task performance and subjective workload

    NASA Technical Reports Server (NTRS)

    Tsang, Pamela S.; Hart, Sandra G.; Vidulich, Michael A.

    1987-01-01

    The utility of speech technology was evaluated in terms of three dual task principles: resource competition between the time shared tasks, stimulus central processing response compatibility, and task integrality. Empirical support for these principles was reviewed. Two studies investigating the interactive effects of the three principles were described. Objective performance and subjective workload ratings for both single and dual tasks were examined. It was found that the single task measures were not necessarily good predictors for the dual task measures. It was shown that all three principles played an important role in determining an optimal task configuration. This was reflected in both the performance measures and the subjective measures. Therefore, consideration of all three principles is required to insure proper use of speech technology in a complex environment.

  10. Integrating prediction, provenance, and optimization into high energy workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schram, M.; Bansal, V.; Friese, R. D.

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
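
    One way to picture the scheduling component is a greedy list scheduler that assigns each task to the resource predicted to become free earliest; the sketch below uses hypothetical task runtimes standing in for the performance-model predictions mentioned in the abstract, and is not the framework's actual scheduler.

```python
# Toy sketch (assumed task names and runtimes): greedy list scheduling that
# assigns each task, longest first, to the resource with the earliest finish time.
import heapq

predicted_runtime = {"simulate": 120, "reconstruct": 90, "skim": 30,
                     "analyze": 60, "merge": 15}          # hypothetical tasks (seconds)
resources = ["node-a", "node-b"]

free_at = [(0.0, r) for r in resources]    # heap of (time resource becomes free, name)
heapq.heapify(free_at)

schedule = []
for task, rt in sorted(predicted_runtime.items(), key=lambda kv: -kv[1]):
    t_free, r = heapq.heappop(free_at)     # earliest-available resource
    schedule.append((task, r, t_free, t_free + rt))
    heapq.heappush(free_at, (t_free + rt, r))

for task, r, start, end in schedule:
    print(f"{task:<12} -> {r}  [{start:>5.0f}, {end:>5.0f}] s")
print("makespan:", max(end for *_, end in schedule), "s")
```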

  11. Pointing Device Performance in Steering Tasks.

    PubMed

    Senanayake, Ransalu; Goonetilleke, Ravindra S

    2016-06-01

    Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for person-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application, and should be optimized to match the graphical user interface. © The Author(s) 2016.

  12. Performance-based workload assessment: Allocation strategy and added task sensitivity

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.

    1990-01-01

    The preliminary results of a research program investigating the use of added tasks to evaluate mental workload are reviewed. The focus of the first studies was a reappraisal of the traditional secondary task logic that encouraged the use of low-priority instructions for the added task. It was believed that such low-priority tasks would encourage subjects to split their available resources among the two tasks. The primary task would be assigned all the resources it needed, and any remaining reserve capacity would be assigned to the secondary task. If the model were correct, this approach was expected to combine sensitivity to primary task difficulty with unintrusiveness to primary task performance. The first studies of the current project demonstrated that a high-priority added task, although intrusive, could be more sensitive than the traditional low-priority secondary task. These results suggested that a more appropriate model of the attentional effects associated with added task performance might be based on capacity switching, rather than the traditional optimal allocation model.

  13. Task Analysis - Its Relation to Content Analysis.

    ERIC Educational Resources Information Center

    Gagne, Robert M.

    Task analysis is a procedure having the purpose of identifying different kinds of performances which are outcomes of learning, in order to make possible the specification of optimal instructional conditions for each kind of outcome. Task analysis may be related to content analysis in two different ways: (1) it may be used to identify the probably…

  14. Modeling human decision making behavior in supervisory control

    NASA Technical Reports Server (NTRS)

    Tulga, M. K.; Sheridan, T. B.

    1977-01-01

    An optimal decision control model was developed, based primarily on a dynamic programming algorithm that looks at all the available task possibilities, charts an optimal trajectory, commits to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included that estimates the tasks which might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show a great similarity, but indicate that the human skips certain movements that require a quick change in strategy.
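
    A toy version of the receding-horizon logic described here: enumerate short task sequences, score each by the reward of tasks completed before their deadlines, commit only to the first task of the best sequence, and re-plan. The task set, deadlines, and rewards below are hypothetical.

```python
# Simplified receding-horizon sketch (not the original model).
from itertools import permutations

# hypothetical task set: name -> (duration, deadline, reward)
tasks = {"A": (2, 4, 5), "B": (3, 6, 8), "C": (1, 3, 3), "D": (4, 10, 6)}

def trajectory_value(order, t0=0):
    t, value = t0, 0
    for name in order:
        dur, deadline, reward = tasks[name]
        t += dur
        if t <= deadline:                      # reward only if finished in time
            value += reward
    return value

def next_task(horizon=3):
    best = max(permutations(tasks, min(horizon, len(tasks))), key=trajectory_value)
    return best[0]                             # commit to the first step only, then re-plan

print("task to start now:", next_task())
```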

  15. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that the human perceptual learning can occur within a lapse of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and, thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.

  16. Differences in attentional strategies by novice and experienced operating theatre scrub nurses.

    PubMed

    Koh, Ranieri Y I; Park, Taezoon; Wickens, Christopher D; Ong, Lay Teng; Chia, Soon Noi

    2011-09-01

    This study investigated the effect of nursing experience on attention allocation and task performance during surgery. The prevention of cases of retained foreign bodies after surgery typically depends on scrub nurses, who are responsible for performing multiple tasks that impose heavy demands on the nurses' cognitive resources. However, the relationship between the level of experience and attention allocation strategies has not been extensively studied. Eye movement data were collected from 10 novice and 10 experienced scrub nurses in the operating theater for caesarean section surgeries. Visual scanning data, analyzed by dividing the workstation into four main areas and the surgery into four stages, were compared to the optimum expected value estimated by the SEEV (Salience, Effort, Expectancy, and Value) model. Both experienced and novice nurses showed significant correlations to the optimal percentage dwell time values, and significant differences were found in attention allocation optimality between experienced and novice nurses, with experienced nurses adhering significantly more closely to the optimal allocation during the stages of high workload. Experienced nurses spent less time on the final count and encountered fewer interruptions during the count than novices, indicating better performance in task management, whereas novice nurses switched attention between areas of interest more than experienced nurses. The results provide empirical evidence of a relationship between the application of optimal visual attention management strategies and performance, opening up possibilities for the development of visual attention and interruption training for better performance. (c) 2011 APA, all rights reserved.

  17. Effects of disulfiram on choice behavior in a rodent gambling task: association with catecholamine levels.

    PubMed

    Di Ciano, Patricia; Manvich, Daniel F; Pushparaj, Abhiram; Gappasov, Andrew; Hess, Ellen J; Weinshenker, David; Le Foll, Bernard

    2018-01-01

    Gambling disorder is a growing societal concern, as recognized by its recent classification as an addictive disorder in the DSM-5. Case reports have shown that disulfiram reduces gambling-related behavior in humans. The purpose of the present study was to determine whether disulfiram affects performance on a rat gambling task, a rodent version of the Iowa gambling task in humans, and whether any changes were associated with alterations in dopamine and/or norepinephrine levels. Rats were administered disulfiram prior to testing on the rat gambling task or prior to analysis of dopamine or norepinephrine levels in brain homogenates. Rats in the behavioral task were divided into two subgroups (optimal vs suboptimal) based on their baseline levels of performance in the rat gambling task. Rats in the optimal group chose the advantageous strategy more, and rats in the suboptimal group (a parallel to problem gambling) chose the disadvantageous strategy more. Rats were not divided into optimal or suboptimal groups prior to neurochemical analysis. Disulfiram administered 2 h, but not 30 min, before the task dose-dependently improved choice behavior in the rats with an initial disadvantageous "gambling-like" strategy, while having no effect on the rats employing an advantageous strategy. The behavioral effects of disulfiram were associated with increased striatal dopamine and decreased striatal norepinephrine. These findings suggest that combined actions on dopamine and norepinephrine may be a useful treatment for gambling disorders.

  18. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    NASA Astrophysics Data System (ADS)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
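
    As a back-of-the-envelope illustration of the energy/deadline trade-off being optimized (not the paper's controller), the sketch below computes the slowest constant speed that lets a FIFO queue of tasks meet all deadlines, using the common approximation that dynamic energy grows roughly with the square of speed for a fixed amount of work.

```python
# Illustrative sketch only: slowest feasible constant speed for FIFO tasks with
# hard deadlines, and the corresponding relative energy (∝ work × speed²).
tasks = [(2.0e6, 1.0), (1.0e6, 2.5), (3.0e6, 4.0)]   # (work in cycles, deadline in s), FIFO order

cum_work, speed = 0.0, 0.0
for work, deadline in tasks:
    cum_work += work
    speed = max(speed, cum_work / deadline)   # cycles/s required to meet this deadline

total_work = sum(work for work, _ in tasks)
relative_energy = total_work * speed ** 2     # arbitrary units
print(f"slowest feasible constant speed: {speed:.2e} cycles/s")
print(f"relative energy at that speed:   {relative_energy:.3e}")
```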

  19. FMRQ-A Multiagent Reinforcement Learning Algorithm for Fully Cooperative Tasks.

    PubMed

    Zhang, Zhen; Zhao, Dongbin; Gao, Junwei; Wang, Dongqing; Dai, Yujie

    2017-06-01

    In this paper, we propose a multiagent reinforcement learning algorithm dealing with fully cooperative tasks. The algorithm is called frequency of the maximum reward Q-learning (FMRQ). FMRQ aims to achieve one of the optimal Nash equilibria so as to optimize the performance index in multiagent systems. The frequency of obtaining the highest global immediate reward, rather than the immediate reward itself, is used as the reinforcement signal. With FMRQ, each agent does not need to observe the other agents' actions and only shares its state and reward at each step. We validate FMRQ through case studies of repeated games: four cases of two-player two-action and one case of three-player two-action. It is demonstrated that FMRQ can converge to one of the optimal Nash equilibria in these cases. Moreover, comparison experiments on tasks with multiple states and finite steps are conducted. One is box-pushing and the other is a distributed sensor network problem. Experimental results show that the proposed algorithm outperforms the others.
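
    The following toy is loosely inspired by the description above and is not the published FMRQ algorithm: in a repeated fully cooperative 2x2 game, each agent scores its own actions by how often they coincided with the highest global reward observed so far and acts greedily on that frequency with epsilon-greedy exploration.

```python
# Toy frequency-of-best-reward learner for a repeated cooperative game (illustrative only).
import random

random.seed(0)
payoff = [[11, -30], [-30, 7]]     # fully cooperative 2x2 game: both agents get payoff[a0][a1]
epsilon = 0.2
# counts[agent][action] = [times tried, times it coincided with the best reward seen]
counts = [[[0, 0], [0, 0]] for _ in range(2)]
best_reward = float("-inf")

def freq(agent, action):
    tries, hits = counts[agent][action]
    return hits / tries if tries else 0.0

def choose(agent):
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: freq(agent, a))

for _ in range(2000):
    a0, a1 = choose(0), choose(1)
    r = payoff[a0][a1]
    best_reward = max(best_reward, r)
    for agent, action in ((0, a0), (1, a1)):
        counts[agent][action][0] += 1
        counts[agent][action][1] += int(r == best_reward)

greedy = tuple(max(range(2), key=lambda a: freq(agent, a)) for agent in range(2))
print("learned joint action:", greedy, "payoff:", payoff[greedy[0]][greedy[1]])
```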

  20. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
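
    The linear Bellman machinery referenced here can be shown compactly (standard first-exit form after Todorov, 2009, not the authors' robot code): the desirability z = exp(-v) satisfies z = exp(-q) · (P z) under the passive dynamics P, solved below by fixed-point iteration on a small random-walk chain with an absorbing goal state.

```python
# Compact first-exit LMDP sketch: desirability iteration on a small chain.
import numpy as np

n = 6
q = np.full(n, 0.5)            # state costs
q[-1] = 0.0                    # goal state: zero cost, absorbing
P = np.zeros((n, n))           # passive (uncontrolled) random-walk dynamics
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[-1, -1] = 1.0

z = np.ones(n)
for _ in range(500):
    z = np.exp(-q) * (P @ z)   # linear Bellman update on the desirability
    z[-1] = 1.0                # boundary condition at the goal (v = 0)

v = -np.log(z)
print("optimal value function:", np.round(v, 3))
# The optimal controlled transition law is p*(s'|s) proportional to P[s, s'] * z[s'].
```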

  1. Task-Driven Orbit Design and Implementation on a Robotic C-Arm System for Cone-Beam CT.

    PubMed

    Ouadah, S; Jacobson, M; Stayman, J W; Ehtiati, T; Weiss, C; Siewerdsen, J H

    2017-03-01

    This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
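
    The detectability index being maximized has, in its generic non-prewhitening form, the structure d'^2 = [∫ W² MTF² df]² / ∫ W² MTF² NPS df, where W is the Fourier task function; the sketch below evaluates it for hypothetical one-dimensional MTF, NPS, and task functions. The paper's own resolution and noise models are orbit-dependent and far more detailed.

```python
# Generic non-prewhitening detectability sketch with hypothetical 1D inputs.
import numpy as np

f = np.linspace(0.01, 2.0, 400)                 # spatial frequency (1/mm), 1D for brevity
df = f[1] - f[0]
mtf = np.exp(-(f / 0.8) ** 2)                   # hypothetical system MTF
nps = 1e-4 * f / (1 + f**2)                     # hypothetical noise-power spectrum
w_task = np.exp(-(f / 0.3) ** 2)                # task function: low/mid-frequency signal

num = (np.sum(w_task**2 * mtf**2) * df) ** 2
den = np.sum(w_task**2 * mtf**2 * nps) * df
d_prime = np.sqrt(num / den)
print(f"NPW detectability index d' = {d_prime:.1f}")
```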

  2. Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-03-01

    Purpose: This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. Methods: We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. Results: For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. Conclusions: This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).

  3. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked against whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  4. Mild traumatic brain injury: graph-model characterization of brain networks for episodic memory.

    PubMed

    Tsirka, Vasso; Simos, Panagiotis G; Vakis, Antonios; Kanatsouli, Kassiani; Vourkas, Michael; Erimaki, Sofia; Pachou, Ellie; Stam, Cornelis Jan; Micheloyannis, Sifis

    2011-02-01

    Episodic memory is among the cognitive functions that can be affected in the acute phase following mild traumatic brain injury (MTBI). The present study used EEG recordings to evaluate global synchronization and network organization of rhythmic activity during the encoding and recognition phases of an episodic memory task varying in stimulus type (kaleidoscope images, pictures, words, and pseudowords). Synchronization of oscillatory activity was assessed using a linear and a nonlinear connectivity estimator, and network analyses were performed using algorithms derived from graph theory. Twenty-five MTBI patients (tested within days post-injury) and healthy volunteers were closely matched on demographic variables, verbal ability, psychological status variables, as well as on overall task performance. Patients demonstrated sub-optimal network organization, as reflected by changes in graph parameters in the theta and alpha bands during both encoding and recognition. There were no group differences in spectral energy during task performance or on network parameters during a control condition (rest). Evidence of less optimally organized functional networks during memory tasks was more prominent for pictorial than for verbal stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  5. Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task

    PubMed Central

    López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa

    2013-01-01

    In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalography (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants who attained higher task accuracy. These findings support a dynamical view of cognitive processes where patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436

  6. A quantitative measure for degree of automation and its relation to system performance and mental load.

    PubMed

    Wei, Z G; Macwan, A P; Wieringa, P A

    1998-06-01

    In this paper we quantitatively model degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments using operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load for each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.

  7. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance.

    PubMed

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L

    2017-02-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.
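
    For orientation, the forward version of the optimization problem studied here (the paper solves the harder inverse problem of recovering the cost from data) can be written down directly: the finger forces minimizing an assumed quadratic cost subject to prescribed total force and moment follow from the Lagrange conditions, as in the sketch below with hypothetical weights and lever arms.

```python
# Illustrative forward computation only: quadratic-cost force sharing under
# total-force and total-moment constraints, solved via Lagrange multipliers.
import numpy as np

w = np.array([1.0, 0.8, 1.2, 1.5])       # hypothetical per-finger cost weights
r = np.array([-4.5, -1.5, 1.5, 4.5])     # finger lever arms about the pivot (cm)
F_total, M_total = 20.0, 10.0            # prescribed total force (N) and moment (N*cm)

# Stationarity of the Lagrangian: 2*w_i*f_i = lam1 + lam2*r_i,
# so f_i = (lam1 + lam2*r_i) / (2*w_i). Substituting into the two constraints
# yields a 2x2 linear system for (lam1, lam2).
a = np.sum(1.0 / (2 * w))
b = np.sum(r / (2 * w))
d = np.sum(r**2 / (2 * w))
lam1, lam2 = np.linalg.solve(np.array([[a, b], [b, d]]), np.array([F_total, M_total]))
f = (lam1 + lam2 * r) / (2 * w)

print("finger forces:", np.round(f, 2))
print("total force:", round(f.sum(), 2), "  total moment:", round((f * r).sum(), 2))
```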

  8. Optimality and stability of intentional and unintentional actions: I. Origins of drifts in performance

    PubMed Central

    Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We address the nature of unintentional changes in performance in two papers. This first paper tested a hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force/moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude while moment drop could reach 30% of its initial magnitude. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using the analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task. PMID:27785549

  9. Strategic workload management and decision biases in aviation

    NASA Technical Reports Server (NTRS)

    Raby, Mireille; Wickens, Christopher D.

    1994-01-01

    Thirty pilots flew three simulated landing approaches under conditions of low, medium, and high workload. Workload conditions were created by varying time pressure and external communications requirements. Our interest was in how the pilots strategically managed or adapted to the increasing workload. We independently assessed the pilot's ranking of the priority of different discrete tasks during the approach and landing. Pilots were found to sacrifice some aspects of primary flight control as workload increased. For discrete tasks, increasing workload increased the amount of time in performing the high priority tasks, decreased the time in performing those of lowest priority, and did not affect duration of performance episodes or optimality of scheduling of tasks of any priority level. Individual differences analysis revealed that high-performing subjects scheduled discrete tasks earlier in the flight and shifted more often between different activities.

  10. The Curvilinear Relationship between State Neuroticism and Momentary Task Performance

    PubMed Central

    Debusscher, Jonas; Hofmans, Joeri; De Fruyt, Filip

    2014-01-01

    A daily diary and two experience sampling studies were carried out to investigate curvilinearity of the within-person relationship between state neuroticism and task performance, as well as the moderating effects of within-person variation in momentary job demands (i.e., work pressure and task complexity). In one study, results showed that under high work pressure, the state neuroticism–task performance relationship was best described by an exponentially decreasing curve, whereas an inverted U-shaped curve was found for tasks low in work pressure; in another study, a similar trend was visible for task complexity. In the final study, the state neuroticism–momentary task performance relationship was a linear one, and this relationship was moderated by momentary task complexity. Together, results from all three studies showed that it is important to take into account the moderating effects of momentary job demands because within-person variation in job demands affects the way in which state neuroticism relates to momentary levels of task performance. Specifically, we found that experiencing low levels of state neuroticism may be most beneficial in highly demanding tasks, whereas more moderate levels of state neuroticism are optimal under low momentary job demands. PMID:25238547

  11. Mechanics and pathomechanics in the overhead athlete.

    PubMed

    Kibler, W Ben; Wilkes, Trevor; Sciascia, Aaron

    2013-10-01

    Optimal performance of the overhead throwing task requires precise mechanics that involve coordinated kinetic and kinematic chains to develop, transfer, and regulate the forces the body needs to withstand the inherent demands of the task and to allow optimal performance. These chains have been evaluated and the basic components, called nodes, have been identified. Impaired performance and/or injury, the DTS, is associated with alterations in the mechanics that are called pathomechanics. They can occur at multiple locations throughout the kinetic chain. They must be evaluated and treated as part of the overall problem. Observational analysis of the mechanics and pathomechanics using the node analysis method can be useful in highlighting areas of alteration that can be evaluated for anatomic injury or altered physiology. The comprehensive kinetic chain examination can evaluate sites of kinetic chain breakage, and a detailed shoulder examination can assess joint internal derangement of altered physiology that may contribute to the pathomechanics. Treatment of the DTS should be comprehensive, directed toward restoring physiology and mechanics and optimizing anatomy. This maximizes the body’s ability to develop normal mechanics to accomplish the overhead throwing task. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  13. Computational models of the Posner simple and choice reaction time tasks

    PubMed Central

    Feher da Silva, Carolina; Baldo, Marcus V. C.

    2015-01-01

    The landmark experiments by Posner in the late 1970s have shown that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, the main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
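
    The record above describes a Bayesian detection system and integrate-and-fire networks; those models are not reproduced here. As a loose, hypothetical illustration of how a cue-induced bias can shorten reaction times in a noisy accumulator, a minimal Python sketch follows (the accumulator formulation and all parameter values are assumptions, not taken from the paper).

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_trial(cue_bias, drift=0.15, noise=1.0, threshold=30.0, dt=1.0, max_steps=2000):
          """Noisy evidence accumulator: a valid cue starts accumulation closer to
          the threshold (positive bias), an invalid cue farther away (negative bias)."""
          x = cue_bias
          for step in range(1, max_steps + 1):
              x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
              if x >= threshold:
                  return step, True          # RT in steps, correct detection
              if x <= -threshold:
                  return step, False         # accumulator hit the wrong bound
          return max_steps, False            # no decision within the deadline

      def mean_rt(cue_bias, n_trials=2000):
          rts = [simulate_trial(cue_bias)[0] for _ in range(n_trials)]
          return np.mean(rts)

      for label, bias in [("valid cue", +10.0), ("neutral cue", 0.0), ("invalid cue", -10.0)]:
          print(f"{label:12s} mean RT ~ {mean_rt(bias):6.1f} steps")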

  14. A novel task-oriented optimal design for P300-based brain-computer interfaces.

    PubMed

    Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen

    2014-10-01

    Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs.

  15. A novel task-oriented optimal design for P300-based brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen

    2014-10-01

    Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs.

  16. Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters.

    PubMed

    Khamassi, Mehdi; Enel, Pierre; Dominey, Peter Ford; Procyk, Emmanuel

    2013-01-01

    Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that would affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activities encoding error likelihood, uncertainty, reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one of the RL parameters, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters such as the exploration rate and default action values in case of task shifts. Here, we analyze the sensitivity to RL parameters of behavioral performance in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks that need to be found for optimal performance and that are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using some heuristics can help produce good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters. It also shows how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms that were challenged in the last robotic scenario provide testable predictions on the way monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments. Copyright © 2013 Elsevier B.V. All rights reserved.
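
    The MPFC-LPFC model summarized above is not reproduced here. The following is a minimal, hypothetical Python sketch of one idea the abstract describes: regulating the exploration rate (the softmax inverse temperature) online from a running reward estimate, so that a drop in reward after a task shift transiently increases exploration. The bandit environment, update rules, and all values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)

      def softmax(q, beta):
          z = beta * (q - q.max())
          p = np.exp(z)
          return p / p.sum()

      # Two-armed bandit whose best arm switches halfway through (a "task shift").
      def reward_prob(arm, t):
          best = 0 if t < 500 else 1
          return 0.8 if arm == best else 0.2

      q = np.zeros(2)          # action values
      alpha = 0.1              # learning rate (kept fixed here)
      avg_r = 0.5              # running estimate of average reward
      beta_min, beta_max = 1.0, 10.0

      for t in range(1000):
          # Heuristic regulation: low recent reward -> low beta -> more exploration.
          beta = beta_min + (beta_max - beta_min) * avg_r
          a = rng.choice(2, p=softmax(q, beta))
          r = float(rng.random() < reward_prob(a, t))
          q[a] += alpha * (r - q[a])                 # standard RL value update
          avg_r += 0.05 * (r - avg_r)                # slower reward-rate tracker
          if t in (499, 999):
              print(f"t={t+1:4d}  Q={np.round(q, 2)}  beta={beta:.2f}")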

  17. Energy Supply- Production of Fuel from Agricultural and Animal Waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabriel Miller

    2009-03-25

    The Society for Energy and Environmental Research (SEER) was funded in March 2004 by the Department of Energy, under grant DE-FG-36-04GO14268, to produce a study, and oversee construction and implementation, for the thermo-chemical production of fuel from agricultural and animal waste. The grant focuses on the Changing World Technologies (CWT) of West Hempstead, NY, thermal conversion process (TCP), which converts animal residues and industrial food processing byproducts into fuels, and as an additional product, fertilizers. A commercial plant was designed and built by CWT, partially using grant funds, in Carthage, Missouri, to process animal residues from a nearby turkey processing plant. The DOE-sponsored program consisted of four tasks. These were: Task 1 Optimization of the CWT Plant in Carthage - This task focused on advancing and optimizing the process plant operated by CWT that converts organic waste to fuel and energy. Task 2 Characterize and Validate Fuels Produced by CWT - This task focused on testing of bio-derived hydrocarbon fuels from the Carthage plant in power generating equipment to determine the regulatory compliance of emissions and overall performance of the fuel. Task 3 Characterize Mixed Waste Streams - This task focused on studies performed at Princeton University to better characterize mixed waste incoming streams from animal and vegetable residues. Task 4 Fundamental Research in Waste Processing Technologies - This task focused on studies performed at the Massachusetts Institute of Technology (MIT) on the chemical reformation reaction of agricultural biomass compounds in a hydrothermal medium. Many of the challenges to optimize, improve and perfect the technology, equipment and processes in order to provide an economically viable means of creating sustainable energy were identified in the DOE Stage Gate Review, whose summary report was issued on July 30, 2004. This summary report appears herein as Appendix 1, and the findings of the report formed the basis for much of the subsequent work under the grant. An explanation of the process is presented as well as the completed work on the four tasks.

  18. Reasoning and Memory: People Make Varied Use of the Information Available in Working Memory

    ERIC Educational Resources Information Center

    Hardman, Kyle O.; Cowan, Nelson

    2016-01-01

    Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had…

  19. Assessing performance in complex team environments.

    PubMed

    Whitmore, Jeffrey N

    2005-07-01

    This paper provides a brief introduction to team performance assessment. It highlights some critical aspects leading to the successful measurement of team performance in realistic console operations; discusses the idea of process and outcome measures; presents two types of team data collection systems; and provides an example of team performance assessment. Team performance assessment is a complicated endeavor relative to assessing individual performance. Assessing team performance necessitates a clear understanding of each operator's task, both at the individual and team level, and requires planning for efficient data capture and analysis. Though team performance assessment requires considerable effort, the results can be very worthwhile. Most tasks performed in Command and Control environments are team tasks, and understanding this type of performance is becoming increasingly important to the evaluation of mission success and for overall system optimization.

  20. Performance Management and Optimization of Semiconductor Design Projects

    NASA Astrophysics Data System (ADS)

    Hinrichs, Neele; Olbrich, Markus; Barke, Erich

    2010-06-01

    The semiconductor industry is characterized by fast technological changes and small time-to-market windows. Improving productivity is the key factor for keeping pace with competitors and thus successfully persisting in the market. In this paper a Performance Management System for analyzing, optimizing and evaluating chip design projects is presented. A task graph representation is used to optimize the design process regarding time, cost and workload of resources. Key Performance Indicators are defined in the main areas of cost, profit, resources, process and technical output to appraise the project.

  1. Measurement of functional task difficulty during motor learning: What level of difficulty corresponds to the optimal challenge point?

    PubMed

    Akizuki, Kazunori; Ohashi, Yukari

    2015-10-01

    The relationship between task difficulty and learning benefit was examined, as was the measurability of task difficulty. Participants were required to learn a postural control task on an unstable surface at one of four different task difficulty levels. Results from the retention test showed an inverted-U relationship between task difficulty during acquisition and motor learning. The second-highest level of task difficulty was the most effective for motor learning, while learning was delayed at the most and least difficult levels. Additionally, the results indicate that salivary α-amylase and the performance dimension of the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) are useful indices of task difficulty. Our findings suggested that instructors may be able to adjust task difficulty based on salivary α-amylase and the performance dimension of the NASA-TLX to enhance learning. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Optimal digital filtering for tremor suppression.

    PubMed

    Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R

    2000-05-01

    Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor in a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
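
    The paper's tremor-specific performance criterion and its pulled-optimization method are not reproduced here. As a minimal sketch of the more generic ingredient the abstract mentions, the following Python code designs a least-squares FIR equalizer from input/target data and handles an ill-conditioned autocorrelation matrix with ridge (Tikhonov) regularization as a stand-in remedy; the signals and parameter values are made up for illustration.

      import numpy as np

      def design_fir_equalizer(x, d, n_taps=16, ridge=1e-3):
          """Least-squares FIR filter mapping tremorous input x toward target d.
          Ridge regularization guards against the ill-conditioned autocorrelation
          matrices mentioned in the abstract (a stand-in for the paper's own remedy)."""
          # Data matrix: row t holds x[t], x[t-1], ..., x[t-n_taps+1]
          X = np.column_stack([np.concatenate([np.zeros(k), x[:len(x) - k]]) for k in range(n_taps)])
          R = X.T @ X + ridge * np.eye(n_taps)   # regularized autocorrelation estimate
          p = X.T @ d
          w = np.linalg.solve(R, p)
          return w, X @ w

      # Toy demo: intended smooth movement plus an 8 Hz "tremor" component.
      rng = np.random.default_rng(2)
      t = np.arange(0, 5, 0.01)
      intended = np.sin(2 * np.pi * 0.3 * t)
      observed = intended + 0.4 * np.sin(2 * np.pi * 8 * t) + 0.05 * rng.standard_normal(t.size)
      w, filtered = design_fir_equalizer(observed, intended)
      print("RMS error before:", np.sqrt(np.mean((observed - intended) ** 2)).round(3))
      print("RMS error after: ", np.sqrt(np.mean((filtered - intended) ** 2)).round(3))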

  3. Operational testing of a figure of merit for overall task performance

    NASA Technical Reports Server (NTRS)

    Lemay, Moira

    1990-01-01

    An overall indicator, or figure of merit (FOM), for the quality of pilot performance is needed to define optimal workload levels, predict system failure, measure the impact of new automation in the cockpit, and define the relative contributions of subtasks to overall task performance. A normative FOM was developed based on the calculation of a standard score for each component of a complex task. It reflected some effects, detailed in an earlier study, of the introduction of new data link technology into the cockpit. Since the technique showed promise, further testing was done. A new set of data was obtained using the recently developed Multi-Attribute Task Battery. This is a complex battery consisting of four tasks which can be varied in task demand, and on which performance measures can be obtained. This battery was presented to 12 subjects in a 20-minute trial at each of three levels of workload or task demand, and performance measures were collected on all four tasks. The NASA-TLX workload rating scale was presented at minutes 6, 12, and 18 of each trial. A figure of merit was then obtained for each run of the battery by calculating a mean, SD, and standard score for each task. Each task contributed its own proportion to the overall FOM, and relative contributions changed with increasing workload. Thus, the FOM shows the effect of task changes, not only on the individual task that is changed, but also on the performance of other tasks and of the whole task. The cost to other tasks of maintaining constant performance on an individual task can be quantified.
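
    The abstract describes computing a mean, SD, and standard score for each component task and combining them into an overall FOM. A minimal Python sketch of that arithmetic follows; the task names, baseline values, equal weighting, and sign conventions are assumptions for illustration, not values from the report.

      def figure_of_merit(task_scores, baselines, higher_is_better, weights=None):
          """Combine per-task performance into one normative figure of merit (FOM).

          task_scores      dict of task -> observed score for one run
          baselines        dict of task -> (mean, sd) from reference runs
          higher_is_better dict of task -> True if larger raw scores mean better performance
          weights          optional dict of task -> relative contribution (equal if omitted)
          """
          tasks = list(task_scores)
          if weights is None:
              weights = {t: 1.0 / len(tasks) for t in tasks}
          z = {}
          for t in tasks:
              mean, sd = baselines[t]
              z_t = (task_scores[t] - mean) / sd          # standard score for this task
              z[t] = z_t if higher_is_better[t] else -z_t # flip so + always means better
          fom = sum(weights[t] * z[t] for t in tasks)
          return fom, z

      # Hypothetical Multi-Attribute Task Battery-style components.
      baselines = {"tracking_rms": (40.0, 8.0), "comm_rt": (3.5, 0.9),
                   "fuel_dev": (120.0, 30.0), "monitor_hits": (0.85, 0.07)}
      better_if_high = {"tracking_rms": False, "comm_rt": False,
                        "fuel_dev": False, "monitor_hits": True}
      run = {"tracking_rms": 52.0, "comm_rt": 4.1, "fuel_dev": 100.0, "monitor_hits": 0.80}
      fom, z = figure_of_merit(run, baselines, better_if_high)
      print("per-task z:", {k: round(v, 2) for k, v in z.items()}, " FOM:", round(fom, 2))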

  4. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706

  5. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.
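
    The graph-based semi-supervised metric learning itself is not reproduced here. The sketch below illustrates only the first ingredient the abstract names: a two-component distance vector combining a tensor geometry distance with an orientation distance between principal diffusion directions. The specific formulas (log-Euclidean norm, sign-invariant angle) are plausible assumptions rather than the paper's exact definitions.

      import numpy as np

      def log_spd(m):
          """Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition."""
          vals, vecs = np.linalg.eigh(m)
          return vecs @ np.diag(np.log(vals)) @ vecs.T

      def distance_vector(t1, t2):
          """Two-component distance between diffusion tensors: log-Euclidean geometry
          distance plus angular distance between principal diffusion directions."""
          geometry = np.linalg.norm(log_spd(t1) - log_spd(t2), "fro")
          v1 = np.linalg.eigh(t1)[1][:, -1]          # principal eigenvector
          v2 = np.linalg.eigh(t2)[1][:, -1]
          orientation = np.arccos(np.clip(abs(v1 @ v2), 0.0, 1.0))   # sign-invariant angle
          return np.array([geometry, orientation])

      # Two toy tensors: similar shape, principal directions rotated by ~45 degrees.
      a = np.diag([1.5, 0.4, 0.3])
      c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
      rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      b = rot @ a @ rot.T
      print("distance vector (geometry, orientation):", np.round(distance_vector(a, b), 3))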

  6. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  7. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  8. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  9. Decision-Related Activity in Macaque V2 for Fine Disparity Discrimination Is Not Compatible with Optimal Linear Readout

    PubMed Central

    Clery, Stephane; Cumming, Bruce G.

    2017-01-01

    Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751

  10. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
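
    Two of the heuristics compared in this record, Minimum Completion Time (MCT) and Min-min, can be sketched in a few lines. The Python illustration below assumes a generic expected-time-to-compute matrix and initially idle machines; it is a textbook-style rendering, not the authors' implementation or experimental setup.

      def mct_schedule(etc):
          """Minimum Completion Time: assign each task (in arrival order) to the
          machine that finishes it earliest given current machine loads.
          etc[i][j] = estimated execution time of task i on machine j."""
          n_machines = len(etc[0])
          ready = [0.0] * n_machines
          assignment = []
          for row in etc:
              j = min(range(n_machines), key=lambda m: ready[m] + row[m])
              assignment.append(j)
              ready[j] += row[j]
          return assignment, max(ready)          # mapping and makespan

      def min_min_schedule(etc):
          """Min-min: repeatedly pick the unscheduled task with the smallest
          achievable completion time and place it on its best machine."""
          n_tasks, n_machines = len(etc), len(etc[0])
          ready = [0.0] * n_machines
          unscheduled = set(range(n_tasks))
          assignment = [None] * n_tasks
          while unscheduled:
              _, t, m = min((ready[m] + etc[t][m], t, m)
                            for t in unscheduled for m in range(n_machines))
              assignment[t] = m
              ready[m] += etc[t][m]
              unscheduled.remove(t)
          return assignment, max(ready)

      etc = [[14, 20], [5, 9], [18, 12], [7, 7], [3, 11]]   # toy expected-time-to-compute matrix
      print("MCT    :", mct_schedule(etc))
      print("Min-min:", min_min_schedule(etc))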

  11. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  12. Selecting Tasks for Evaluating Human Performance as a Function of Gravity

    NASA Technical Reports Server (NTRS)

    Norcross, Jason R.; Gernhardt, Michael L.

    2011-01-01

    A challenge in understanding human performance as a function of gravity is determining which tasks to research. Initial studies began with treadmill walking, which was easy to quantify and control. However, with the development of pressurized rovers, it is less important to optimize human performance for ambulation as pressurized rovers will likely perform gross translation for them. Future crews are likely to spend much of their extravehicular activity (EVA) performing geology, construction, and maintenance-type tasks. With these types of tasks, people have different performance strategies, and it is often difficult to quantify the task and measure steady-state metabolic rates or perform biomechanical analysis. For many of these types of tasks, subjective feedback may be the only data that can be collected. However, subjective data may not fully support a rigorous scientific comparison of human performance across different gravity levels and suit factors. NASA would benefit from having a wide variety of quantifiable tasks that allow human performance comparison across different conditions. In order to determine which tasks will effectively support scientific studies, many different tasks and data analysis techniques will need to be employed. Many of these tasks and techniques will not be effective, but some will produce quantifiable results that are sensitive enough to show performance differences. One of the primary concerns related to EVA performance is metabolic rate. The higher the metabolic rate, the faster the astronaut will exhaust consumables. The focus of this poster will be on how different tasks affect metabolic rate across different gravity levels.

  13. Optimal Mixing On The Sphere

    DTIC Science & Technology

    2010-11-10

    Performing organization: Woods Hole Oceanographic Institution, Woods Hole, MA 02543. … consider an alternate means of finding the minima of 〈|θ|²〉. We perform a two-part optimization process based on Matlab's built-in nonlinear […]

  14. Optimal control of 2-wheeled mobile robot at energy performance index

    NASA Astrophysics Data System (ADS)

    Kaliński, Krzysztof J.; Mazur, Michał

    2016-03-01

    The paper presents the application of the optimal control method with an energy performance index to motion control of a 2-wheeled mobile robot. With the proposed control method, the 2-wheeled mobile robot can effectively realise the desired trajectory. The problem of motion control of mobile robots is often neglected, which limits the performance of high-level control tasks.

  15. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2005-07-01

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  16. A Framework to Design and Optimize Chemical Flooding Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2006-08-31

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  17. A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori

    2004-11-01

    The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.

  18. Quantitative analysis of task selection for brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Llera, Alberto; Gómez, Vicenç; Kappen, Hilbert J.

    2014-10-01

    Objective. To assess quantitatively the impact of task selection in the performance of brain-computer interfaces (BCI). Approach. We consider the task-pairs derived from multi-class BCI imagery movement tasks in three different datasets. We analyze for the first time the benefits of task selection on a large-scale basis (109 users) and evaluate the possibility of transferring task-pair information across days for a given subject. Main results. Selecting the subject-dependent optimal task-pair among three different imagery movement tasks results in approximately 20% potential increase in the number of users that can be expected to control a binary BCI. The improvement is observed with respect to the best task-pair fixed across subjects. The best task-pair selected for each subject individually during a first day of recordings is generally a good task-pair in subsequent days. In general, task learning from the user side has a positive influence in the generalization of the optimal task-pair, but special attention should be given to inexperienced subjects. Significance. These results add significant evidence to existing literature that advocates task selection as a necessary step towards usable BCIs. This contribution motivates further research focused on deriving adaptive methods for task selection on larger sets of mental tasks in practical online scenarios.

  19. The Effect of Two-dimensional and Stereoscopic Presentation on Middle School Students' Performance of Spatial Cognition Tasks

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, Hee-Sun

    2010-02-01

    We investigated whether and how student performance on three types of spatial cognition tasks differs when worked with two-dimensional or stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects increased in the tasks. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to be familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.

  20. Differential working memory correlates for implicit sequence performance in young and older adults.

    PubMed

    Bo, Jin; Jennett, S; Seidler, R D

    2012-09-01

    Our recent work has revealed that visuospatial working memory (VSWM) relates to the rate of explicit motor sequence learning (Bo and Seidler in J Neurophysiol 101:3116-3125, 2009) and implicit sequence performance (Bo et al. in Exp Brain Res 214:73-81, 2011a) in young adults (YA). Although aging has a detrimental impact on many cognitive functions, including working memory, older adults (OA) still rely on their declining working memory resources in an effort to optimize explicit motor sequence learning. Here, we evaluated whether age-related differences in VSWM and/or verbal working memory (VWM) performance relates to implicit performance change in the serial reaction time (SRT) sequence task in OA. Participants performed two computerized working memory tasks adapted from change detection working memory assessments (Luck and Vogel in Nature 390:279-281, 1997), an implicit SRT task and several neuropsychological tests. We found that, although OA exhibited an overall reduction in both VSWM and VWM, both OA and YA showed similar performance in the implicit SRT task. Interestingly, while VSWM and VWM were significantly correlated with each other in YA, there was no correlation between these two working memory scores in OA. In YA, the rate of SRT performance change (exponential fit to the performance curve) was significantly correlated with both VSWM and VWM, while in contrast, OA's performance was only correlated with VWM, and not VSWM. These results demonstrate differential reliance on VSWM and VWM for SRT performance between YA and OA. OA may utilize VWM to maintain optimized performance of second-order conditional sequences.

  1. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    NASA Astrophysics Data System (ADS)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol permeates benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
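
    The learn-and-adapt algorithm itself is not reproduced here. The sketch below illustrates the underlying idea of learning a Lagrange multiplier through online dual updates for a toy stochastic allocation problem (power allocation under an average-power budget); the problem, step size, and closed-form primal solution are illustrative assumptions, not the paper's formulation.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy stochastic allocation: each slot t we choose a transmit power x in [0, x_max]
      # to maximize log(1 + h_t * x) while keeping long-run average power <= budget.
      x_max, budget, step = 4.0, 1.0, 0.05
      lam = 0.0                                  # Lagrange multiplier ("queue-like" state)
      avg_power = 0.0

      for t in range(1, 5001):
          h = rng.exponential(1.0)               # random channel gain for this slot
          # Primal step: maximize log(1 + h*x) - lam*x over x in [0, x_max] (closed form).
          x = np.clip(1.0 / lam - 1.0 / h, 0.0, x_max) if lam > 0 else x_max
          # Dual step: the multiplier rises whenever the power budget is exceeded.
          lam = max(0.0, lam + step * (x - budget))
          avg_power += (x - avg_power) / t

      print(f"lambda ~ {lam:.3f}, long-run average power ~ {avg_power:.3f} (budget {budget})")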

  2. Learning optimal eye movements to unusual faces

    PubMed Central

    Peterson, Matthew F.; Eckstein, Miguel P.

    2014-01-01

    Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712

  3. Task Difficulty and Prior Videogame Experience: Their Role in Performance and Motivation in Instructional Videogames

    DTIC Science & Technology

    2007-06-01

    Video game-based environments are an increasingly popular medium for training Soldiers. This research investigated how various strategies for modifying task difficulty over the progression of an instructional video game impact learner performance and motivation. Further, the influence of prior video game experience on these learning outcomes was examined, as well as the role prior experience played in determining the optimal approach for […]

  4. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
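
    The mpiP-based instrumentation and the mpiAproxy tool are not reproduced here. As a minimal sketch of one of the reordering ideas named above, the following Python code orders MPI ranks by the Fiedler vector of the communication graph (a spectral ordering), so that heavily-communicating ranks receive nearby positions before being mapped onto the allocated node list. The communication matrix is a toy example, not data from the paper.

      import numpy as np

      def spectral_rank_order(comm):
          """Order MPI ranks by the Fiedler vector of the communication graph so that
          heavily-communicating ranks receive nearby positions in the node list.
          comm[i, j] = bytes (or messages) exchanged between ranks i and j."""
          w = (comm + comm.T) / 2.0                  # symmetrize
          lap = np.diag(w.sum(axis=1)) - w           # graph Laplacian
          vals, vecs = np.linalg.eigh(lap)
          fiedler = vecs[:, 1]                       # eigenvector of 2nd-smallest eigenvalue
          return np.argsort(fiedler)

      # Toy pattern: two tightly-coupled groups {0,2,4} and {1,3,5} plus one weak link.
      comm = np.zeros((6, 6))
      for a, b in [(0, 2), (2, 4), (0, 4), (1, 3), (3, 5), (1, 5)]:
          comm[a, b] = comm[b, a] = 100.0
      comm[4, 5] = comm[5, 4] = 1.0                  # weak link keeps the graph connected
      order = spectral_rank_order(comm)
      print("rank placement order:", order)          # each group comes out contiguous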

  5. Optical Quality, Threshold Target Identification, and Military Target Task Performance After Advanced Keratorefractive Surgery

    DTIC Science & Technology

    2012-05-01

    … undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO […]. In a prospective, randomized treatment trial we will enroll 224 nearsighted soldiers to WFG photorefractive keratectomy (PRK), WFG LASIK, WFO PRK […]. Keywords: Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes.

  6. Optical Quality and Threshold Target Identification and Military Target Task Performance after Advanced Keratorefractive Surgery

    DTIC Science & Technology

    2013-05-01

    … and Sensors Directorate. Study participants and physicians select treatment: PRK or LASIK; the WFG vs. WFO treatment modality is randomized. The […] to undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront optimized (WFO) PRK or WFO […]. Keywords: Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes.

  7. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of the work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization task. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
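
    A generic sketch of the meta-optimization loop described above follows: an outer evolutionary search over a metaheuristic's control parameters, scored by running the inner metaheuristic on a few training instances. The inner solver here is a toy simulated annealing on a toy objective; every function, operator, and value is an illustrative assumption rather than the authors' method.

      import random

      def inner_metaheuristic(params, seed):
          """Toy inner solver: simulated annealing on f(x) = |x| over integers.
          Its solution quality depends on the control parameters being meta-optimized."""
          temp, cooling = params
          rng = random.Random(seed)
          x = rng.randint(-100, 100)
          best = abs(x)
          for _ in range(200):
              cand = x + rng.randint(-5, 5)
              if abs(cand) < abs(x) or rng.random() < pow(2.718, -(abs(cand) - abs(x)) / max(temp, 1e-9)):
                  x = cand
              best = min(best, abs(x))
              temp *= cooling
          return best                      # lower is better

      def fitness(params):
          # Average inner-solver quality over a few training seeds (the meta-objective).
          return sum(inner_metaheuristic(params, s) for s in range(5)) / 5.0

      rng = random.Random(0)
      population = [(rng.uniform(1, 50), rng.uniform(0.80, 0.999)) for _ in range(10)]
      for gen in range(15):
          population.sort(key=fitness)
          parents = population[:4]
          children = []
          for _ in range(6):
              a, b = rng.sample(parents, 2)
              child = ((a[0] + b[0]) / 2 + rng.gauss(0, 2.0),          # crossover + mutation
                       min(0.999, max(0.5, (a[1] + b[1]) / 2 + rng.gauss(0, 0.01))))
              children.append((max(0.1, child[0]), child[1]))
          population = parents + children
      best = min(population, key=fitness)
      print("tuned (initial temperature, cooling rate):", tuple(round(v, 3) for v in best))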

  8. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions on local solution regions, hence, adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs) which reduced makespan and degree of imbalance among VMs. CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workload. Results of simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
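
    The full SASOS algorithm (the symbiosis phases combined with simulated annealing) is not reproduced here. The sketch below illustrates only the SA ingredient it adds: occasionally accepting a worse task-to-VM assignment, with probability exp(-delta/temperature), while minimizing makespan. The runtimes, cooling schedule, and neighborhood move are made-up examples rather than the authors' settings.

      import math
      import random

      def makespan(assignment, times):
          load = {}
          for task, vm in enumerate(assignment):
              load[vm] = load.get(vm, 0.0) + times[task][vm]
          return max(load.values())

      def sa_refine(assignment, times, n_vms, temp=10.0, cooling=0.98, iters=2000, seed=0):
          """Simulated-annealing refinement of a task->VM mapping: a worse neighbor
          is accepted with probability exp(-delta/temp), as in the SA step that the
          hybrid adds on top of the population-based search."""
          rng = random.Random(seed)
          current = list(assignment)
          cur_cost = makespan(current, times)
          best, best_cost = list(current), cur_cost
          for _ in range(iters):
              cand = list(current)
              cand[rng.randrange(len(cand))] = rng.randrange(n_vms)   # move one task
              cost = makespan(cand, times)
              delta = cost - cur_cost
              if delta <= 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
                  current, cur_cost = cand, cost
                  if cur_cost < best_cost:
                      best, best_cost = list(current), cur_cost
              temp *= cooling
          return best, best_cost

      random.seed(1)
      times = [[random.uniform(2, 20) for _ in range(3)] for _ in range(12)]   # task x VM runtimes
      start = [random.randrange(3) for _ in range(12)]
      print("start makespan  :", round(makespan(start, times), 2))
      print("refined makespan:", round(sa_refine(start, times, 3)[1], 2))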

  9. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic nature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment using a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed that takes into account the utilization level of virtual machines (VMs), which reduced the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  10. Heimdall System for MSSS Sensor Tasking

    NASA Astrophysics Data System (ADS)

    Herz, A.; Jones, B.; Herz, E.; George, D.; Axelrad, P.; Gehly, S.

    In Norse Mythology, Heimdall uses his foreknowledge and keen eyesight to keep watch for disaster from his home near the Rainbow Bridge. Orbit Logic and the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado (CU) have developed the Heimdall System to schedule observations of known and uncharacterized objects and search for new objects from the Maui Space Surveillance Site. Heimdall addresses the current need for automated and optimized SSA sensor tasking driven by factors associated with improved space object catalog maintenance. Orbit Logic and CU developed an initial baseline prototype SSA sensor tasking capability for select sensors at the Maui Space Surveillance Site (MSSS) using STK and STK Scheduler, and then added a new Track Prioritization Component for FiSST-inspired computations for predicted Information Gain and Probability of Detection, and a new SSA-specific Figure-of-Merit (FOM) for optimized SSA sensor tasking. While the baseline prototype addresses automation and some of the multi-sensor tasking optimization, the SSA-improved prototype addresses all of the key elements required for improved tasking leading to enhanced object catalog maintenance. The Heimdall proof-of-concept was demonstrated for MSSS SSA sensor tasking for a 24 hour period to attempt observations of all operational satellites in the unclassified NORAD catalog, observe a small set of high priority GEO targets every 30 minutes, make a sky survey of the GEO belt region accessible to MSSS sensors, and observe particular GEO regions that have a high probability of finding new objects with any excess sensor time. This Heimdall prototype software paves the way for further R&D that will integrate this technology into the MSSS systems for operational scheduling, improve the software's scalability, and further tune and enhance schedule optimization. The Heimdall software for SSA sensor tasking provides greatly improved performance over manual tasking, improved coordinated sensor usage, and tasking schedules driven by catalog improvement goals (reduced overall covariance, etc.). The improved performance also enables more responsive sensor tasking to address external events, newly detected objects, newly detected object activity, and sensor anomalies. Instead of having to wait until the next day's scheduling phase, events can be addressed with new tasking schedules immediately (within seconds or minutes). Perhaps the most important benefit is improved SSA based on an overall improvement to the quality of the space catalog. By driving sensor tasking and scheduling based on predicted Information Gain and other relevant factors, better decisions are made in the application of available sensor resources, leading to an improved catalog and better information about the objects of most interest. The Heimdall software solution provides a configurable, automated system to improve sensor tasking efficiency and responsiveness for SSA applications. The FISST algorithms for Track Prioritization, SSA specific task and resource attributes, Scheduler algorithms, and configurable SSA-specific Figure-of-Merit together provide optimized and tunable scheduling for the Maui Space Surveillance Site and possibly other sites and organizations across the U.S. military and for allies around the world.

  11. Effect of feedback mode and task difficulty on quality of timing decisions in a zero-sum game.

    PubMed

    Tikuisis, Peter; Vartanian, Oshin; Mandel, David R

    2014-09-01

    The objective was to investigate how the mode of performance outcome feedback and task difficulty interact to affect timing decisions (i.e., when to act). Feedback is widely acknowledged to affect task performance. However, the extent to which the impact of feedback display mode on timing decisions is moderated by task difficulty remains largely unknown. Participants repeatedly engaged in a zero-sum game involving silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three different levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome. When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than from binary outcome feedback. However, better performance with hit rate feedback exacted a higher cognitive cost, manifested in longer decision response times. The beneficial effect of hit rate probability feedback on timing decisions is partially moderated by task difficulty. Performance feedback mode should be judiciously chosen in relation to task difficulty for optimal performance in tasks involving timing decisions.

  12. Differences in Multitask Resource Reallocation After Change in Task Values.

    PubMed

    Matton, Nadine; Paubel, Pierre; Cegarra, Julien; Raufaste, Eric

    2016-12-01

    The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy. We adopted a multitask environment used for pilot selection with a change in emphasis on two out of four subtasks while all subtasks had to be maintained over a minimum performance. A laboratory eye-tracking study contrasted 20 recently selected pilot students considered as experienced with this task and 15 university students considered as novices. When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks falling below minimum performance. On the contrary, experienced people delayed the processing of one low-value subtask but managed to optimize global performance. In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. © 2016, Human Factors and Ergonomics Society.

  13. Smartphone form factors: Effects of width and bottom bezel on touch performance, workload, and physical demand.

    PubMed

    Lee, Seul Chan; Cha, Min Chul; Hwangbo, Hwan; Mo, Sookhee; Ji, Yong Gu

    2018-02-01

    This study aimed at investigating the effect of two smartphone form factors (width and bottom bezel) on touch behaviors with one-handed interaction. User experiments on tapping tasks were conducted for four widths (67, 70, 72, and 74 mm) and five bottom bezel levels (2.5, 5, 7.5, 10, and 12.5 mm). Task performance, electromyography, and subjective workload data were collected to examine the touch behavior. The success rate and task completion time were collected as task performance measures. The NASA-TLX method was used to observe the subjective workload. The electromyogram signals of two thumb muscles, namely the first dorsal interosseous and abductor pollicis brevis, were observed. The task performances deteriorated with increasing width level. The subjective workload and electromyography data showed similar patterns with the task performances. The task performances of the bottom bezel devices were analyzed by using three different evaluation criteria. The results from these criteria indicated that tasks became increasingly difficult as the bottom bezel level decreased. The results of this study provide insights into the optimal range of smartphone form factors for one-handed interaction, which could contribute to the design of new smartphones. Copyright © 2017. Published by Elsevier Ltd.

  14. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    PubMed

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator that are independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and higher auditory weights) in the attentional condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
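    The MLE model referenced above has a standard closed form: each cue is weighted by its relative reliability (inverse variance), and the combined estimate has lower variance than either cue alone. The notation below is the conventional one rather than the authors' own.

    ```latex
    \hat{S}_{AT} = w_A \hat{S}_A + w_T \hat{S}_T,
    \qquad
    w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_T^2},
    \quad
    w_T = \frac{1/\sigma_T^2}{1/\sigma_A^2 + 1/\sigma_T^2},
    \qquad
    \sigma_{AT}^2 = \frac{\sigma_A^2\,\sigma_T^2}{\sigma_A^2 + \sigma_T^2}
    ```

    On this account, attending to sound would reduce the auditory variance and therefore raise the auditory weight, which is the pattern the study reports.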

  16. The Motivation-Cognition Interface in Learning and Decision-Making.

    PubMed

    Maddox, W Todd; Markman, Arthur B

    2010-04-01

    In this article we discuss how incentive motivations and task demands affect performance. We present a three-factor framework that suggests that performance is determined from the interaction of global incentives, local incentives, and the psychological processes needed to achieve optimal task performance. We review work that examines the implications of the motivation-cognition interface in classification, choice and on phenomena such as stereotype threat and performance pressure. We show that under some conditions stereotype threat and pressure accentuate performance. We discuss the implications of this work for neuropsychological assessment, and outline a number of challenges for future research.

  17. Mission-based Scenario Research: Experimental Design And Analysis

    DTIC Science & Technology

    2012-01-01

    neurotechnologies called Brain-Computer Interaction Technologies. Subject terms: neuroimaging, EEG, task loading, neurotechnologies, ground... Imagine a system that can identify operator fatigue during a long-term... (BCIT), a class of neurotechnologies that aim to improve task performance by incorporating measures of brain activity to optimize the interactions...

  18. Adaptive optimal training of animal behavior

    NASA Astrophysics Data System (ADS)

    Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan

    Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.

  19. Burnout and job performance: the moderating role of selection, optimization, and compensation strategies.

    PubMed

    Demerouti, Evangelia; Bakker, Arnold B; Leiter, Michael

    2014-01-01

    The present study aims to explain why research thus far has found only low to moderate associations between burnout and performance. We argue that employees use adaptive strategies that help them to maintain their performance (i.e., task performance, adaptivity to change) at acceptable levels despite experiencing burnout (i.e., exhaustion, disengagement). We focus on the strategies included in the selective optimization with compensation model. Using a sample of 294 employees and their supervisors, we found that compensation is the most successful strategy in buffering the negative associations of disengagement with supervisor-rated task performance and both disengagement and exhaustion with supervisor-rated adaptivity to change. In contrast, selection exacerbates the negative relationship of exhaustion with supervisor-rated adaptivity to change. In total, 42% of the hypothesized interactions proved to be significant. Our study uncovers successful and unsuccessful strategies that people use to deal with their burnout symptoms in order to achieve satisfactory job performance. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  20. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system achieve its goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors, and the associated sensors, that take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. In particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243

  1. Optimal SSN Tasking to Enhance Real-time Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.

    2016-09-01

    Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator configured, state information-gain metrics to determine observation priorities. This improves the efficiency of sensors to target objects as awareness changes and new information is needed, not at predefined frequencies solely. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally-relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.

  2. The atomic simulation environment-a Python library for working with atoms.

    PubMed

    Hjorth Larsen, Ask; Jørgen Mortensen, Jens; Blomqvist, Jakob; Castelli, Ivano E; Christensen, Rune; Dułak, Marcin; Friis, Jesper; Groves, Michael N; Hammer, Bjørk; Hargus, Cory; Hermes, Eric D; Jennings, Paul C; Bjerre Jensen, Peter; Kermode, James; Kitchin, John R; Leonhard Kolsbjerg, Esben; Kubal, Joseph; Kaasbjerg, Kristen; Lysgaard, Steen; Bergmann Maronsson, Jón; Maxson, Tristan; Olsen, Thomas; Pastewka, Lars; Peterson, Andrew; Rostgaard, Carsten; Schiøtz, Jakob; Schütt, Ole; Strange, Mikkel; Thygesen, Kristian S; Vegge, Tejs; Vilhelmsen, Lasse; Walter, Michael; Zeng, Zhenhua; Jacobsen, Karsten W

    2017-07-12

    The atomic simulation environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library make it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple 'for-loop' construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.
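    A brief example of the scripting style described above, using ASE's built-in EMT calculator; the element and lattice constants are chosen only for illustration.

    ```python
    from ase.build import bulk
    from ase.calculators.emt import EMT
    from ase.optimize import BFGS

    # 'for-loop' style: chain several single-point calculations over lattice constants
    for a in (3.55, 3.60, 3.65):
        atoms = bulk('Cu', 'fcc', a=a)
        atoms.calc = EMT()                        # uniform calculator interface
        print(a, atoms.get_potential_energy())

    # Structure optimization through one of ASE's standard task modules
    atoms = bulk('Cu', 'fcc', a=3.6, cubic=True).repeat((2, 2, 2))
    atoms.rattle(stdev=0.05)                      # perturb atomic positions slightly
    atoms.calc = EMT()
    BFGS(atoms, logfile=None).run(fmax=0.02)      # relax until forces fall below 0.02 eV/Angstrom
    print('relaxed energy:', atoms.get_potential_energy())
    ```

    The same script would run unchanged with a different calculator attached, which is the point of the uniform calculator interface.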

  3. The atomic simulation environment—a Python library for working with atoms

    NASA Astrophysics Data System (ADS)

    Hjorth Larsen, Ask; Jørgen Mortensen, Jens; Blomqvist, Jakob; Castelli, Ivano E.; Christensen, Rune; Dułak, Marcin; Friis, Jesper; Groves, Michael N.; Hammer, Bjørk; Hargus, Cory; Hermes, Eric D.; Jennings, Paul C.; Bjerre Jensen, Peter; Kermode, James; Kitchin, John R.; Leonhard Kolsbjerg, Esben; Kubal, Joseph; Kaasbjerg, Kristen; Lysgaard, Steen; Bergmann Maronsson, Jón; Maxson, Tristan; Olsen, Thomas; Pastewka, Lars; Peterson, Andrew; Rostgaard, Carsten; Schiøtz, Jakob; Schütt, Ole; Strange, Mikkel; Thygesen, Kristian S.; Vegge, Tejs; Vilhelmsen, Lasse; Walter, Michael; Zeng, Zhenhua; Jacobsen, Karsten W.

    2017-07-01

    The atomic simulation environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library makes it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple ‘for-loop’ construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.

  4. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis.

    PubMed

    Tashkova, Katerina; Korošec, Peter; Silc, Jurij; Todorovski, Ljupčo; Džeroski, Sašo

    2011-10-11

    We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology.
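    A self-contained sketch of the general recipe (not the authors' Rab5/Rab7 model): simulate a candidate parameter vector with an ODE solver, score it against noisy measurements, and let a global meta-heuristic, here SciPy's differential evolution, search the parameter space. The toy two-state model and all constants are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def model(t, y, k1, k2):
        """Toy two-state dynamics standing in for an ODE model of a biochemical switch."""
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]

    def simulate(params, t_eval):
        sol = solve_ivp(model, (t_eval[0], t_eval[-1]), [1.0, 0.0],
                        args=tuple(params), t_eval=t_eval)
        return sol.y

    # Synthetic 'measurements': trajectories from known parameters plus noise
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0, 10, 25)
    true_params = (0.8, 0.3)
    data = simulate(true_params, t_obs) + rng.normal(0, 0.02, (2, t_obs.size))

    def objective(params):
        """Sum-of-squares mismatch between simulated and measured trajectories."""
        return np.sum((simulate(params, t_obs) - data) ** 2)

    result = differential_evolution(objective, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=1)
    print("estimated parameters:", result.x)
    ```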

  5. Parameter estimation with bio-inspired meta-heuristic optimization: modeling the dynamics of endocytosis

    PubMed Central

    2011-01-01

    Background We address the task of parameter estimation in models of the dynamics of biological systems based on ordinary differential equations (ODEs) from measured data, where the models are typically non-linear and have many parameters, the measurements are imperfect due to noise, and the studied system can often be only partially observed. A representative task is to estimate the parameters in a model of the dynamics of endocytosis, i.e., endosome maturation, reflected in a cut-out switch transition between the Rab5 and Rab7 domain protein concentrations, from experimental measurements of these concentrations. The general parameter estimation task and the specific instance considered here are challenging optimization problems, calling for the use of advanced meta-heuristic optimization methods, such as evolutionary or swarm-based methods. Results We apply three global-search meta-heuristic algorithms for numerical optimization, i.e., differential ant-stigmergy algorithm (DASA), particle-swarm optimization (PSO), and differential evolution (DE), as well as a local-search derivative-based algorithm 717 (A717) to the task of estimating parameters in ODEs. We evaluate their performance on the considered representative task along a number of metrics, including the quality of reconstructing the system output and the complete dynamics, as well as the speed of convergence, both on real-experimental data and on artificial pseudo-experimental data with varying amounts of noise. We compare the four optimization methods under a range of observation scenarios, where data of different completeness and accuracy of interpretation are given as input. Conclusions Overall, the global meta-heuristic methods (DASA, PSO, and DE) clearly and significantly outperform the local derivative-based method (A717). Among the three meta-heuristics, differential evolution (DE) performs best in terms of the objective function, i.e., reconstructing the output, and in terms of convergence. These results hold for both real and artificial data, for all observability scenarios considered, and for all amounts of noise added to the artificial data. In sum, the meta-heuristic methods considered are suitable for estimating the parameters in the ODE model of the dynamics of endocytosis under a range of conditions: With the model and conditions being representative of parameter estimation tasks in ODE models of biochemical systems, our results clearly highlight the promise of bio-inspired meta-heuristic methods for parameter estimation in dynamic system models within system biology. PMID:21989196

  6. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing amount of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks for the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and the simulation results demonstrate the effectiveness of the proposed algorithm.
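    A generic PSO kernel of the kind such a scheduler builds on is sketched below; it is not the paper's surveillance-specific formulation, and the simple quadratic objective merely stands in for a scheduling figure of merit over encoded observation priorities.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization) on the unit hypercube."""
        rng = np.random.default_rng(seed)
        x = rng.random((n_particles, dim))           # positions
        v = np.zeros((n_particles, dim))             # velocities
        pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pbest_val)].copy()       # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, 0.0, 1.0)
            vals = np.apply_along_axis(objective, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    # Positions could encode per-object observation priorities; a quadratic bowl
    # stands in here for the real scheduling figure of merit.
    best, val = pso(lambda z: float(np.sum((z - 0.3) ** 2)), dim=5)
    print(best, val)
    ```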

  7. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps

    PubMed Central

    Bowen, Chris; Ye, Gu; Alterovitz, Ron

    2015-01-01

    In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future. PMID:26279642
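    The Mahalanobis-distance cost mentioned above has the standard form. Writing phi(q) for the feature vector of a configuration q (parameterized by the locations of task-relevant objects) and mu_t, Sigma_t for the mean and covariance of the demonstrations at time t along the trajectory, a reasonable reading of the per-step cost is

    ```latex
    c_t(q) = \sqrt{\bigl(\phi(q) - \mu_t\bigr)^{\top} \Sigma_t^{-1} \bigl(\phi(q) - \mu_t\bigr)}
    ```

    so low-cost motions stay close to the demonstrated feature distribution where the demonstrations agree, while directions in which the demonstrations vary widely are penalized only weakly. The notation here is ours, not necessarily the authors'.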

  8. Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.

    PubMed

    Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane

    2017-07-01

    This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematic data only. The proposed method also allows identifying the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. The problem is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted, and a sensitivity analysis is performed, in order to robustly reproduce the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS errors lower than 20 N and 6 N·m). The simplicity and generalization ability of the proposed method pave the way for future diagnostic solutions and rehabilitation applications, including in-home use.
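    A minimal sketch of the constrained quadratic optimization step, assuming (for illustration only) that a single scalar total vertical force is split across three contacts and that a weighted quadratic cost stands in for the paper's hybrid human-like criterion:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    total_fz = 700.0                        # total vertical external force (N), e.g. body weight
    w = np.array([1.0, 2.0, 4.0])           # illustrative penalty weights: feet, buttocks, hands

    def cost(f):
        """Weighted quadratic cost on contact forces (stand-in for the hybrid criterion)."""
        return float(np.sum(w * f ** 2))

    constraints = [{"type": "eq", "fun": lambda f: np.sum(f) - total_fz}]  # forces sum to the total
    bounds = [(0.0, None)] * 3              # unilateral contacts: forces push, never pull

    res = minimize(cost, x0=np.full(3, total_fz / 3), bounds=bounds,
                   constraints=constraints, method="SLSQP")
    print("feet, buttocks, hands (N):", np.round(res.x, 1))
    ```

    With this cost the optimum distributes more force to contacts with smaller weights, which is the basic mechanism a tuned hybrid criterion exploits to match observed human distributions.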

  9. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.

  10. Software Would Largely Automate Design of Kalman Filter

    NASA Technical Reports Server (NTRS)

    Chuang, Jason C. H.; Negast, William J.

    2005-01-01

    Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are the selection of error states of the filter and the tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
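    The two nested loops described above can be sketched as follows; the candidate error states, the tuning parameter, and the Monte Carlo scoring function are hypothetical placeholders, since ENFAD's actual models are not described here.

    ```python
    import itertools
    import random

    CANDIDATE_STATES = ["gyro_bias", "accel_bias", "clock_drift", "scale_factor"]

    def monte_carlo_score(error_states, tuning, n_runs=20):
        """Placeholder for the Monte Carlo filter simulation; returns a lower-is-better
        error metric. A real tool would propagate vehicle and sensor models here."""
        random.seed(hash((error_states, tuning)) & 0xFFFF)   # reproducible dummy score
        return random.uniform(0, 10) / (len(error_states) + tuning)

    best = None
    for k in range(1, len(CANDIDATE_STATES) + 1):             # outer loop: error-state selection
        for subset in itertools.combinations(CANDIDATE_STATES, k):
            for q in (0.1, 0.5, 1.0, 2.0):                    # inner loop: filter tuning parameter
                score = monte_carlo_score(subset, q)
                if best is None or score < best[0]:
                    best = (score, subset, q)

    print("best score %.3f with states %s and tuning %.1f" % best)
    ```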

  11. Efficient Symbolic Task Planning for Multiple Mobile Robots

    DTIC Science & Technology

    2016-12-13

    Symbolic task planning enables a robot to make... high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of...

  12. The effects of culture and cohesiveness on intragroup conflict and effectiveness.

    PubMed

    Nibler, Roger; Harris, Karen L

    2003-10-01

    To investigate the influence of culture and cohesiveness on intragroup conflict and effectiveness, the authors made comparisons among groups of U.S. friends and strangers and among groups of Chinese friends and strangers. Groups consisting of 5 members of the same culture engaged in a decision-making task. Among U.S. participants, task conflict and performance results tended to vary together. U.S. strangers reported little task conflict (disagreements about fact or opinion) and performed relatively poorly, whereas U.S. friends' performances benefited from an uninhibited exchange of individual ideas and opinions. In contrast, Chinese participants reported uniformly high levels of intragroup conflict and experienced relatively low performance. The results suggest that a task conflict advantage, with which group members feel comfortable enough to freely express and exchange opinions and disagree with each other to achieve optimal outcomes, might be culture specific.

  13. Investigating the feasibility of using partial least squares as a method of extracting salient information for the evaluation of digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Zhang, George Z.; Myers, Kyle J.; Park, Subok

    2013-03-01

    Digital breast tomosynthesis (DBT) has shown promise for improving the detection of breast cancer, but it has not yet been fully optimized due to a large space of system parameters to explore. A task-based statistical approach [1] is a rigorous method for evaluating and optimizing this promising imaging technique with the use of optimal observers such as the Hotelling observer (HO). However, the high data dimensionality found in DBT has been the bottleneck for the use of a task-based approach in DBT evaluation. To reduce data dimensionality while extracting salient information for performing a given task, efficient channels have to be used for the HO. In the past few years, 2D Laguerre-Gauss (LG) channels, which are a complete basis for stationary backgrounds and rotationally symmetric signals, have been utilized for DBT evaluation [2, 3]. But since background and signal statistics from DBT data are neither stationary nor rotationally symmetric, LG channels may not be efficient in providing reliable performance trends as a function of system parameters. Recently, partial least squares (PLS) has been shown to generate efficient channels for the Hotelling observer in detection tasks involving random backgrounds and signals [4]. In this study, we investigate the use of PLS as a method for extracting salient information from DBT in order to better evaluate such systems.

  14. Collimator optimization in myocardial perfusion SPECT using the ideal observer and realistic background variability for lesion detection and joint detection and localization tasks

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Du, Yong; Links, Jonathan M.; Frey, Eric C.

    2016-03-01

    In SPECT imaging, collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. In this paper, we seek the collimator with the optimal tradeoff between image noise and resolution with respect to performance on two tasks related to myocardial perfusion SPECT: perfusion defect detection and joint detection and localization. We used the Ideal Observer (IO) operating on realistic background-known-statistically (BKS) and signal-known-exactly (SKE) data. The areas under the receiver operating characteristic (ROC) and localization ROC (LROC) curves (AUCd, AUCd+l), respectively, were used as the figures of merit for both tasks. We used a previously developed population of 54 phantoms based on the eXtended Cardiac Torso Phantom (XCAT) that included variations in gender, body size, heart size and subcutaneous adipose tissue level. For each phantom, organ uptakes were varied randomly based on distributions observed in patient data. We simulated perfusion defects at six different locations with extents and severities of 10% and 25%, respectively, which represented challenging but clinically relevant defects. The extent and severity are, respectively, the perfusion defect’s fraction of the myocardial volume and reduction of uptake relative to the normal myocardium. Projection data were generated using an analytical projector that modeled attenuation, scatter, and collimator-detector response effects, a 9% energy resolution at 140 keV, and a 4 mm full-width at half maximum (FWHM) intrinsic spatial resolution. We investigated a family of eight parallel-hole collimators that spanned a large range of sensitivity-resolution tradeoffs. For each collimator and defect location, the IO test statistics were computed using a Markov Chain Monte Carlo (MCMC) method for an ensemble of 540 pairs of defect-present and -absent images that included the aforementioned anatomical and uptake variability. Sets of test statistics were computed for both tasks and analyzed using ROC and LROC analysis methodologies. The results of this study suggest that collimators with somewhat poorer resolution and higher sensitivity than those of a typical low-energy high-resolution (LEHR) collimator were optimal for both defect detection and joint detection and localization tasks in myocardial perfusion SPECT for the range of defect sizes investigated. This study also indicates that optimizing instrumentation for a detection task may provide near-optimal performance on the more challenging detection-localization task.

  15. Learning the ideal observer for SKE detection tasks by use of convolutional neural networks (Cum Laude Poster Award)

    NASA Astrophysics Data System (ADS)

    Zhou, Weimin; Anastasio, Mark A.

    2018-03-01

    It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
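    A compact sketch of the supervised-learning idea, with an illustrative architecture and synthetic SKE/BKE-like data rather than anything from the paper: a small CNN is trained with binary cross-entropy on signal-present versus signal-absent images, and its output is then used as the learned test statistic for ROC analysis.

    ```python
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Tiny CNN whose scalar output plays the role of an observer test statistic."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(16 * 8 * 8, 1)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Illustrative data: Gaussian noise backgrounds, plus a faint central blob when label == 1
    torch.manual_seed(0)
    n, size = 512, 32
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    signal = torch.exp(-(((xx - 16) ** 2 + (yy - 16) ** 2) / 20.0)).float()
    labels = torch.randint(0, 2, (n,)).float()
    images = torch.randn(n, 1, size, size) + labels.view(-1, 1, 1, 1) * signal

    model = SmallCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for step in range(10):
        opt.zero_grad()
        loss = loss_fn(model(images).squeeze(1), labels)
        loss.backward()
        opt.step()

    # The trained network's output is the learned test statistic for ROC/AUC analysis
    with torch.no_grad():
        test_statistic = model(images).squeeze(1)
    ```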

  16. Effort in Multitasking: Local and Global Assessment of Effort.

    PubMed

    Kiesel, Andrea; Dignath, David

    2017-01-01

    When performing multiple tasks in succession, self-organization of task order might be superior to externally controlled task schedules, because self-organization allows optimizing processing modes and thus reduces switch costs, and it increases commitment to task goals. However, self-organization is an additional executive control process that is not required if task order is externally specified, and as such it is considered time-consuming and effortful. To compare self-organized and externally controlled task scheduling, we suggest assessing global subjective and objective measures of effort in addition to local performance measures. In our new experimental approach, we combined characteristics of dual-tasking settings and task-switching settings and compared local and global measures of effort in a condition with free choice of task sequence and a condition with cued task sequence. In a multitasking environment, participants chose the task order while the task requirement of the not-yet-performed task remained the same. This task preview allowed participants to work on the previously non-chosen items in parallel and resulted in faster responses and fewer errors in task-switch trials than in task-repetition trials. The free-choice group profited more from this task preview than the cued group when considering local performance measures. Nevertheless, the free-choice group invested more effort than the cued group when considering global measures. Thus, self-organization in task scheduling seems to be effortful even in conditions in which it is beneficial for task processing. In a second experiment, we reduced the possibility of task preview for the not-yet-performed tasks in order to hinder efficient self-organization. Here neither local nor global measures revealed substantial differences between the free-choice and the cued task sequence conditions. Based on the results of both experiments, we suggest that global assessment of effort in addition to local performance measures might be a useful tool for multitasking research.

  17. Modifications to Optimize the AH-1Z Human Machine Interface

    DTIC Science & Technology

    2013-04-18

    accomplish this, a complete workload study of tasks performed by aircrew in the AH-1Z must be completed in the near future in order to understand... design flaws and guide future design and integration of increased capability. Additionally, employment of material solutions to provide aircrew with the...

  18. Decision-Related Activity in Macaque V2 for Fine Disparity Discrimination Is Not Compatible with Optimal Linear Readout.

    PubMed

    Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje

    2017-01-18

    Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. Copyright © 2017 the authors 0270-6474/17/370715-11$15.00/0.

  19. DQE and system optimization for indirect-detection flat-panel imagers in diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Siewerdsen, Jeffrey H.; Antonuk, Larry E.

    1998-07-01

    The performance of indirect-detection flat-panel imagers incorporating CsI:Tl x-ray converters is examined through calculation of the detective quantum efficiency (DQE) under conditions of chest radiography, fluoroscopy, and mammography. Calculations are based upon a cascaded systems model which has demonstrated excellent agreement with empirical signal, noise- power spectra, and DQE results. For each application, the DQE is calculated as a function of spatial-frequency and CsI:Tl thickness. A preliminary investigation into the optimization of flat-panel imaging systems is described, wherein the x-ray converter thickness which provides optimal DQE for a given imaging task is estimated. For each application, a number of example tasks involving detection of an object of variable size and contrast against a noisy background are considered. The method described is fairly general and can be extended to account for a variety of imaging tasks. For the specific examples considered, the preliminary results estimate optimal CsI:Tl thicknesses of approximately 450 micrometer (approximately 200 mg/cm2), approximately 320 micrometer (approximately 140 mg/cm2), and approximately 200 micrometer (approximately 90 mg/cm2) for chest radiography, fluoroscopy, and mammography, respectively. These results are expected to depend upon the imaging task as well as upon the quality of available CsI:Tl, and future improvements in scintillator fabrication could result in increased optimal thickness and DQE.

  20. Shape Optimization of Supersonic Turbines Using Response Surface and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Shyy, Wei; Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. A global optimization framework combining the radial basis neural network (RBNN) and the polynomial-based response surface method (RSM) is constructed for shape optimization of a supersonic turbine. Based on the optimized preliminary design, shape optimization is performed for the first vane and blade of a 2-stage supersonic turbine, involving O(10) design variables. The design of experiment approach is adopted to reduce the data size needed by the optimization task. It is demonstrated that a major merit of the global optimization approach is that it enables one to adaptively revise the design space to perform multiple optimization cycles. This benefit is realized when an optimal design approaches the boundary of a pre-defined design space. Furthermore, by inspecting the influence of each design variable, one can also gain insight into the existence of multiple design choices and select the optimum design based on other factors such as stress and materials considerations.
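    A minimal illustration of the response-surface ingredient, with two design variables instead of O(10), random sampling instead of a design-of-experiments plan, and a quadratic polynomial instead of an RBNN: fit a cheap surrogate to a handful of expensive evaluations, then search the surrogate.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_objective(x):
        """Stand-in for an expensive turbine-performance evaluation (e.g. CFD)."""
        return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.6) ** 2 + 0.1 * x[0] * x[1]

    # Sample the design space (a DOE plan would normally choose these points)
    X = rng.random((30, 2))
    y = np.array([expensive_objective(x) for x in X])

    # Quadratic response surface: basis [1, x1, x2, x1^2, x2^2, x1*x2]
    def basis(X):
        return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                                X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

    # Cheap surrogate search over a dense grid of the (possibly revised) design space
    g = np.stack(np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101)), -1).reshape(-1, 2)
    pred = basis(g) @ coef
    print("surrogate optimum near:", g[np.argmin(pred)])
    ```

    In the multi-cycle approach the abstract describes, one would inspect whether this optimum sits on the boundary of the sampled region and, if so, shift the design space and repeat.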

  1. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    PubMed

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
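    The paper's analytical model is not reproduced here, but the behaviour it describes, acceleration approaching the number of GPUs when computation dominates memory operations, is captured by a generic compute/communication split of the form

    ```latex
    T(N) \approx \frac{T_{\mathrm{compute}}}{N} + T_{\mathrm{comm}},
    \qquad
    S(N) = \frac{T(1)}{T(N)}
         = \frac{T_{\mathrm{compute}} + T_{\mathrm{comm}}}{T_{\mathrm{compute}}/N + T_{\mathrm{comm}}}
         \;\longrightarrow\; N
    \quad \text{when } T_{\mathrm{comm}} \ll T_{\mathrm{compute}}/N ,
    ```

    where N is the number of GPUs. This is a standard scaling argument offered for orientation, not the authors' specific model.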

  2. Masking release for words in amplitude-modulated noise as a function of modulation rate and task

    PubMed Central

    Buss, Emily; Whittle, Lisa N.; Grose, John H.; Hall, Joseph W.

    2009-01-01

    For normal-hearing listeners, masked speech recognition can improve with the introduction of masker amplitude modulation. The present experiments tested the hypothesis that this masking release is due in part to an interaction between the temporal distribution of cues necessary to perform the task and the probability of those cues temporally coinciding with masker modulation minima. Stimuli were monosyllabic words masked by speech-shaped noise, and masker modulation was introduced via multiplication with a raised sinusoid of 2.5–40 Hz. Tasks included detection, three-alternative forced-choice identification, and open-set identification. Overall, there was more masking release associated with the closed than the open-set tasks. The best rate of modulation also differed as a function of task; whereas low modulation rates were associated with best performance for the detection and three-alternative identification tasks, performance improved with modulation rate in the open-set task. This task-by-rate interaction was also observed when amplitude-modulated speech was presented in a steady masker, and for low- and high-pass filtered speech presented in modulated noise. These results were interpreted as showing that the optimal rate of amplitude modulation depends on the temporal distribution of speech cues and the information required to perform a particular task. PMID:19603883

  3. The adaptive nature of eye movements in linguistic tasks: how payoff and architecture shape speed-accuracy trade-offs.

    PubMed

    Lewis, Richard L; Shvartsman, Michael; Singh, Satinder

    2013-07-01

    We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.

  4. Impact of topographic mask models on scanner matching solutions

    NASA Astrophysics Data System (ADS)

    Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.

    2014-03-01

    Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions, and as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, which simplifies the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict the image formation in the hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute time tradeoffs. Additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly non-linear response to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on first-principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used large sets of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin mask models vs. the topographic OPEM solutions. We present various examples of scanner image matching for patterns representative of the current generation of IC designs.

  5. Effects of Cognitive Interventions on Sports Anxiety and Performance.

    ERIC Educational Resources Information Center

    Murphy, Shane M.; Woolfolk, Robert L.

    Oxendine (1970) hypothesized that the arousal-performance relationship varies across tasks, such that gross motor activities will require high arousal for optimal performance while fine motor activities will be facilitated by low arousal, but adversely affected by high arousal. Although the effects of preparatory arousal on strength performance…

  6. A Model for Determining Optimal Governance Structure in DoD Acquisition Projects in a Performance-Based Environment

    DTIC Science & Technology

    2011-03-09

    task stability, technology application certainty, risk, and transaction-specific investments impact the selection of the optimal mode of governance. Our model views...U.S. Defense Industry. The 1990s were a perfect storm of technological change, consolidation, budget downturns, environmental uncertainty, and the

  7. Skylab task and work performance /Experiment M-151 - Time and motion study/

    NASA Technical Reports Server (NTRS)

    Kubis, J. F.; Mclaughlin, E. J.

    1975-01-01

    The primary objective of Experiment M151 was to study the inflight adaptation of Skylab crewmen to a variety of task situations involving different types of activity. A parallel objective was to examine astronaut inflight performance for any behavioral stress effects associated with the working and living conditions of the Skylab environment. Training data provided the basis for comparison of preflight and inflight performance. Efficiency was evaluated through the adaptation function, namely, the relation of performance time over task trials. The results indicate that the initial changeover from preflight to inflight was accompanied by a substantial increase in performance time for most work and task activities. Equally important was the finding that crewmen adjusted rapidly to the weightless environment and became proficient in developing techniques with which to optimize task performance. By the end of the second inflight trial, most of the activities were performed almost as efficiently as on the last preflight trial. The analysis demonstrated the sensitivity of the adaptation function to differences in task and hardware configurations. The function was found to be more regular and less variable inflight than preflight. Translation and control of masses were accomplished easily and efficiently through the rapid development of the arms and legs as subtle guidance and restraint systems.

  8. Geometry and gravity influences on strength capability

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Wilmington, Robert P.; Klute, Glenn K.

    1994-01-01

    Strength, defined as the capability of an individual to produce an external force, is one of the most important determining characteristics of human performance. Knowledge of strength capabilities of a group of individuals can be applied to designing equipment and workplaces, planning procedures and tasks, and training individuals. In the manned space program, with the high risk and cost associated with spaceflight, information pertaining to human performance is important to ensuring mission success and safety. Knowledge of individual's strength capabilities in weightlessness is of interest within many areas of NASA, including workplace design, tool development, and mission planning. The weightless environment of space places the human body in a completely different context. Astronauts perform a variety of manual tasks while in orbit. Their ability to perform these tasks is partly determined by their strength capability as demanded by that particular task. Thus, an important step in task planning, development, and evaluation is to determine the ability of the humans performing it. This can be accomplished by utilizing quantitative techniques to develop a database of human strength capabilities in weightlessness. Furthermore, if strength characteristics are known, equipment and tools can be built to optimize the operators' performance. This study examined strength in performing a simple task, specifically, using a tool to apply a torque to a fixture.

  9. Hyperbaric Oxygen Environment Can Enhance Brain Activity and Multitasking Performance

    PubMed Central

    Vadas, Dor; Kalichman, Leonid; Hadanny, Amir; Efrati, Shai

    2017-01-01

    Background: The brain uses 20% of the total oxygen supply consumed by the entire body. Even though <10% of the brain is active at any given time, it utilizes almost all the oxygen delivered. In order to perform complex tasks or more than one task (multitasking), the oxygen supply is shifted from one brain region to another, via blood perfusion modulation. The aim of the present study was to evaluate whether a hyperbaric oxygen (HBO) environment, with increased oxygen supply to the brain, will enhance the performance of complex and/or multiple activities. Methods: A prospective, double-blind, randomized, controlled crossover trial including 22 healthy volunteers. Participants were asked to perform a cognitive task, a motor task and a simultaneous cognitive-motor task (multitasking). Participants were randomized to perform the tasks in two environments: (a) normobaric air (1 ATA, 21% oxygen) and (b) HBO (2 ATA, 100% oxygen). Two weeks later participants were crossed to the alternative environment. Blinding of the normobaric environment was achieved in the same chamber with masks on, while hyperbaric sensation was simulated by increasing pressure in the first minute and gradually decreasing to the normobaric environment prior to task performance. Results: Compared to the performance at normobaric conditions, both cognitive and motor single-task scores were significantly enhanced in the HBO environment (p < 0.001 for both). Multitasking performance was also significantly enhanced in the HBO environment (p = 0.006 for the cognitive part and p = 0.02 for the motor part). Conclusions: The improvement in performance of both single and multi-tasking while in an HBO environment supports the hypothesis that oxygen is indeed a rate-limiting factor for brain activity. Hyperbaric oxygenation can serve as an environment for enhancing brain performance. Further studies are needed to evaluate the optimal oxygen levels for maximal brain performance. PMID:29021747

  10. Hyperbaric Oxygen Environment Can Enhance Brain Activity and Multitasking Performance.

    PubMed

    Vadas, Dor; Kalichman, Leonid; Hadanny, Amir; Efrati, Shai

    2017-01-01

    Background: The brain uses 20% of the total oxygen supply consumed by the entire body. Even though <10% of the brain is active at any given time, it utilizes almost all the oxygen delivered. In order to perform complex tasks or more than one task (multitasking), the oxygen supply is shifted from one brain region to another, via blood perfusion modulation. The aim of the present study was to evaluate whether a hyperbaric oxygen (HBO) environment, with increased oxygen supply to the brain, will enhance the performance of complex and/or multiple activities. Methods: A prospective, double-blind, randomized, controlled crossover trial including 22 healthy volunteers. Participants were asked to perform a cognitive task, a motor task and a simultaneous cognitive-motor task (multitasking). Participants were randomized to perform the tasks in two environments: (a) normobaric air (1 ATA, 21% oxygen) and (b) HBO (2 ATA, 100% oxygen). Two weeks later participants were crossed to the alternative environment. Blinding of the normobaric environment was achieved in the same chamber with masks on, while hyperbaric sensation was simulated by increasing pressure in the first minute and gradually decreasing to the normobaric environment prior to task performance. Results: Compared to the performance at normobaric conditions, both cognitive and motor single-task scores were significantly enhanced in the HBO environment (p < 0.001 for both). Multitasking performance was also significantly enhanced in the HBO environment (p = 0.006 for the cognitive part and p = 0.02 for the motor part). Conclusions: The improvement in performance of both single and multi-tasking while in an HBO environment supports the hypothesis that oxygen is indeed a rate-limiting factor for brain activity. Hyperbaric oxygenation can serve as an environment for enhancing brain performance. Further studies are needed to evaluate the optimal oxygen levels for maximal brain performance.

  11. Optimizing performance through intrinsic motivation and attention for learning: The OPTIMAL theory of motor learning.

    PubMed

    Wulf, Gabriele; Lewthwaite, Rebecca

    2016-10-01

    Effective motor performance is important for surviving and thriving, and skilled movement is critical in many activities. Much theorizing over the past few decades has focused on how certain practice conditions affect the processing of task-related information to affect learning. Yet, existing theoretical perspectives do not accommodate significant recent lines of evidence demonstrating motivational and attentional effects on performance and learning. These include research on (a) conditions that enhance expectancies for future performance, (b) variables that influence learners' autonomy, and (c) an external focus of attention on the intended movement effect. We propose the OPTIMAL (Optimizing Performance through Intrinsic Motivation and Attention for Learning) theory of motor learning. We suggest that motivational and attentional factors contribute to performance and learning by strengthening the coupling of goals to actions. We provide explanations for the performance and learning advantages of these variables on psychological and neuroscientific grounds. We describe a plausible mechanism for expectancy effects rooted in responses of dopamine to the anticipation of positive experience and temporally associated with skill practice. Learner autonomy acts perhaps largely through an enhanced expectancy pathway. Furthermore, we consider the influence of an external focus for the establishment of efficient functional connections across brain networks that subserve skilled movement. We speculate that enhanced expectancies and an external focus propel performers' cognitive and motor systems in productive "forward" directions and prevent "backsliding" into self- and non-task focused states. Expected success presumably breeds further success and helps consolidate memories. We discuss practical implications and future research directions.

  12. Shuttle/TDRSS Ku-band downlink study

    NASA Technical Reports Server (NTRS)

    Meyer, R.

    1976-01-01

    Assessing the adequacy of the baseline signal design approach, developing performance specifications for the return link hardware, and performing detailed design and parameter optimization tasks were accomplished by completing five specific study tasks. The results of these tasks show that the basic signal structure design is sound and that the goals can be met. Constraints placed on return link hardware by this structure allow reasonable specifications to be written so that no extreme technical risk areas in equipment design are foreseen. A third channel can be added to the PM mode without seriously degrading the other services. The feasibility of using only a PM mode was shown; however, this will require the use of some digital TV transmission techniques. Each task and its results are summarized.

  13. Motor planning flexibly optimizes performance under uncertainty about task goals.

    PubMed

    Wong, Aaron L; Haith, Adrian M

    2017-03-03

    In an environment full of potential goals, how does the brain determine which movement to execute? Existing theories posit that the motor system prepares for all potential goals by generating several motor plans in parallel. One major line of evidence for such theories is that presenting two competing goals often results in a movement intermediate between them. These intermediate movements are thought to reflect an unintentional averaging of the competing plans. However, normative theories suggest instead that intermediate movements might actually be deliberate, generated because they improve task performance over a random guessing strategy. To test this hypothesis, we vary the benefit of making an intermediate movement by changing movement speed. We find that participants generate intermediate movements only at (slower) speeds where they measurably improve performance. Our findings support the normative view that the motor system selects only a single, flexible motor plan, optimized for uncertain goals.

  14. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
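
    A minimal sketch of the Hotelling Observer figure of merit referred to above, using the standard form SNR^2 = dg' K^{-1} dg, where dg is the mean signal difference and K the average covariance estimated from samples; the data shapes and values are illustrative assumptions, not the study's reconstructions (Python):

      import numpy as np

      def hotelling_snr(signal_present, signal_absent):
          """Each input: array of shape (n_samples, n_pixels)."""
          dg = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
          k = 0.5 * (np.cov(signal_present, rowvar=False) +
                     np.cov(signal_absent, rowvar=False))
          template = np.linalg.solve(k, dg)        # Hotelling template w = K^{-1} dg
          return np.sqrt(dg @ template)

      rng = np.random.default_rng(0)
      absent = rng.normal(0.0, 1.0, size=(500, 64))     # toy signal-absent images
      present = rng.normal(0.3, 1.0, size=(500, 64))    # toy signal-present images
      print(hotelling_snr(present, absent))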

  15. Vitre-graf Coating on Mullite. Low Cost Silicon Array Project: Large Area Silicon Sheet Task

    NASA Technical Reports Server (NTRS)

    Rossi, R. C.

    1979-01-01

    The processing parameters of the Vitre-Graf coating for optimal performance and economy when applied to mullite and graphite as substrates were presented. A minor effort was also performed on slip-cast fused silica substrates.

  16. Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Chou, H.-C.; Ardema, M. D.; Bowles, J. V.

    1997-01-01

    A near-optimal guidance law for the descent trajectory for earth orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change; four classes of trajectories were investigated: no banking, optimal left turn banking, optimal right turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.
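
    The abstract notes that the cost function is in general a weighted sum of the performance goals; the sketch below shows that scalarization on made-up numbers, where the goal names, units, and weights are purely illustrative assumptions (Python):

      # Weighted-sum scalarization of competing re-entry goals; lower cost is better.
      def reentry_cost(metrics, weights):
          return sum(weights[name] * value for name, value in metrics.items())

      candidate = {"flight_time_s": 1800.0, "peak_temp_K": 1650.0,
                   "heat_load_MJ_m2": 320.0, "cross_range_km": 900.0}
      # A negative weight rewards cross range while the other goals are penalized.
      weights = {"flight_time_s": 0.001, "peak_temp_K": 0.01,
                 "heat_load_MJ_m2": 0.05, "cross_range_km": -0.002}
      print(reentry_cost(candidate, weights))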

  17. Modulating Reward Induces Differential Neurocognitive Approaches to Sustained Attention.

    PubMed

    Esterman, Michael; Poole, Victoria; Liu, Guanyu; DeGutis, Joseph

    2017-08-01

    Reward and motivation have powerful effects on cognition and brain activity, yet it remains unclear how they affect sustained cognitive performance. We have recently shown that a variety of motivators improve accuracy and reduce variability during sustained attention. In the current study, we investigate how neural activity in task-positive networks supports these sustained attention improvements. Participants performed the gradual-onset continuous performance task with alternating motivated (rewarded) and unmotivated (unrewarded) blocks. During motivated blocks, we observed increased sustained neural recruitment of task-positive regions, which interacted with fluctuations in task performance. Specifically, during motivated blocks, participants recruited these regions in preparation for upcoming targets, and this activation predicted accuracy. In contrast, during unmotivated blocks, no such advanced preparation was observed. Furthermore, during motivated blocks, participants had similar activation levels during both optimal (in-the-zone) and suboptimal (out-of-the-zone) epochs of performance. In contrast, during unmotivated blocks, task-positive regions were only engaged to a similar degree as motivated blocks during suboptimal (out-of-the-zone) periods. These data support a framework in which motivated individuals act as "cognitive investors," engaging task-positive resources proactively and consistently while sustaining attention. When unmotivated, however, the same individuals act as "cognitive misers," engaging maximal task-positive resources only during periods of struggle. Published by Oxford University Press 2016.

  18. Improved configuration control for redundant robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.; Colbaugh, R.

    1990-01-01

    This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
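
    One common way to realize the trade-off described above (bounded joint velocities near singularities at the cost of small task errors) is a damped least-squares inverse; the sketch below is a generic illustration of that idea, not the paper's exact configuration control law, and the Jacobian values are made up (Python):

      import numpy as np

      # Damped least-squares: dq = J^T (J J^T + lambda^2 I)^{-1} dx
      def damped_ls_velocity(jacobian, task_velocity, damping=0.05):
          j = np.asarray(jacobian, float)
          reg = damping ** 2 * np.eye(j.shape[0])
          return j.T @ np.linalg.solve(j @ j.T + reg, task_velocity)

      J = np.array([[1.0, 0.99],        # near-singular Jacobian: columns almost parallel
                    [0.5, 0.51]])
      dx = np.array([0.1, 0.05])
      print(damped_ls_velocity(J, dx, damping=0.0))    # undamped: large joint rates
      print(damped_ls_velocity(J, dx, damping=0.05))   # damped: joint rates suppressed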

  19. RLV Turbine Performance Optimization

    NASA Technical Reports Server (NTRS)

    Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    A task was developed at NASA/Marshall Space Flight Center (MSFC) to improve turbine aerodynamic performance through the application of advanced design and analysis tools. There are four major objectives of this task: 1) to develop, enhance, and integrate advanced turbine aerodynamic design and analysis tools; 2) to develop the methodology for application of the analytical techniques; 3) to demonstrate the benefits of the advanced turbine design procedure through its application to a relevant turbine design point; and 4) to verify the optimized design and analysis with testing. Final results of the preliminary design and the results of the two-dimensional (2D) detailed design of the first-stage vane of a supersonic turbine suitable for a reusable launch vehicle (R-LV) are presented. Analytical techniques for obtaining the results are also discussed.

  20. Feasibility of the adaptive and automatic presentation of tasks (ADAPT) system for rehabilitation of upper extremity function post-stroke.

    PubMed

    Choi, Younggeun; Gordon, James; Park, Hyeshin; Schweighofer, Nicolas

    2011-08-03

    Current guidelines for rehabilitation of arm and hand function after stroke recommend that motor training focus on realistic tasks that require reaching and manipulation and engage the patient intensively, actively, and adaptively. Here, we investigated the feasibility of a novel robotic task-practice system, ADAPT, designed in accordance with such guidelines. At each trial, ADAPT selects a functional task according to a training schedule and with difficulty based on previous performance. Once the task is selected, the robot picks up and presents the corresponding tool, simulates the dynamics of the tasks, and the patient interacts with the tool to perform the task. Five participants with chronic stroke with mild to moderate impairments (> 9 months post-stroke; Fugl-Meyer arm score 49.2 ± 5.6) practiced four functional tasks (selected out of six in a pre-test) with ADAPT for about one and a half hours and 144 trials in a pseudo-random schedule of 3-trial blocks per task. No adverse events occurred and ADAPT successfully presented the six functional tasks without human intervention for a total of 900 trials. Qualitative analysis of trajectories showed that ADAPT simulated the desired task dynamics adequately, and participants reported good, although not excellent, task fidelity. During training, the adaptive difficulty algorithm progressively increased task difficulty leading towards an optimal challenge point based on performance; difficulty was then continuously adjusted to keep performance around the challenge point. Furthermore, the time to complete all trained tasks decreased significantly from pretest to one-hour post-test. Finally, post-training questionnaires demonstrated positive patient acceptance of ADAPT. ADAPT successfully provided adaptive progressive training for multiple functional tasks based on participants' performance. Our encouraging results establish the feasibility of ADAPT; its efficacy will next be tested in a clinical trial.
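
    A minimal sketch of the kind of performance-driven difficulty adjustment described above: raise difficulty when the success rate sits above a target band and lower it when below. The target, band, step size, and bounds are illustrative assumptions, not ADAPT's actual algorithm (Python):

      def update_difficulty(difficulty, success_rate, target=0.7, band=0.1, step=0.05):
          if success_rate > target + band:
              difficulty += step          # too easy: make the task harder
          elif success_rate < target - band:
              difficulty -= step          # too hard: make the task easier
          return min(max(difficulty, 0.0), 1.0)

      difficulty = 0.2
      for block_success in (0.9, 0.85, 0.8, 0.65, 0.7, 0.55):
          difficulty = update_difficulty(difficulty, block_success)
          print(f"success {block_success:.2f} -> difficulty {difficulty:.2f}")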

  1. How Attention Affects Spatial Resolution

    PubMed Central

    Carrasco, Marisa; Barbot, Antoine

    2015-01-01

    We summarize and discuss a series of psychophysical studies on the effects of spatial covert attention on spatial resolution, our ability to discriminate fine patterns. Heightened resolution is beneficial in most, but not all, visual tasks. We show how endogenous attention (voluntary, goal driven) and exogenous attention (involuntary, stimulus driven) affect performance on a variety of tasks mediated by spatial resolution, such as visual search, crowding, acuity, and texture segmentation. Exogenous attention is an automatic mechanism that increases resolution regardless of whether it helps or hinders performance. In contrast, endogenous attention flexibly adjusts resolution to optimize performance according to task demands. We illustrate how psychophysical studies can reveal the underlying mechanisms of these effects and allow us to draw linking hypotheses with known neurophysiological effects of attention. PMID:25948640

  2. The director task: A test of Theory-of-Mind use or selective attention?

    PubMed

    Rubio-Fernández, Paula

    2017-08-01

    Over two decades, the director task has increasingly been employed as a test of the use of Theory of Mind in communication, first in psycholinguistics and more recently in social cognition research. A new version of this task was designed to test two independent hypotheses. First, optimal performance in the director task, as established by the standard metrics of interference, is possible by using selective attention alone, and not necessarily Theory of Mind. Second, pragmatic measures of Theory-of-Mind use can reveal that people actively represent the director's mental states, contrary to recent claims that they only use domain-general cognitive processes to perform this task. The results of this study support both hypotheses and provide a new interactive paradigm to reliably test Theory-of-Mind use in referential communication.

  3. Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2017-07-08

    Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information and information about the just relevant control-state active in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without receiving background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by a larger smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation impacted neither event-file processing nor referential coding, but generally slowed down response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and more generally when dealing with a more complex task in isolation.

  4. Lightweight design of automobile frame based on magnesium alloy

    NASA Astrophysics Data System (ADS)

    Lyu, R.; Jiang, X.; Minoru, O.; Ju, D. Y.

    2018-06-01

    The structural performance and lightweighting of a car base frame design is a challenging task because of all the performance targets that must be satisfied. In this paper, a material replacement strategy among three kinds of materials (iron, aluminum, and magnesium alloy), along with section design optimization, is proposed to develop a lightweight car frame structure that satisfies tensile and safety requirements while reducing weight. Two kinds of cross-sections are considered as the design variables. Using ANSYS static structural analysis, the design optimization problem is solved; by comparing the results of each step, the structure of the base frame is optimized for light weight.

  5. Selecting Tasks for Evaluating Human Performance as a Function of Gravity

    NASA Technical Reports Server (NTRS)

    Norcross, J. R.; Gernhardt, M. L.

    2010-01-01

    A challenge in understanding human performance as a function of gravity is determining which tasks to research. Initial studies began with treadmill walking, which was easy to quantify and control. However, with the development of pressurized rovers, it is less important to optimize human performance for ambulation as rovers will likely perform gross translation for them. Future crews are likely to spend much of their extravehicular activity (EVA) performing geology, construction and maintenance type tasks, for which it is difficult to measure steady-state-workloads. To evaluate human performance in reduced gravity, we have collected metabolic, biomechanical and subjective data for different tasks at varied gravity levels. Methods: Ten subjects completed 5 different tasks including weight transfer, shoveling, treadmill walking, treadmill running and treadmill incline walking. All tasks were performed shirt-sleeved at 1-g, 3/8-g and 1/6-g. Off-loaded conditions were achieved via the Active Response Gravity Offload System. Treadmill tasks were performed for 3 minutes with reported oxygen consumption (VO2) averaged over the last 2 minutes. Shoveling was performed for 3 minutes with metabolic cost reported as ml O2 consumed per kg material shoveled. Weight transfer reports metabolic cost as liters O2 consumed to complete the task. Statistical analysis was performed via repeated measures ANOVA. Results: Statistically significant metabolic differences were noted between all 3 gravity levels for treadmill running and incline walking. For the other 3 tasks, there were significant differences between 1-g and each reduced gravity, but not between 1/6-g and 3/8-g. For weight transfer, significant differences were seen between gravities in both trial-average VO2 and time-to-completion with noted differences in strategy for task completion. Conclusion: To determine if gravity has a metabolic effect on human performance, this research may indicate that tasks should be selected that require the subject to work vertically against the force of gravity.

  6. Virtual Environments for Soldier Training via Editable Demonstrations (VESTED)

    DTIC Science & Technology

    2011-04-01

    demonstrations as visual depictions of task performance, though sound and especially verbal communications involved with the task can also be essential...or any component cue alone (e.g., Janelle, Champenoy, Coombes, & Mousseau, 2003). Neurophysiology. Recent neurophysiological research has...provides insight about how VESTED functions, what features to modify should it yield less than optimal results, and how to encode, communicate and

  7. Altered behavioral and neural responsiveness to counterfactual gains in the elderly.

    PubMed

    Tobia, Michael J; Guo, Rong; Gläscher, Jan; Schwarze, Ulrike; Brassen, Stefanie; Büchel, Christian; Obermayer, Klaus; Sommer, Tobias

    2016-06-01

    Counterfactual information processing refers to the consideration of events that did not occur in comparison to those actually experienced, in order to determine optimal actions, and can be formulated as computational learning signals, referred to as fictive prediction errors. Decision making and the neural circuitry for counterfactual processing are altered in healthy elderly adults. This experiment investigated age differences in neural systems for decision making with knowledge of counterfactual outcomes. Two groups of healthy adult participants, young (N = 30; ages 19-30 years) and elderly (N = 19; ages 65-80 years), were scanned with fMRI during 240 trials of a strategic sequential investment task in which a particular strategy of differentially weighting counterfactual gains and losses during valuation is associated with more optimal performance. Elderly participants earned significantly less than young adults, differently weighted counterfactual consequences and exploited task knowledge, and exhibited altered activity in a fronto-striatal circuit while making choices, compared to young adults. The degree to which task knowledge was exploited was positively correlated with modulation of neural activity by expected value in the vmPFC for young adults, but not in the elderly. These findings demonstrate that elderly participants' poor task performance may be related to different counterfactual processing.
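
    A generic sketch of a fictive prediction error of the kind referred to above, applied to a sequential investment setting: the counterfactual signal is the gain the best foregone choice would have produced relative to the obtained outcome, and it adjusts the next investment alongside the ordinary reward prediction error. The update rule, learning rates, and numbers are illustrative assumptions, not the paper's fitted model (Python):

      def next_investment(bet, obtained, expected, best_possible, alpha=0.1, beta=0.1):
          reward_pe = obtained - expected           # experienced outcome signal
          fictive_pe = best_possible - obtained     # counterfactual (fictive) gain signal
          new_bet = bet + alpha * reward_pe + beta * fictive_pe
          return min(max(new_bet, 0.0), 1.0)        # keep the bet as a fraction

      # Bet 30% and earned 2% while a full bet would have earned 10%: the bet rises.
      print(next_investment(bet=0.3, obtained=0.02, expected=0.03, best_possible=0.10))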

  8. Mental workload and cognitive task automaticity: an evaluation of subjective and time estimation metrics.

    PubMed

    Liu, Y; Wickens, C D

    1994-11-01

    The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.

  9. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhagwat, Nikhil V.

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near optimal solutions (optimal in some cases). Besides, the execution time is quite short as compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer times for execution. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of some significance to industries wherein the cost associated with creation of a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.
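
    A toy skeleton of the Nested Partitions loop discussed above, on a generic bit-vector problem rather than the study's line-balancing formulation: partition the most promising region, sample candidate solutions in each subregion and in the surrounding space, then either zoom in or backtrack. The objective, sample sizes, and backtracking rule are all simplified illustrative assumptions (Python):

      import random

      N = 12
      def objective(x):                       # stand-in for "number of stations"
          return sum(x) + 3 * (x[0] ^ x[-1])

      def sample_best(prefix, n=20):          # best of n random solutions matching prefix
          best = None
          for _ in range(n):
              x = list(prefix) + [random.randint(0, 1) for _ in range(N - len(prefix))]
              if best is None or objective(x) < objective(best):
                  best = x
          return best

      prefix, iters = [], 0
      while len(prefix) < N and iters < 200:
          iters += 1
          sub = {b: sample_best(prefix + [b]) for b in (0, 1)}
          surround = sample_best([]) if prefix else None
          best_bit = min(sub, key=lambda b: objective(sub[b]))
          if surround is not None and objective(surround) < objective(sub[best_bit]):
              prefix = []                     # backtrack to the whole space
          else:
              prefix = prefix + [best_bit]    # zoom into the most promising subregion
      print("approximate minimizer:", sample_best(prefix))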

  10. Reward Rate Optimization in Two-Alternative Decision Making: Empirical Tests of Theoretical Predictions

    ERIC Educational Resources Information Center

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-01-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…
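
    A minimal simulation sketch of the speed-accuracy trade-off discussed above: run a drift-diffusion model at several threshold heights and compute reward rate as the proportion correct divided by (mean decision time + a fixed delay). All parameters (drift, noise, delay, time step) are illustrative assumptions (Python):

      import numpy as np

      def simulate_ddm(threshold, drift=0.1, noise=1.0, dt=0.005, n_trials=500, seed=0):
          rng = np.random.default_rng(seed)
          rts, correct = [], []
          for _ in range(n_trials):
              x, t = 0.0, 0.0
              while abs(x) < threshold:       # accumulate evidence to a +/- bound
                  x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                  t += dt
              rts.append(t)
              correct.append(x > 0)
          return np.mean(rts), np.mean(correct)

      for a in (0.5, 1.0, 1.5, 2.0):
          mean_rt, acc = simulate_ddm(a)
          reward_rate = acc / (mean_rt + 2.0)   # 2 s of non-decision and inter-trial time
          print(f"threshold {a:.1f}: acc {acc:.2f}, RT {mean_rt:.2f} s, RR {reward_rate:.3f}")

    Raising the threshold increases accuracy but lengthens decision times, so reward rate peaks at an intermediate threshold, which is the trade-off the model's optimal decision procedure exploits.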

  11. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    AFRL-AFOSR-JP-TR-2016-0073 Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been

  12. The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space

    PubMed Central

    Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri

    2015-01-01

    When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336
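
    A small sketch of placing a species relative to the three archetypes described above, using barycentric coordinates in (log10 mass, log10 longevity) space; the vertex positions and the example species are made-up placeholders, not the paper's fitted archetypes (Python):

      import numpy as np

      archetypes = np.array([[0.7, 1.5],    # "bat-like": light and long-lived (placeholder)
                             [0.5, 0.2],    # "shrew-like": light and short-lived
                             [7.5, 2.0]])   # "whale-like": heavy and long-lived

      def barycentric(point, tri):
          a, b, c = tri
          m = np.column_stack((b - a, c - a))
          v, w = np.linalg.solve(m, point - a)
          return np.array([1 - v - w, v, w])   # weights on the three archetypes

      species = np.array([3.0, 1.0])           # roughly 1 kg and 10 years (illustrative)
      print(barycentric(species, archetypes))  # mixture of strategies for this species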

  13. RELATIONSHIP OF COGNITIVE BEHAVIORAL THERAPY EFFECTS AND HOMEWORK IN AN INDICATED PREVENTION OF DEPRESSION INTERVENTION FOR NON-PROFESSIONAL CAREGIVERS (.).

    PubMed

    Otero, Patricia; Vázquez, Fernando L; Hermida, Elisabet; Díaz, Olga; Torres, Ángela

    2015-06-01

    Activities designed to be performed outside of the intervention are considered an essential aspect of the effectiveness of cognitive-behavioral therapy. However, these have received little attention in interventions aimed at individuals with subclinical depressive symptoms who do not yet meet diagnostic criteria for depression (indicated prevention). In this study, the completion of tasks given as homework and its relationship with post-treatment depressive symptoms were examined in the context of an indicated prevention of depression intervention. Eighty-nine female non-professional caregivers recruited from an official registry completed an intervention involving 11 homework tasks. Tasks performed were recorded and depressive symptoms were assessed with the Center for Epidemiologic Studies Depression Scale (CES-D). Among caregivers, 80.9% completed 9-11 tasks. The number of tasks performed was associated with post-treatment depressive symptoms, with 9 being optimal for clinically significant improvement. These findings highlight the relationship between homework and post-treatment depressive symptoms.

  14. Closed-loop, pilot/vehicle analysis of the approach and landing task

    NASA Technical Reports Server (NTRS)

    Anderson, M. R.; Schmidt, D. K.

    1986-01-01

    In the case of approach and landing, it is universally accepted that the pilot uses more than one vehicle response, or output, to close his control loops. Therefore, to model this task, a multi-loop analysis technique is required. The analysis problem has been in obtaining reasonable analytic estimates of the describing functions representing the pilot's loop compensation. Once these pilot describing functions are obtained, appropriate performance and workload metrics must then be developed for the landing task. The optimal control approach provides a powerful technique for obtaining the necessary describing functions, once the appropriate task objective is defined in terms of a quadratic objective function. An approach is presented through the use of a simple, reasonable objective function and model-based metrics to evaluate loop performance and pilot workload. The results of an analysis of the LAHOS (Landing and Approach of Higher Order Systems) study performed by R.E. Smith is also presented.

  15. Obsessive-compulsive disorder: a disorder of pessimal (non-functional) motor behavior.

    PubMed

    Zor, R; Keren, H; Hermesh, H; Szechtman, H; Mort, J; Eilam, D

    2009-10-01

    To determine whether in addition to repetitiveness, the motor rituals of patients with obsessive-compulsive disorder (OCD) involve reduced functionality due to numerous and measurable acts that are irrelevant and unnecessary for task completion. Comparing motor rituals of OCD patients with behavior of non-patient control individuals who were instructed to perform the same motor task. Obsessive-compulsive disorder behavior comprises abundant acts that were not performed by the controls. These acts seem unnecessary or even irrelevant for the task that the patients were performing, and therefore are termed 'non-functional'. Non-functional acts comprise some 60% of OCD motor behavior. Moreover, OCD behavior consists of short chains of functional acts bounded by long chains of non-functional acts. The abundance of irrelevant or unnecessary acts in OCD motor rituals represents reduced functionality in terms of task completion, typifying OCD rituals as pessimal behavior (antonym of optimal behavior).

  16. A multi-country perspective on nurses' tasks below their skill level: reports from domestically trained nurses and foreign trained nurses from developing countries.

    PubMed

    Bruyneel, Luk; Li, Baoyue; Aiken, Linda; Lesaffre, Emmanuel; Van den Heede, Koen; Sermeus, Walter

    2013-02-01

    Several studies have concluded that the use of nurses' time and energy is often not optimized. Given widespread migration of nurses from developing to developed countries, it is important for human resource planning to know whether nursing education in developing countries is associated with more exaggerated patterns of inefficiency. First, to describe nurses' reports on tasks below their skill level. Second, to examine the association between nurses' migratory status (domestically trained nurse or foreign trained nurse from a developing country) and reports on these tasks. The Registered Nurse Forecasting Study used a cross-sectional quantitative research design to gather data from 33,731 nurses (62% response rate) in 486 hospitals in Belgium, England, Finland, Germany, Greece, Ireland, the Netherlands, Norway, Poland, Spain, Sweden and Switzerland. For this analysis, nurse-reported information on migratory status and tasks below their skill level performed during their last shift was used. Random effects models estimated the effect of nurses' migratory status on reports of these tasks. 832 nurses were trained in a developing country (2.5% of total sample). Across countries, a high proportion of both domestically trained and foreign trained nurses from developing countries reported having performed tasks below their skill level during their last shift. After adjusting for nurses' type of last shift worked, years of experience, and level of education, there remained a pronounced overall effect of being a foreign trained nurse from a developing country and an increase in reports of tasks below skill level performed during the last shift. The findings suggest that there remains much room for improvement to optimize the use of nurses' time and energy. Special attention should be given to raising the professional level of practice of foreign trained nurses from developing countries. Further research is needed to understand the influence of professional practice standards, skill levels of foreign trained nurses from developing countries and values attached to these tasks resulting from previous work experiences in their home countries. This will allow us to better understand the conditions under which foreign trained nurses from developing countries can optimally contribute to professional nursing practice in developed country contexts. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Behavior and neural basis of near-optimal visual search

    PubMed Central

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
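
    A generic Gaussian sketch of the reliability-weighted, nonlinear combination rule referred to above: each item contributes a local log-likelihood ratio scaled by its own noise level, and the rule marginalizes over which item might be the target. The stimulus values, noise levels, and prior are illustrative assumptions, not the paper's exact model (Python):

      import numpy as np

      def target_present_posterior(x, sigma, s_target=1.0, s_distractor=0.0, prior=0.5):
          x, sigma = np.asarray(x, float), np.asarray(sigma, float)
          # local log-likelihood ratio of "this item is the target" vs "distractor"
          d = ((x - s_distractor) ** 2 - (x - s_target) ** 2) / (2 * sigma ** 2)
          llr = np.log(np.mean(np.exp(d)))      # marginalize over target location
          odds = np.exp(llr) * prior / (1 - prior)
          return odds / (1 + odds)

      x = [0.1, -0.2, 0.9, 0.05]        # noisy measurements of four items
      sigma = [0.3, 0.3, 0.15, 0.6]     # high-reliability items have smaller sigma
      print(target_present_posterior(x, sigma))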

  18. The Effect of Visual Information on the Manual Approach and Landing

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1982-01-01

    The effect of visual information in combination with basic display information on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.

  19. Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands

    PubMed Central

    Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.

    2013-01-01

    The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
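
    A small sketch of the race-model (multisensory facilitation) test mentioned above: compare the empirical CDF of audio-visual reaction times with the summed unisensory CDFs and look for time points where the former exceeds the latter. The RT samples are synthetic placeholders (Python):

      import numpy as np

      def ecdf(samples, t):
          samples = np.sort(samples)
          return np.searchsorted(samples, t, side="right") / samples.size

      rng = np.random.default_rng(1)
      rt_a = rng.normal(0.35, 0.05, 200)     # auditory-only RTs (s), synthetic
      rt_v = rng.normal(0.40, 0.05, 200)     # visual-only RTs (s), synthetic
      rt_av = rng.normal(0.30, 0.04, 200)    # audio-visual RTs (s), synthetic

      t_grid = np.linspace(0.15, 0.6, 50)
      bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
      violation = ecdf(rt_av, t_grid) - bound
      print("max race-model violation:", violation.max())   # > 0 suggests integration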

  20. Ideal AFROC and FROC observers.

    PubMed

    Khurd, Parmeshwar; Liu, Bin; Gindi, Gene

    2010-02-01

    Detection of multiple lesions in images is a medically important task and free-response receiver operating characteristic (FROC) analyses and its variants, such as alternative FROC (AFROC) analyses, are commonly used to quantify performance in such tasks. However, ideal observers that optimize FROC or AFROC performance metrics have not yet been formulated in the general case. If available, such ideal observers may turn out to be valuable for imaging system optimization and in the design of computer aided diagnosis techniques for lesion detection in medical images. In this paper, we derive ideal AFROC and FROC observers. They are ideal in that they maximize, amongst all decision strategies, the area, or any partial area, under the associated AFROC or FROC curve. Calculation of observer performance for these ideal observers is computationally quite complex. We can reduce this complexity by considering forms of these observers that use false positive reports derived from signal-absent images only. We also consider a Bayes risk analysis for the multiple-signal detection task with an appropriate definition of costs. A general decision strategy that minimizes Bayes risk is derived. With particular cost constraints, this general decision strategy reduces to the decision strategy associated with the ideal AFROC or FROC observer.

  1. When and Why Threats Go Undetected: Impacts of Event Rate and Shift Length on Threat Detection Accuracy During Airport Baggage Screening.

    PubMed

    Meuter, Renata F I; Lacherez, Philippe F

    2016-03-01

    We aimed to assess the impact of task demands and individual characteristics on threat detection in baggage screeners. Airport security staff work under time constraints to ensure optimal threat detection. Understanding the impact of individual characteristics and task demands on performance is vital to ensure accurate threat detection. We examined threat detection in baggage screeners as a function of event rate (i.e., number of bags per minute) and time on task across 4 months. We measured performance in terms of the accuracy of detection of Fictitious Threat Items (FTIs) randomly superimposed on X-ray images of real passenger bags. Analyses of the percentage of correct FTI identifications (hits) show that longer shifts with high baggage throughput result in worse threat detection. Importantly, these significant performance decrements emerge within the first 10 min of these busy screening shifts only. Longer shift lengths, especially when combined with high baggage throughput, increase the likelihood that threats go undetected. Shorter shift rotations, although perhaps difficult to implement during busy screening periods, would ensure more consistently high vigilance in baggage screeners and, therefore, optimal threat detection and passenger safety. © 2015, Human Factors and Ergonomics Society.

  2. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  3. Optimality of the basic colour categories for classification

    PubMed Central

    Griffin, Lewis D

    2005-01-01

    Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219

  4. A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks

    PubMed Central

    Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan

    2015-01-01

    Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
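
    The diffusion model at the heart of this decomposition can be made concrete with a forward simulation. The sketch below generates response times and choices from a simple Wiener diffusion process by Euler integration; the hierarchical Bayesian estimation described in the record is not shown, and the parameter values are illustrative only.

```python
# Sketch: simulate RTs and choices from a basic drift-diffusion process.
import numpy as np

def simulate_ddm(drift, boundary, start, ndt, n_trials=500, dt=0.001, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = start, 0.0
        while 0.0 < x < boundary:                      # accumulate evidence until a bound is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)                            # add non-decision time
        choices.append(1 if x >= boundary else 0)      # 1 = upper-boundary response
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm(drift=1.5, boundary=1.0, start=0.5, ndt=0.3)
print(f"mean RT = {rts.mean():.3f} s, upper-boundary proportion = {choices.mean():.2f}")
```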

  5. Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions

    PubMed Central

    2017-01-01

    Multi-robot task allocation is one of the main problems to address when designing a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. Many factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To the best of our knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear program. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and to a new method proposed in this paper. The results show that the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a huge reduction in execution time. PMID:28118384
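
    On small instances, deadline-constrained coalition allocation of this kind can be solved by exhaustive enumeration, which is useful as a reference for heuristic or auction methods. The sketch below uses a toy sublinear-speedup term as a stand-in for physical interference; the robots, tasks, and cost model are assumptions, not the paper's formulation.

```python
# Sketch: brute-force coalition formation with deadlines and a toy interference model.
from itertools import product

robots = ["r1", "r2", "r3"]
tasks = {"t1": {"work": 6.0, "deadline": 4.0},
         "t2": {"work": 3.0, "deadline": 5.0}}

def completion_time(coalition_size, work):
    if coalition_size == 0:
        return float("inf")
    effective_rate = coalition_size ** 0.8    # interference: diminishing returns per robot
    return work / effective_rate

best = None
for assignment in product(tasks, repeat=len(robots)):         # robot i -> assignment[i]
    sizes = {t: assignment.count(t) for t in tasks}
    times = {t: completion_time(sizes[t], tasks[t]["work"]) for t in tasks}
    if all(times[t] <= tasks[t]["deadline"] for t in tasks):  # every deadline met
        makespan = max(times.values())
        if best is None or makespan < best[0]:
            best = (makespan, assignment)

print(best)
```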

  7. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
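
    The nested procedure that the paper's evolutionary approach is meant to replace can be sketched on a toy single-objective problem: every upper-level candidate is evaluated only after its lower-level problem has been solved to optimality. The objective functions below are assumptions chosen for illustration.

```python
# Sketch: nested (brute-force) solution of a toy bilevel problem.
import numpy as np
from scipy.optimize import minimize_scalar

def lower_level(x):
    # Follower: given the leader's x, choose y minimizing (y - x)^2 + x*y.
    res = minimize_scalar(lambda y: (y - x) ** 2 + x * y, bounds=(-5, 5), method="bounded")
    return res.x

def upper_objective(x):
    y = lower_level(x)                  # feasibility requires an optimal follower response
    return (x - 1) ** 2 + (y - 2) ** 2

xs = np.linspace(-3, 3, 121)            # crude grid search at the upper level
best_x = min(xs, key=upper_objective)
print(best_x, lower_level(best_x), upper_objective(best_x))
```

    Even in this sketch the cost of the nested approach is visible: every upper-level evaluation triggers a full lower-level solve, which is exactly the computational burden the hybrid evolutionary method aims to avoid.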

  8. Caffeine dosing strategies to optimize alertness during sleep loss.

    PubMed

    Vital-Lopez, Francisco G; Ramakrishnan, Sridhar; Doty, Tracy J; Balkin, Thomas J; Reifman, Jaques

    2018-05-28

    Sleep loss, which affects about one-third of the US population, can severely impair physical and neurobehavioural performance. Although caffeine, the most widely used stimulant in the world, can mitigate these effects, currently there are no tools to guide the timing and amount of caffeine consumption to optimize its benefits. In this work, we provide an optimization algorithm, suited for mobile computing platforms, to determine when and how much caffeine to consume, so as to safely maximize neurobehavioural performance at the desired time of the day, under any sleep-loss condition. The algorithm is based on our previously validated Unified Model of Performance, which predicts the effect of caffeine consumption on a psychomotor vigilance task. We assessed the algorithm by comparing the caffeine-dosing strategies (timing and amount) it identified with the dosing strategies used in four experimental studies, involving total and partial sleep loss. Through computer simulations, we showed that the algorithm yielded caffeine-dosing strategies that enhanced performance of the predicted psychomotor vigilance task by up to 64% while using the same total amount of caffeine as in the original studies. In addition, the algorithm identified strategies that resulted in equivalent performance to that in the experimental studies while reducing caffeine consumption by up to 65%. Our work provides the first quantitative caffeine optimization tool for designing effective strategies to maximize neurobehavioural performance and to avoid excessive caffeine consumption during any arbitrary sleep-loss condition. © 2018 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.
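
    The dosing-strategy search can be illustrated with a much-simplified stand-in for the performance model. The sketch below exhaustively searches a small grid of dose times and amounts to maximize predicted alertness at a target hour under a total-intake cap; the alertness and caffeine-effect equations are invented placeholders, not the Unified Model of Performance.

```python
# Sketch: exhaustive search over caffeine schedules with a toy alertness model.
from itertools import product
import math

DOSING_TIMES = [0, 4, 8]        # hours awake at which a dose may be taken
DOSES = [0, 100, 200]           # mg options at each dosing time
MAX_TOTAL_MG = 400
TARGET_HOUR = 12

def alertness(schedule, t):
    base = 100 - 4.5 * t                                   # assumed decline with time awake
    boost = 0.0
    for dose_time, mg in schedule:
        if mg > 0 and t >= dose_time:
            dt = t - dose_time
            # assumed rise-then-decay caffeine effect (~5 h effective half-life)
            boost += 0.15 * mg * (1 - math.exp(-dt / 0.75)) * math.exp(-dt / 7.2)
    return base + boost

best = None
for doses in product(DOSES, repeat=len(DOSING_TIMES)):
    if sum(doses) > MAX_TOTAL_MG:
        continue
    schedule = list(zip(DOSING_TIMES, doses))
    score = alertness(schedule, TARGET_HOUR)
    if best is None or score > best[0]:
        best = (score, schedule)

print(best)
```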

  9. The LHCb Grid Simulation: Proof of Concept

    NASA Astrophysics Data System (ADS)

    Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.

    2017-10-01

    The Worldwide LHC Computing Grid provides researchers in different geographical locations with access to data and the computational resources to analyze them. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage, and connection bandwidths. Job scheduling and data distribution strategy are key elements of grid performance. Optimizing algorithms for those tasks requires testing them on the real grid, which is hard to achieve. A grid simulator would simplify this task and could therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.

  10. Acute psychosocial stress and children's memory.

    PubMed

    de Veld, Danielle M J; Riksen-Walraven, J Marianne; de Weerth, Carolina

    2014-07-01

    We investigated whether children's performance on working memory (WM) and delayed retrieval (DR) tasks decreased after stress exposure, and how physiological stress responses related to performance under stress. A total of 158 children (83 girls; mean age = 10.61 years, SD = 0.52) performed two WM tasks (WM forward and WM backward) and a DR memory task, first during a control condition, and 1 week later during a stress challenge. Salivary alpha-amylase (sAA) and cortisol were assessed during the challenge. Only WM backward performance declined over conditions. Correlations between physiological stress responses and performance within the stress challenge were present only for WM forward and DR. For WM forward, higher cortisol responses were related to better performance. For DR, there was an inverted U-shape relation between cortisol responses and performance, as well as a cortisol × sAA interaction, with concurrent high or low responses related to optimal performance. This emphasizes the importance of including curvilinear and interaction effects when relating physiology to memory.

  11. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described that accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  12. Training Lessons Learned from Peak Performance Episodes

    DTIC Science & Technology

    1986-06-01

    Final Report, October 1984-December 1985. Author: James L. Fobes. Performing organization: U.S. Army Research Institute Field Unit. ... peak performance indicates that three cognitive components enable these episodes: psychological readiness (activating optimal arousal and emotion ...

  13. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  14. Are Individual Differences in Performance on Perceptual and Cognitive Optimization Problems Determined by General Intelligence?

    ERIC Educational Resources Information Center

    Burns, Nicholas R.; Lee, Michael D.; Vickers, Douglas

    2006-01-01

    Studies of human problem solving have traditionally used deterministic tasks that require the execution of a systematic series of steps to reach a rational and optimal solution. Most real-world problems, however, are characterized by uncertainty, the need to consider an enormous number of variables and possible courses of action at each stage in…

  15. Procedural learning: A developmental study of motor sequence learning and probabilistic classification learning in school-aged children.

    PubMed

    Mayor-Dubois, Claire; Zesiger, Pascal; Van der Linden, Martial; Roulet-Perez, Eliane

    2016-01-01

    In this study, we investigated motor and cognitive procedural learning in typically developing children aged 8-12 years with a serial reaction time (SRT) task and a probabilistic classification learning (PCL) task. The aims were to replicate and extend the results of previous SRT studies, to investigate PCL in school-aged children, to explore the contribution of declarative knowledge to SRT and PCL performance, to explore the strategies used by children in the PCL task via a mathematical model, and to see whether performances obtained in motor and cognitive tasks correlated. The results showed similar learning effects in the three age groups in the SRT and in the first half of the PCL tasks. Participants did not develop explicit knowledge in the SRT task whereas declarative knowledge of the cue-outcome associations correlated with the performances in the second half of the PCL task, suggesting a participation of explicit knowledge after some time of exposure in PCL. An increasing proportion of the optimal strategy use with increasing age was observed in the PCL task. Finally, no correlation appeared between cognitive and motor performance. In conclusion, we extended the hypothesis of age invariance from motor to cognitive procedural learning, which had not been done previously. The ability to adopt more efficient learning strategies with age may rely on the maturation of the fronto-striatal loops. The lack of correlation between performance in the SRT task and the first part of the PCL task suggests dissociable developmental trajectories within the procedural memory system.

  16. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  17. Using Patient Feedback to Optimize the Design of a Certolizumab Pegol Electromechanical Self-Injection Device: Insights from Human Factors Studies.

    PubMed

    Domańska, Barbara; Stumpp, Oliver; Poon, Steven; Oray, Serkan; Mountian, Irina; Pichon, Clovis

    2018-01-01

    We incorporated patient feedback from human factors studies (HFS) in the patient-centric design and validation of ava®, an electromechanical device (e-Device) for self-injecting the anti-tumor necrosis factor certolizumab pegol (CZP). Healthcare professionals, caregivers, healthy volunteers, and patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, or Crohn's disease participated in 11 formative HFS to optimize the e-Device design through intended user feedback; nine studies involved simulated injections. Formative participant questionnaire feedback was collected following e-Device prototype handling. Validation HFS (one EU study and one US study) assessed the safe and effective setup and use of the e-Device using 22 predefined critical tasks. Task outcomes were categorized as "failures" if participants did not succeed within three attempts. Two hundred eighty-three participants entered formative (163) and validation (120) HFS; 260 participants performed one or more simulated e-Device self-injections. Design changes following formative HFS included alterations to buttons and the graphical user interface screen. All validation HFS participants completed critical tasks necessary for CZP dose delivery, with minimal critical task failures (12 of 572 critical tasks, 2.1%, in the EU study, and 2 of 5310 critical tasks, less than 0.1%, in the US study). CZP e-Device development was guided by intended user feedback through HFS, ensuring the final design addressed patients' needs. In both validation studies, participants successfully performed all critical tasks, demonstrating safe and effective e-Device self-injections. UCB Pharma. Plain language summary available on the journal website.

  18. Exploring Cognitive Flexibility With a Noninvasive BCI Using Simultaneous Steady-State Visual Evoked Potentials and Sensorimotor Rhythms.

    PubMed

    Edelman, Bradley J; Meng, Jianjun; Gulachek, Nicholas; Cline, Christopher C; He, Bin

    2018-05-01

    EEG-based brain-computer interface (BCI) technology creates non-biological pathways for conveying a user's mental intent solely through noninvasively measured neural signals. While optimizing the performance of a single task has long been the focus of BCI research, in order to translate this technology into everyday life, realistic situations, in which multiple tasks are performed simultaneously, must be investigated. In this paper, we explore the concept of cognitive flexibility, or multitasking, within the BCI framework by utilizing a 2-D cursor control task, using sensorimotor rhythms (SMRs), and a four-target visual attention task, using steady-state visual evoked potentials (SSVEPs), both individually and simultaneously. We found no significant difference between the accuracy of the tasks when executing them alone (SMR: 57.9% ± 15.4%; SSVEP: 59.0% ± 14.2%) and simultaneously (SMR: 54.9% ± 17.2%; SSVEP: 57.5% ± 15.4%). These modest decreases in performance were supported by similar, non-significant changes in the electrophysiology of the SSVEP and SMR signals. In this sense, we report that multiple BCI tasks can be performed simultaneously without a significant deterioration in performance; this finding will help drive these systems toward realistic daily use in which a user's cognition will need to be involved in multiple tasks at once.

  19. Display/control requirements for VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Curry, R. E.; Kleinman, D. L.; Hollister, W. M.; Young, L. R.

    1975-01-01

    Quantitative metrics were determined for system control performance, workload for control, monitoring performance, and workload for monitoring. Pilot tasks for the navigation and guidance of automated commercial V/STOL aircraft in all weather conditions were allocated, and an optimal control model of the human operator was used to determine display elements and design.

  20. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
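
    A genetic algorithm over discrete architecture choices, of the kind described here, can be sketched in a few lines. The configuration parameters and cost model below are invented placeholders; the point is only the select-crossover-mutate loop.

```python
# Sketch: genetic algorithm over a small discrete architecture space.
import random

CHOICES = {"processor": [0, 1, 2], "memory_mb": [64, 128, 256], "bus": [0, 1]}
KEYS = list(CHOICES)

def cost(cfg):
    # Toy objective: trade an estimated performance score against hardware cost.
    perf = 2.0 * cfg["processor"] + cfg["memory_mb"] / 64 + 1.5 * cfg["bus"]
    price = 3.0 * cfg["processor"] + cfg["memory_mb"] / 32 + 2.0 * cfg["bus"]
    return price - 1.5 * perf               # lower is better

def random_cfg():
    return {k: random.choice(v) for k, v in CHOICES.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in KEYS}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(CHOICES[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

random.seed(0)
pop = [random_cfg() for _ in range(20)]
for _ in range(50):
    pop.sort(key=cost)
    parents = pop[:10]                      # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]
print(min(pop, key=cost))
```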

  1. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the most frequently-selected features by the SFFS-based algorithm in 10-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. Conclusions: In conclusion, this comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes that may have potential to be useful as a “second reader” in future clinical practice. PMID:24664267
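
    The wrapper-style selection loop described here (forward additions with conditional backward removals, scored by a cross-validated SVM) can be sketched on synthetic data. The feature pool, fold count, and stopping rule below are assumptions; the study's 181 mammographic features and tuning protocol are not reproduced.

```python
# Sketch: simplified sequential forward floating selection around an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def score(features):
    if not features:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, sorted(features)], y, cv=5).mean()

selected, best = set(), 0.0
for _ in range(8):                                        # grow up to 8 features
    # Forward step: add the single best remaining feature.
    gains = {f: score(selected | {f}) for f in range(X.shape[1]) if f not in selected}
    f_add, best = max(gains.items(), key=lambda kv: kv[1])
    selected.add(f_add)
    # Floating (backward) step: drop a feature if doing so improves the score.
    while len(selected) > 2:
        drops = {f: score(selected - {f}) for f in selected if f != f_add}
        f_drop, s_drop = max(drops.items(), key=lambda kv: kv[1])
        if s_drop > best:
            selected.remove(f_drop)
            best = s_drop
        else:
            break

print(sorted(selected), round(best, 3))
```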

  2. Evaluating stereoscopic displays: both efficiency measures and perceived workload sensitive to manipulations in binocular disparity

    NASA Astrophysics Data System (ADS)

    van Beurden, Maurice H. P. H.; Ijsselsteijn, Wijnand A.; de Kort, Yvonne A. W.

    2011-03-01

    Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure potential performance advantages. However, completion time and accuracy may not fully reflect all the benefits of stereoscopic displays. In this paper, we argue that perceived workload is an additional valuable indicator of the extent to which users can benefit from using stereoscopic displays. We performed an experiment in which participants were asked to perform a visual path-tracing task within a convoluted 3D wireframe structure, varying the complexity of the visualised structure and the level of disparity of the visualisation. The results showed that optimal performance (completion time, accuracy, and workload) depends on both task difficulty and disparity level. Stereoscopic disparity yielded faster and more accurate task performance, and we observed a trend for performance on difficult tasks to benefit more from higher levels of disparity than performance on easy tasks. Perceived workload (as measured using the NASA-TLX) showed a similar response pattern, providing evidence that perceived workload is sensitive to variations in disparity as well as task difficulty. This suggests that perceived workload could be a useful concept, in addition to standard performance indicators, in characterising and measuring human performance advantages when using stereoscopic displays.

  3. HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.

    2012-01-01

    HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.
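
    The "graph search in state-space with forward progression" described here can be illustrated with a small scheduler. The activities, durations, and precedence constraint below are assumptions, and HURON's resource model is far richer; the sketch only shows best-first expansion of partial schedules ordered by makespan.

```python
# Sketch: forward state-space search for a minimal-makespan multi-agent schedule.
import heapq
from itertools import count

AGENTS = ["rover", "astronaut"]
DURATION = {("sample", "rover"): 3.0, ("sample", "astronaut"): 1.5,
            ("survey", "rover"): 2.0, ("survey", "astronaut"): 4.0,
            ("drill",  "rover"): 5.0, ("drill",  "astronaut"): 2.5}
ACTIVITIES = {"sample", "survey", "drill"}
PRECEDES = {("survey", "drill")}                  # survey must finish before drill starts

tie = count()                                     # tie-breaker for the priority queue

def successors(node):
    makespan, free_at, done, finish, plan = node
    for act in ACTIVITIES - set(done):
        preds = [pre for pre, post in PRECEDES if post == act]
        if any(p not in done for p in preds):
            continue                              # precedence not yet satisfied
        for i, agent in enumerate(AGENTS):
            start = max([free_at[i]] + [finish[p] for p in preds])
            end = start + DURATION[(act, agent)]
            new_free = list(free_at); new_free[i] = end
            yield (max(makespan, end), tuple(new_free), done | {act},
                   {**finish, act: end}, plan + ((act, agent, start),))

root = (0.0, (0.0,) * len(AGENTS), frozenset(), {}, ())
frontier = [(0.0, next(tie), root)]
while frontier:
    _, _, node = heapq.heappop(frontier)          # best-first on current makespan
    if set(node[2]) == ACTIVITIES:
        print("makespan:", node[0], node[4])
        break
    for child in successors(node):
        heapq.heappush(frontier, (child[0], next(tie), child))
```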

  4. Structural design of high-performance capacitive accelerometers using parametric optimization with uncertainties

    NASA Astrophysics Data System (ADS)

    Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli

    2017-03-01

    Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.

  5. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. Intelligent soft computing scheme support vector machines (SVMs) coupled with the firefly algorithm (FFA) are implemented. The performance of the proposed estimators is confirmed with the simulation results. The result of the proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and generic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
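
    The firefly algorithm itself is easy to sketch; below it minimizes a toy objective rather than the spot-radius criterion, and the SVM coupling from the paper is omitted. Parameter values are conventional defaults, assumed for illustration.

```python
# Sketch: firefly algorithm (FFA) minimizing a toy function.
import numpy as np

def firefly_minimize(obj, dim=2, n_fireflies=15, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-2, 2, size=(n_fireflies, dim))
    light = np.array([obj(p) for p in pos])           # lower objective = brighter
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:               # move firefly i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    light[i] = obj(pos[i])
        alpha *= 0.97                                 # gradually damp the random walk
    best = int(np.argmin(light))
    return pos[best], light[best]

print(firefly_minimize(lambda x: np.sum((x - 1.0) ** 2)))
```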

  6. Schizotypy and Performance on an Insight Problem-Solving Task: The Contribution of Persecutory Ideation.

    PubMed

    Cosgrave, Jan; Haines, Ross; Golodetz, Stuart; Claridge, Gordon; Wulff, Katharina; van Heugten-van der Kloet, Dalena

    2018-01-01

    Insight problem solving is thought to underpin creative thought as it incorporates both divergent (generating multiple ideas and solutions) and convergent (arriving at the optimal solution) thinking approaches. The current literature on schizotypy and creativity is mixed and requires clarification. An alternate approach was employed by designing an exploratory web-based study using only correlates of schizotypal traits (paranoia, dissociation, cognitive failures, fantasy proneness, and unusual sleep experiences) and examining which (if any) predicted optimal performance on an insight problem-solving task. One hundred and twenty-one participants were recruited online from the general population and completed the number reduction task. The discovery of the hidden rule (HR) was used as a measure of insight. Multivariate logistic regression analyses highlighted persecutory ideation to best predict the discovery of the HR (OR = 1.05; 95% CI 1.01-1.10, p = 0.017), with a one-point increase in persecutory ideas corresponding to the participant being 5% more likely to discover the HR. This result suggests that persecutory ideation, above other schizotypy correlates, may be involved in insight problem solving.

  7. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of complex aircraft configurations.

  8. It looks easy! Heuristics for combinatorial optimization problems.

    PubMed

    Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair

    2006-04-01

    Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
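
    For context, even very cheap constructive heuristics come close to optimal on small instances. The sketch below compares a nearest-neighbour tour against the brute-force optimum on a random 8-city instance; it is not the boundary-following model proposed in the paper, just a baseline illustrating the comparison.

```python
# Sketch: nearest-neighbour TSP heuristic vs. brute-force optimum (8 cities).
import itertools, math, random

random.seed(2)
cities = [(random.random(), random.random()) for _ in range(8)]
dist = lambda a, b: math.dist(cities[a], cities[b])
tour_len = lambda tour: sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def nearest_neighbour(start=0):
    unvisited, tour = set(range(len(cities))) - {start}, [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))   # greedily visit closest city
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

best = min((list(p) for p in itertools.permutations(range(1, len(cities)))),
           key=lambda p: tour_len([0] + p))
print("nearest neighbour:", round(tour_len(nearest_neighbour()), 3),
      " optimal:", round(tour_len([0] + best), 3))
```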

  9. Task control and cognitive abilities of self and spouse in collaboration in middle-aged and older couples.

    PubMed

    Berg, Cynthia A; Smith, Timothy W; Ko, Kelly J; Beveridge, Ryan M; Story, Nathan; Henry, Nancy J M; Florsheim, Paul; Pearce, Gale; Uchino, Bert N; Skinner, Michelle A; Glazer, Kelly

    2007-09-01

    Collaborative problem solving may be used by older couples to optimize cognitive functioning, with some suggestion that older couples exhibit greater collaborative expertise. The study explored age differences in 2 aspects of collaborative expertise: spouses' knowledge of their own and their spouse's cognitive abilities and the ability to fit task control to these cognitive abilities. The participants were 300 middle-aged and older couples who completed a hypothetical errand task. The interactions were coded for control asserted by husbands and wives. Fluid intelligence was assessed, and spouses rated their own and their spouse's cognitive abilities. The results revealed no age differences in couple expertise, either in the ability to predict their own and their spouse's cognitive abilities or in the ability to fit task control to abilities. However, gender differences were found. Women fit task control to their own and their spouse's cognitive abilities; men only fit task control to their spouse's cognitive abilities. For women only, the fit between control and abilities was associated with better performance. The results indicate no age differences in couple expertise but point to gender as a factor in optimal collaboration. (PsycINFO Database Record (c) 2007 APA, all rights reserved).

  10. Acquisition and production of skilled behavior in dynamic decision-making tasks: Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1991-01-01

    Advances in computer and control technology offer the opportunity for task-offload aiding in human-machine systems. A task-offload aid (e.g., an autopilot, an intelligent assistant) can be selectively engaged by the human operator to dynamically delegate tasks to an automated system. Successful design and performance prediction in such systems requires knowledge of the factors influencing the strategy the operator develops and uses for managing interaction with the task-offload aid. A model is presented that shows how such strategies can be predicted as a function of three task context properties (frequency and duration of secondary tasks and costs of delaying secondary tasks) and three aid design properties (aid engagement and disengagement times, and aid performance relative to human performance). Sensitivity analysis indicates how each of these contextual and design factors affects the optimal aid usage strategy and attainable system performance. The model is applied to understanding human-automation interaction in laboratory experiments on human supervisory control behavior. The laboratory task allowed subjects freedom to determine strategies for using an autopilot in a dynamic, multi-task environment. Modeling results suggested that many subjects may indeed have been acting appropriately by not using the autopilot in the way its designers intended. Although the autopilot function was technically sound, this aid was not designed with due regard to the overall task context in which it was placed. These results demonstrate the need for additional research on how people may strategically manage their own resources, as well as those provided by automation, in an effort to keep workload and performance at acceptable levels.

  11. Decision Making and Ratio Processing in Patients with Mild Cognitive Impairment.

    PubMed

    Pertl, Marie-Theres; Benke, Thomas; Zamarian, Laura; Delazer, Margarete

    2015-01-01

    Making advantageous decisions is important in everyday life. This study aimed at assessing how patients with mild cognitive impairment (MCI) make decisions under risk. Additionally, it investigated the relationship between decision making, ratio processing, basic numerical abilities, and executive functions. Patients with MCI (n = 22) were compared with healthy controls (n = 29) on a complex task of decision making under risk (Game of Dice Task-Double, GDT-D), on two tasks evaluating basic decision making under risk, on a task of ratio processing, and on several neuropsychological background tests. Patients performed significantly lower than controls on the GDT-D and on ratio processing, whereas groups performed comparably on basic decision tasks. Specifically, in the GDT-D, patients obtained lower net scores and lower mean expected values, which indicate a less advantageous performance relative to that of controls. Performance on the GDT-D correlated significantly with performance in basic decision tasks, ratio processing, and executive-function measures when the analysis was performed on the whole sample. Patients with MCI make sub-optimal decisions in complex risk situations, whereas they perform at the same level as healthy adults in simple decision situations. Ratio processing and executive functions have an impact on the decision-making performance of both patients and healthy older adults. In order to facilitate advantageous decisions in complex everyday situations, information should be presented in an easily comprehensible form and cognitive training programs for patients with MCI should focus--among other abilities--on executive functions and ratio processing.

  12. Compact Heat Exchanger Design and Testing for Advanced Reactors and Advanced Power Cycles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiaodong; Zhang, Xiaoqin; Christensen, Richard

    The goal of the proposed research is to demonstrate the thermal hydraulic performance of innovative surface geometries in compact heat exchangers used as intermediate heat exchangers (IHXs) and recuperators for the supercritical carbon dioxide (s-CO2) Brayton cycle. Printed-circuit heat exchangers (PCHEs) are the primary compact heat exchangers of interest. The overall objectives are: To develop optimized PCHE designs for different working fluid combinations including helium to s-CO2, liquid salt to s-CO2, sodium to s-CO2, and liquid salt to helium; To experimentally and numerically investigate thermal performance, thermal stress, and failure mechanisms of PCHEs under various transients; and To study diffusion bonding techniques for elevated-temperature alloys and examine post-test material integrity of the PCHEs. The project objectives were accomplished by defining and executing five different tasks corresponding to these specific objectives. The first task involved a thorough literature review and a selection of IHX candidates with different surface geometries, as well as a summary of prototypic operational conditions. The second task involved optimization of the PCHE design with numerical analyses of thermal-hydraulic performance and mechanical integrity. The subsequent task dealt with the development of testing facilities and the engineering design of PCHEs to be tested in s-CO2 fluid conditions. The next task involved experimental investigation and validation of the thermal-hydraulic performance and thermal stress distribution of prototype PCHEs manufactured with particular surface geometries. The last task involved an investigation of the diffusion bonding process and post-test destructive testing to validate the mechanical design methods adopted in the design process. The experimental work utilized two test facilities at The Ohio State University (OSU), the existing High-Temperature Helium Test Facility (HTHF) and the newly developed s-CO2 test loop (STL) facility, as well as the s-CO2 test facility at the University of Wisconsin – Madison (UW).

  13. Optimal Quantum Spatial Search on Random Temporal Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser

    2017-12-01

    To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous-time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs G(n, p), where p is the probability that any two given nodes are connected: after every time interval τ, a new graph G(n, p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on it. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.

  14. Microvascular Anastomosis: Proposition of a Learning Curve.

    PubMed

    Mokhtari, Pooneh; Tayebi Meybodi, Ali; Benet, Arnau; Lawton, Michael T

    2018-04-14

    Learning to perform a microvascular anastomosis is one of the most difficult tasks in cerebrovascular surgery. Previous studies offer little regarding the optimal protocols to maximize learning efficiency, a failure that stems mainly from lack of knowledge about the learning curve for this task. The aim was to delineate this learning curve and provide information about its various features, including acquisition, improvement, consistency, stability, and recall. Five neurosurgeons with an average of 5 yr of surgical experience and without any experience in bypass surgery performed microscopic anastomosis on progressively smaller-caliber silastic tubes (Biomet, Palm Beach Gardens, Florida) during 24 consecutive sessions. After 1-, 2-, and 8-wk retention intervals, they performed a recall test on 0.7-mm silastic tubes. The anastomoses were rated based on anastomosis patency and the presence of any leaks. The improvement rate was faster during initial sessions than during the final practice sessions. A performance decline was observed in the first session of working on a smaller-caliber tube; however, this rapidly improved during the following sessions of practice. Temporary plateaus were seen in certain segments of the curve. The retention interval between the acquisition and recall phases did not cause a regression to the prepractice performance level. Learning the fine motor task of microvascular anastomosis follows basic rules of learning such as the "power law of practice." Our results also support the improvement of performance during consecutive sessions of practice. The objective evidence provided may help in developing optimized learning protocols for microvascular anastomosis.

  15. Urine sampling and collection system optimization and testing

    NASA Technical Reports Server (NTRS)

    Fogal, G. L.; Geating, J. A.; Koesterer, M. G.

    1975-01-01

    A Urine Sampling and Collection System (USCS) engineering model was developed to provide for the automatic collection, volume sensing and sampling of urine from each micturition. The purpose of the engineering model was to demonstrate verification of the system concept. The objective of the optimization and testing program was to update the engineering model, to provide additional performance features and to conduct system testing to determine operational problems. Optimization tasks were defined as modifications to minimize system fluid residual and addition of thermoelectric cooling.

  16. Design of a video system providing optimal visual information for controlling payload and experiment operations with television

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A program was conducted which included the design of a set of simplified simulation tasks, design of apparatus and breadboard TV equipment for task performance, and the implementation of a number of simulation tests. Performance measurements were made under controlled conditions and the results analyzed to permit evaluation of the relative merits (effectivity) of various TV systems. Burden factors were subsequently generated for each TV system to permit tradeoff evaluation of system characteristics against performance. For the general remote operation mission, the 2-view system is recommended. This system is characterized and the corresponding equipment specifications were generated.

  17. Conditioning Military Women for Optimal Performance: Effects of Contraceptive Use

    DTIC Science & Technology

    1998-10-01

    ... illnesses (i.e., the common cold, sore throat, influenza, mononucleosis) that afflict soldiers and athletes during physical training (52). The ... goal: bring the number to 15 in each treatment group. Task 26: Submit final report when required. Task 27: Prepare abstracts for submission to ... In the absence of any DEPO subjects, trends (and treatment effects) are impossible to discern. INF-g: see Figure 8 (all experimental groups combined) ...

  18. Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X

    NASA Astrophysics Data System (ADS)

    Suryono, M. A. E.; Rosyidi, C. N.

    2018-03-01

    PT. X uses automated machines that work for sixteen hours per day, so the machines must be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machines in the filling lithos process. In this research, RCM is used as an analysis tool to identify the critical component and find optimal inspection frequencies that maximize machine reliability. From the analysis, we found that the critical machine in the filling lithos process is the laser machine in Line 2. We then determined the causes of machine failure. The Lastube component has the highest Risk Priority Number (RPN) among components such as the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences, and the others have hidden-failure or safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The results of the data analysis show that inspection of the laser machine must be performed once a month as preventive maintenance to reduce downtime.

  19. Sight and sound persistently out of synch: stable individual differences in audiovisual synchronisation revealed by implicit measures of lip-voice integration

    PubMed Central

    Ipser, Alberta; Agolli, Vlera; Bajraktari, Anisa; Al-Alawi, Fatimah; Djaafara, Nurfitriani; Freeman, Elliot D.

    2017-01-01

    Are sight and sound out of synch? Signs that they are have been dismissed for over two centuries as an artefact of attentional and response bias, to which traditional subjective methods are prone. To avoid such biases, we measured performance on objective tasks that depend implicitly on achieving good lip-synch. We measured the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identification of degraded speech, while manipulating audiovisual asynchrony. Peak performance was found at an average auditory lag of ~100 ms, but this varied widely between individuals. Participants’ individual optimal asynchronies showed trait-like stability when the same task was re-tested one week later, but measures based on different tasks did not correlate. This discounts the possible influence of common biasing factors, suggesting instead that our different tasks probe different brain networks, each subject to their own intrinsic auditory and visual processing latencies. Our findings call for renewed interest in the biological causes and cognitive consequences of individual sensory asynchronies, leading potentially to fresh insights into the neural representation of sensory timing. A concrete implication is that speech comprehension might be enhanced, by first measuring each individual’s optimal asynchrony and then applying a compensatory auditory delay. PMID:28429784

  20. Feasibility of the adaptive and automatic presentation of tasks (ADAPT) system for rehabilitation of upper extremity function post-stroke

    PubMed Central

    2011-01-01

    Background Current guidelines for rehabilitation of arm and hand function after stroke recommend that motor training focus on realistic tasks that require reaching and manipulation and engage the patient intensively, actively, and adaptively. Here, we investigated the feasibility of a novel robotic task-practice system, ADAPT, designed in accordance with such guidelines. At each trial, ADAPT selects a functional task according to a training schedule and with difficulty based on previous performance. Once the task is selected, the robot picks up and presents the corresponding tool, simulates the dynamics of the task, and the patient interacts with the tool to perform the task. Methods Five participants with chronic stroke and mild to moderate impairments (> 9 months post-stroke; Fugl-Meyer arm score 49.2 ± 5.6) practiced four functional tasks (selected out of six in a pre-test) with ADAPT for about one and a half hours (144 trials) in a pseudo-random schedule of 3-trial blocks per task. Results No adverse events occurred and ADAPT successfully presented the six functional tasks without human intervention for a total of 900 trials. Qualitative analysis of trajectories showed that ADAPT simulated the desired task dynamics adequately, and participants reported good, although not excellent, task fidelity. During training, the adaptive difficulty algorithm progressively increased task difficulty towards an optimal challenge point based on performance; difficulty was then continuously adjusted to keep performance around the challenge point. Furthermore, the time to complete all trained tasks decreased significantly from pretest to one-hour post-test. Finally, post-training questionnaires demonstrated positive patient acceptance of ADAPT. Conclusions ADAPT successfully provided adaptive progressive training for multiple functional tasks based on participants' performance. Our encouraging results establish the feasibility of ADAPT; its efficacy will next be tested in a clinical trial. PMID:21813010
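
    The adaptive difficulty idea (keeping performance near a challenge point) can be sketched with a simple success-rate controller. The step sizes, window, and simulated participant below are assumptions, not ADAPT's actual algorithm.

```python
# Sketch: difficulty adaptation toward a target success rate ("challenge point").
import random

random.seed(3)
difficulty, target, step = 0.3, 0.7, 0.05
history = []

def simulated_success(difficulty, skill=0.6):
    # Success probability falls as difficulty exceeds the simulated participant's skill.
    p = max(0.05, min(0.95, 0.5 + (skill - difficulty)))
    return random.random() < p

for trial in range(60):
    window = history[-5:]
    if len(window) == 5:
        rate = sum(window) / 5
        difficulty += step if rate > target else -step   # nudge toward the challenge point
        difficulty = max(0.0, min(1.0, difficulty))
    history.append(simulated_success(difficulty))

print(f"final difficulty: {difficulty:.2f}, last-10 success rate: {sum(history[-10:]) / 10:.2f}")
```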

  1. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    NASA Astrophysics Data System (ADS)

    Gui, Z.; Yang, C.; Xia, J.; Huang, Q.; Yu, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in recent decades. To better understand and predict the distribution, intensity, and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust), and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and daily life. However, dust storm simulation is a data- and computing-intensive process: a simulation of a single dust storm event may take hours or even days to run, which seriously limits the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data, and inappropriate allocations may introduce imbalanced task loads and unnecessary communication among computing nodes. The task allocation method is therefore a key factor affecting the feasibility of parallelization: the allocation algorithm needs to balance the computing cost and communication cost of each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two allocation algorithms and compares them with an evenly distributed allocation method. Specifically, 1) to obtain optimal solutions, a quadratic programming based modeling method is proposed; this algorithm performs well with a small number of computing tasks, but its efficiency decreases significantly as the number of subdomains and computing nodes increases. 2) To compensate for this performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced; instead of seeking exact optimal solutions, this method obtains relatively good feasible solutions within acceptable time, but it may introduce imbalanced communication across nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
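
    The K-Means idea above can be illustrated with a small sketch (not the authors' code): the subdomain centroid coordinates, node count, and balance check below are hypothetical stand-ins chosen only to show how geographically adjacent subdomains can be grouped onto the same computing node.

```python
# Sketch: group subdomains onto computing nodes by geographic proximity using
# K-Means, so that adjacent subdomains (which exchange boundary data) tend to
# share a node. Coordinates and node count are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
subdomain_centroids = rng.uniform(0, 100, size=(400, 2))  # (x, y) of 400 subdomains
n_nodes = 16                                              # available computing nodes

km = KMeans(n_clusters=n_nodes, n_init=10, random_state=0)
node_of_subdomain = km.fit_predict(subdomain_centroids)   # node index per subdomain

# Crude balance check: task counts per node (perfect balance would be 400/16 = 25)
counts = np.bincount(node_of_subdomain, minlength=n_nodes)
print("tasks per node:", counts)
```

    As the abstract notes, plain clustering can leave some nodes with more subdomains than others, so a production allocator would need an additional load-balancing step.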

  2. Effects of orientation on Rey complex figure performance.

    PubMed

    Ferraro, F Richard; Grossman, Jennifer; Bren, Amy; Hoverson, Allysa

    2002-10-01

    An experiment was performed that examined the impact of stimulus orientation on performance on the Rey complex figure. A total of 48 undergraduates (24 men, 24 women) were randomly assigned to one of four Rey figure orientation groups (0 degrees, 90 degrees, 180 degrees, and 270 degrees). Participants followed standard procedures for the Rey figure, initially copying it in whatever orientation they were assigned. Next, all participants performed a 15-20 min lexical decision experiment, used as a filler task. Finally, and unbeknownst to them, participants were asked to recall as much of the figure as they could. As expected, results revealed a main effect of Task (F = 83.92, p < .01), in which copy performance was superior to recall performance. However, the main effect of orientation was not significant, nor did orientation interact with task (Fs < .68, ps > .57). The results are important from an applied standpoint, especially if testing conditions are less than optimal and a fixed stimulus position is not possible (e.g., testing at the bedside).

  3. Performance in a GO/NOGO perceptual task reflects a balance between impulsive and instrumental components of behaviour

    PubMed Central

    Berditchevskaia, A.; Cazé, R. D.; Schultz, S. R.

    2016-01-01

    In recent years, simple GO/NOGO behavioural tasks have become popular due to the relative ease with which they can be combined with technologies such as in vivo multiphoton imaging. To date, it has been assumed that behavioural performance can be captured by the average performance across a session; however, this neglects the effect of motivation on behaviour within individual sessions. We investigated the effect of motivation on mice performing a GO/NOGO visual discrimination task. Performance within a session tended to follow a stereotypical trajectory on a Receiver Operating Characteristic (ROC) chart, beginning with an over-motivated state with many false positives and transitioning through a more or less optimal regime to end with a low hit rate after satiation. Our observations are reproduced by a new model, the Motivated Actor-Critic, introduced here. Our results suggest that standard measures of discriminability, obtained by averaging across a session, may significantly underestimate behavioural performance. PMID:27272438
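
    As a rough illustration of tracking performance within a session rather than averaging over it, the sketch below computes hit and false-alarm rates in consecutive windows of simulated GO/NOGO trials. The trial data and window size are hypothetical, and this is not the authors' Motivated Actor-Critic model.

```python
# Sketch: within-session ROC trajectory from GO/NOGO trial outcomes.
# `is_go` marks GO trials, `responded` marks responses; both are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 300
is_go = rng.random(n_trials) < 0.5
# Fake a session drifting from over-motivated (respond to almost everything)
# to satiated (rarely respond), with some sensitivity to the GO stimulus.
p_respond = np.clip(np.linspace(0.95, 0.15, n_trials) + 0.4 * is_go, 0, 1)
responded = rng.random(n_trials) < p_respond

window = 50
for start in range(0, n_trials - window + 1, window):
    go = is_go[start:start + window]
    resp = responded[start:start + window]
    hit_rate = resp[go].mean()     # responses on GO trials
    fa_rate = resp[~go].mean()     # responses on NOGO trials
    print(f"trials {start:3d}-{start + window - 1:3d}: hit={hit_rate:.2f}  FA={fa_rate:.2f}")
```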

  4. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  5. A Structural Health Monitoring Software Tool for Optimization, Diagnostics and Prognostics

    DTIC Science & Technology

    2011-01-01

    Seth S. Kessler, Eric B. Flynn, Christopher T. ... The recoverable excerpt notes the aim of making the technology more accessible and commercially practical, and opens its introduction with currently successful laboratory non-destructive testing and monitoring; the remainder of the record consists of standard report documentation fields with no further abstract text.

  6. Kinematically Optimal Robust Control of Redundant Manipulators

    NASA Astrophysics Data System (ADS)

    Galicki, M.

    2017-12-01

    This work deals with the problem of robust optimal task-space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met along the robot trajectory. Numerical simulations carried out for a SCARA-type redundant manipulator consisting of three revolute kinematic pairs and operating in a two-dimensional task space illustrate the performance of the proposed controllers, as well as comparisons with other well-known control schemes.

  7. Benevolent sexism alters executive brain responses.

    PubMed

    Dardenne, Benoit; Dumont, Muriel; Sarlet, Marie; Phillips, Christophe; Balteau, Evelyne; Degueldre, Christian; Luxen, André; Salmon, Eric; Maquet, Pierre; Collette, Fabienne

    2013-07-10

    Benevolence is widespread in our societies. It is defined as considering a subordinate group nicely but condescendingly, that is, with charity. Deleterious consequences for the target have been reported in the literature. In this experiment, we used functional MRI (fMRI) to identify whether being the target of (sexist) benevolence induces changes in brain activity associated with a working memory task. Participants were confronted with benevolent, hostile, or neutral comments before and while performing a reading span test in an fMRI environment. fMRI data showed that brain regions previously associated with intrusive thought suppression (bilateral dorsolateral prefrontal and anterior cingulate cortex) reacted specifically to benevolent sexism compared with hostile sexism and neutral conditions during the performance of the task. These findings indicate that, despite being subjectively positive, benevolence modifies task-related brain networks by recruiting supplementary areas likely to impede optimal cognitive performance.

  8. Diverse task scheduling for individualized requirements in cloud manufacturing

    NASA Astrophysics Data System (ADS)

    Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida

    2018-03-01

    Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through networks and CMfg platforms. In a CMfg system, task scheduling, as an important means of finding suitable services for specific manufacturing tasks, plays a key role in enhancing system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First, a mathematical model of task scheduling is built based on analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method for diverse tasks, which enables each service demander to obtain the desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to search for optimal task scheduling solutions. The effectiveness of the proposed scheduling method is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method achieves better performance than commonly used algorithms such as simulated annealing and pattern search.
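
    A toy sketch of the general approach (not the paper's improved genetic algorithm): each chromosome assigns every subtask to one candidate service, and a basic GA with truncation selection, one-point crossover, and point mutation searches for a low-cost assignment. The cost matrix, population size, and operator rates are hypothetical.

```python
# Sketch: a toy genetic algorithm assigning each subtask to one candidate
# cloud service, minimizing a combined time/cost score. Data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_subtasks, n_candidates = 8, 5
# score[i, j]: cost of running subtask i on its j-th candidate service (lower is better)
score = rng.uniform(1.0, 10.0, size=(n_subtasks, n_candidates))

def fitness(chrom):                        # chrom[i] = chosen candidate for subtask i
    return score[np.arange(n_subtasks), chrom].sum()

pop = rng.integers(0, n_candidates, size=(40, n_subtasks))
for gen in range(200):
    f = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(f)[:20]]      # truncation selection (best half)
    children = []
    while len(children) < 20:
        a, b = parents[rng.integers(0, 20, size=2)]
        cut = rng.integers(1, n_subtasks)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        if rng.random() < 0.2:                            # point mutation
            child[rng.integers(0, n_subtasks)] = rng.integers(0, n_candidates)
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(c) for c in pop])]
print("best assignment:", best, "score:", round(fitness(best), 2))
```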

  9. An Ideal Observer Analysis of Visual Working Memory

    ERIC Educational Resources Information Center

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2012-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around…

  10. Optimal Designs for Performance Assessments: The Subject Factor.

    ERIC Educational Resources Information Center

    Parkes, Jay

    Much speculation abounds concerning how expensive performance assessments are or are going to be. Recent projections indicate that, in order to achieve an acceptably high generalizability coefficient, many additional tasks may need to be added, which will enlarge costs. Such projections are, to some degree, correct, and to some degree simplistic.…

  11. Effect of Food Deprivation on a Delayed Nonmatch-to-place T-maze Task

    PubMed Central

    Jang, Eun-Hae; Ahn, Seo-Hee; Lee, Ye-Seul; Lee, Hye-Ryeon

    2013-01-01

    Food deprivation can affect performance on difficult cognitive tasks, such as the delayed nonmatch-to-place T-maze task (DNMT). The importance of food deprivation for maintaining high motivation in the DNMT task has been emphasized, but few studies have investigated the optimal deprivation conditions for maximizing performance in rodents. Establishing appropriate food deprivation conditions is necessary to maintain DNMT task motivation. We applied different food deprivation conditions (1-h food restriction vs. 1.5-g food restriction; single caging vs. group caging) and measured body weight and the number of correct choices that 8-week-old C57BL/6J mice made during the DNMT task. The 1.5-g food restriction group maintained 76.0±0.6% of their initial body weight, whereas the final body weight of the 1-h food restriction group was reduced to 62.2±0.8% of their initial body weight. These results suggest that the 1.5-g food restriction condition is effective for maintaining both body weight and motivation to complete the DNMT task. PMID:23833561

  12. Hybrid reflecting objectives for functional multiphoton microscopy in turbid media

    PubMed Central

    Vučinić, Dejan; Bartol, Thomas M.; Sejnowski, Terrence J.

    2010-01-01

    Most multiphoton imaging of biological specimens is performed using microscope objectives optimized for high image quality under wide-field illumination. We present a class of objectives designed de novo without regard for these traditional constraints, driven exclusively by the needs of fast multiphoton imaging in turbid media: the delivery of femtosecond pulses without dispersion and the efficient collection of fluorescence. We model the performance of one such design optimized for a typical brain-imaging setup and show that it can greatly outperform objectives commonly used for this task. PMID:16880851

  13. Wavefront-Guided Versus Wavefront-Optimized Photorefractive Keratectomy: Visual and Military Task Performance.

    PubMed

    Ryan, Denise S; Sia, Rose K; Stutzman, Richard D; Pasternak, Joseph F; Howard, Robin S; Howell, Christopher L; Maurer, Tana; Torres, Mark F; Bower, Kraig S

    2017-01-01

    To compare visual performance, marksmanship performance, and threshold target identification following wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK). In this prospective, randomized clinical trial, active duty U.S. military Soldiers, age 21 or over, electing to undergo PRK were randomized to undergo WFG (n = 27) or WFO (n = 27) PRK for myopia or myopic astigmatism. Binocular visual performance was assessed preoperatively and 1, 3, and 6 months postoperatively: Super Vision Test high contrast, Super Vision Test contrast sensitivity (CS), and 25% contrast acuity with night vision goggle filter. CS function was generated testing at five spatial frequencies. Marksmanship performance in low light conditions was evaluated in a firing tunnel. Target detection and identification performance was tested for probability of identification of varying target sets and probability of detection of humans in cluttered environments. Visual performance, CS function, marksmanship, and threshold target identification demonstrated no statistically significant differences over time between the two treatments. Exploratory regression analysis of firing range tasks at 6 months showed no significant differences or correlations between procedures. Regression analysis of vehicle and handheld probability of identification showed a significant association with pretreatment performance. Both WFG and WFO PRK results translate to excellent and comparable visual and military performance. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  14. TRU Waste Management Program. Cost/schedule optimization analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.

    This Current Year Work Plan presents in detail a description of the activities to be performed by the Joint Integration Office Rockwell International (JIO/RI) during FY86. It breaks down the activities into two major work areas: Program Management and Program Analysis. Program Management is performed by the JIO/RI by providing technical planning and guidance for the development of advanced TRU waste management capabilities. This includes equipment/facility design, engineering, construction, and operations. These functions are integrated to allow transition from interim storage to final disposition. JIO/RI tasks include program requirements identification, long-range technical planning, budget development, program planning document preparation, task guidance development, task monitoring, task progress information gathering and reporting to DOE, interfacing with other agencies and DOE lead programs, integrating public involvement with program efforts, and preparation of reports for DOE detailing program status. Program Analysis is performed by the JIO/RI to support identification and assessment of alternatives, and development of long-term TRU waste program capabilities. These analyses include short-term analyses in response to DOE information requests, along with preparing an RH Cost/Schedule Optimization report. Systems models will be developed, updated, and upgraded as needed to enhance JIO/RI's capability to evaluate the adequacy of program efforts in various fields. A TRU program data base will be maintained and updated to provide DOE with timely responses to inventory related questions.

  15. Are we under-utilizing the talents of primary care personnel? A job analytic examination

    PubMed Central

    Hysong, Sylvia J; Best, Richard G; Moore, Frank I

    2007-01-01

    Background Primary care staffing decisions are often made unsystematically, potentially leading to increased costs, dissatisfaction, turnover, and reduced quality of care. This article aims to (1) catalogue the domain of primary care tasks, (2) explore the complexity associated with these tasks, and (3) examine how tasks performed by different job titles differ in function and complexity, using Functional Job Analysis to develop a new tool for making evidence-based staffing decisions. Methods Seventy-seven primary care personnel from six US Department of Veterans Affairs (VA) Medical Centers, representing six job titles, participated in two-day focus groups to generate 243 unique task statements describing the content of VA primary care. Certified job analysts rated tasks on ten dimensions representing task complexity, skills, autonomy, and error consequence. Two hundred and twenty-four primary care personnel from the same clinics then completed a survey indicating whether they performed each task. Tasks were catalogued using an adaptation of an existing classification scheme; complexity differences were tested via analysis of variance. Results Objective one: Task statements were categorized into four functions: service delivery (65%), administrative duties (15%), logistic support (9%), and workforce management (11%). Objective two: Consistent with expectations, 80% of tasks received ratings at or below the mid-scale value on all ten scales. Objective three: Service delivery and workforce management tasks received higher ratings on eight of ten scales (multiple functional complexity dimensions, autonomy, human error consequence) than administrative and logistic support tasks. Similarly, tasks performed by more highly trained job titles received higher ratings on six of ten scales than tasks performed by lower trained job titles. Contrary to expectations, the distribution of tasks across functions did not significantly vary by job title. Conclusion Primary care personnel are not being utilized to the extent of their training; most personnel perform many tasks that could reasonably be performed by personnel with less training. Primary care clinics should use evidence-based information to optimize job-person fit, adjusting clinic staff mix and allocation of work across staff to enhance efficiency and effectiveness. PMID:17397534

  16. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    PubMed

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  17. Performance evaluation of a six-axis generalized force-reflecting teleoperator

    NASA Technical Reports Server (NTRS)

    Hannaford, B.; Wood, L.; Guggisberg, B.; Mcaffee, D.; Zak, H.

    1989-01-01

    Work in real-time distributed computation and control has culminated in a prototype force-reflecting telemanipulation system having a dissimilar master (cable-driven, force-reflecting hand controller) and a slave (PUMA 560 robot with custom controller), an extremely high sampling rate (1000 Hz), and a low loop computation delay (5 msec). In a series of experiments with this system and five trained test operators covering over 100 hours of teleoperation, performance was measured in a series of generic and application-driven tasks with and without force feedback, and with control shared between teleoperation and local sensor-referenced control. Measurements defining task performance included 100-Hz recording of six-axis force/torque information from the slave manipulator wrist, task completion time, and visual observation of predefined task errors. The tasks consisted of high-precision peg-in-hole insertion, electrical connectors, velcro attach-de-attach, and a twist-lock multi-pin connector. Each task was repeated three times under several operating conditions: normal bilateral telemanipulation, forward position control without force feedback, and shared control. In shared control, orientation was locally servo-controlled to comply with applied torques, while translation was under operator control. All performance measures improved as capability was added along a spectrum of capabilities ranging from pure position control through force-reflecting teleoperation and shared control. Performance was optimal for the bare-handed operator.

  18. Optimization of Land Use Suitability for Agriculture Using Integrated Geospatial Model and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.

    2012-08-01

    In this study, a geospatial model for land use allocation was developed from the perspective of simulating biological autonomous adaptability to the environment and infrastructural preference. The model was developed based on a multi-agent genetic algorithm and was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of the land. The second task was to determine the fitness function for the genetic algorithm. The third task was to optimize the land use map using economic benefits. The results indicated that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.

  19. The effects of huperzine A and IDRA 21 on visual recognition memory in young macaques

    PubMed Central

    Malkova, Ludise; Kozikowski, Alan P.; Gale, Karen

    2011-01-01

    Nootropic agents or cognitive enhancers are purported to improve mental functions such as cognition, memory, or attention. The aim of our study was to determine the effects of two possible cognitive enhancers, huperzine A and IDRA 21, in normal young adult monkeys performing a visual memory task of varying degrees of difficulty. Huperzine A is a reversible acetylcholinesterase (AChE) inhibitor; its administration results in regionally specific increases in acetylcholine levels in the brain. In human clinical trials, huperzine A resulted in cognitive improvement in patients with mild to moderate forms of Alzheimer's disease (AD), showing its potential as a palliative agent in the treatment of AD. IDRA 21 is a positive allosteric modulator of glutamate AMPA receptors. It increases excitatory synaptic strength by attenuating rapid desensitization of AMPA receptors and may thus have beneficial therapeutic effects to ameliorate memory deficits in patients with cognitive impairments, including AD. The present study evaluated the effects of the two drugs in normal, intact, young adult monkeys to determine whether they can produce cognitive enhancement in a system that is presumably functioning optimally. Six young pigtail macaques (Macaca nemestrina) were trained on a delayed non-matching-to-sample task, a measure of visual recognition memory, up to a criterion of 90% correct responses on each of the four delays (10s, 30s, 60s, and 90s). They were then tested on two versions of the task: Task 1 included the four delays intermixed within a session, and the monkeys performed it with an accuracy of 90%. Task 2 included, in each of 24 trials, a list of six objects presented in succession; two objects from the list were then presented for choice, paired with novel objects, following two of the four delays intermixed within a session. This task, with its higher mnemonic demand, yielded an average performance of 64% correct. Oral administration of huperzine A did not significantly affect the monkeys' performance on either task. However, a significant negative correlation was found between the baseline performance on each delay and the change in performance under huperzine A, suggesting that under conditions in which the subjects were performing poorly (55-69%), the drug resulted in improved performance, whereas no improvement was obtained when the baseline was close to 90%. In fact, when the subjects were performing very well, huperzine A tended to reduce performance accuracy, indicating that in a system that functions optimally, increased availability of acetylcholine does not improve performance or memory, especially when the animals are close to maximum performance. In contrast, oral administration of IDRA 21 significantly improved performance on Task 2, especially at the longest delay. This finding supports the potential use of this drug in the treatment of cognitive and memory disorders. PMID:21185313

  20. Mediators of methylphenidate effects on math performance in children with attention-deficit hyperactivity disorder.

    PubMed

    Froehlich, Tanya E; Antonini, Tanya N; Brinkman, William B; Langberg, Joshua M; Simon, John O; Adams, Ryan; Fredstrom, Bridget; Narad, Megan E; Kingery, Kathleen M; Altaye, Mekibib; Matheson, Heather; Tamm, Leanne; Epstein, Jeffery N

    2014-01-01

    Stimulant medications, such as methylphenidate (MPH), improve the academic performance of children with attention-deficit hyperactivity disorder (ADHD). However, the mechanism by which MPH exerts an effect on academic performance is unclear. We examined MPH effects on math performance and investigated possible mediation of MPH effects by changes in time on-task, inhibitory control, selective attention, and reaction time variability. Children with ADHD aged 7 to 11 years (N = 93) completed a timed math worksheet (with problems tailored to each individual's level of proficiency) and 2 neuropsychological tasks (Go/No-Go and Child Attention Network Test) at baseline, then participated in a 4-week, randomized, controlled, titration trial of MPH. Children were then randomly assigned to their optimal MPH dose or placebo for 1 week (administered double-blind) and repeated the math and neuropsychological tasks (posttest). Baseline and posttest videorecordings of children performing the math task were coded to assess time on-task. Children taking MPH completed 23 more math problems at posttest compared to baseline, whereas the placebo group completed 24 fewer problems on posttest versus baseline, but the effects on math accuracy (percent correct) did not differ. Path analyses revealed that only change in time on-task was a significant mediator of MPH's improvements in math productivity. MPH-derived math productivity improvements may be explained in part by increased time spent on-task, rather than improvements in neurocognitive parameters, such as inhibitory control, selective attention, or reaction time variability.

  1. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    NASA Astrophysics Data System (ADS)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
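
    For illustration, here is a minimal UCB1-style bandit (a simplification, not the authors' UCB-classif algorithm) that allocates trials among candidate motor tasks according to their observed classification accuracy; the per-task accuracies below are hypothetical.

```python
# Sketch: UCB1-style selection of the most promising motor task, where each
# "arm" is a candidate imagery task and the reward is 1 for a correctly
# classified trial, 0 otherwise (simulated with hypothetical accuracies).
import numpy as np

rng = np.random.default_rng(3)
true_accuracy = np.array([0.55, 0.60, 0.75, 0.65])   # unknown to the algorithm
n_tasks = len(true_accuracy)
counts = np.zeros(n_tasks)
sums = np.zeros(n_tasks)

for t in range(1, 301):
    if t <= n_tasks:                                  # play each task once first
        arm = t - 1
    else:
        means = sums / counts
        ucb = means + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_accuracy[arm])  # 1 if trial classified correctly
    counts[arm] += 1
    sums[arm] += reward

print("trials per task:", counts.astype(int))
print("estimated accuracies:", np.round(sums / counts, 2))
```

    The point of the upper confidence bound is that clearly unpromising tasks stop receiving trials early, which is how the training session time is saved.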

  2. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.
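
    A minimal sketch in the spirit of Step 3 above, assuming a hypothetical sampling rate, block timing, and Gaussian kernel width: a boxcar task regressor is convolved with a Gaussian kernel, its first and second derivatives are added to the design matrix, and an ordinary least-squares GLM is fit to a simulated deoxyhemoglobin trace. This is not the published pipeline (no high-pass filtering or autocorrelation correction is shown).

```python
# Sketch: GLM design matrix from a boxcar regressor convolved with a Gaussian
# kernel plus its first and second derivatives. All timing values are hypothetical.
import numpy as np

fs = 10.0                                    # NIRS sampling rate (Hz), hypothetical
t = np.arange(0, 300, 1 / fs)                # 5-minute session
boxcar = ((t % 60) < 30).astype(float)       # 30 s task / 30 s rest blocks

sigma = 4.0                                  # Gaussian kernel SD in seconds
kt = np.arange(-4 * sigma, 4 * sigma + 1 / fs, 1 / fs)
kernel = np.exp(-kt**2 / (2 * sigma**2))
kernel /= kernel.sum()

hrf_reg = np.convolve(boxcar, kernel, mode="same")   # smoothed task regressor
d1 = np.gradient(hrf_reg, 1 / fs)                    # first temporal derivative
d2 = np.gradient(d1, 1 / fs)                         # second temporal derivative
X = np.column_stack([hrf_reg, d1, d2, np.ones_like(t)])  # design matrix + intercept

rng = np.random.default_rng(4)
y = 0.8 * hrf_reg + 0.05 * rng.standard_normal(t.size)   # hypothetical HbR signal
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated betas:", np.round(beta, 3))
```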

  3. A Neuroscience Approach to Optimizing Brain Resources for Human Performance in Extreme Environments

    PubMed Central

    Paulus, Martin P.; Potterat, Eric G.; Taylor, Marcus K.; Van Orden, Karl F.; Bauman, James; Momen, Nausheen; Padilla, Genieleah A.; Swain, Judith L.

    2009-01-01

    Extreme environments requiring optimal cognitive and behavioral performance occur in a wide variety of situations ranging from complex combat operations to elite athletic competitions. Although a large literature characterizes psychological and other aspects of individual differences in performances in extreme environments, virtually nothing is known about the underlying neural basis for these differences. This review summarizes the cognitive, emotional, and behavioral consequences of exposure to extreme environments, discusses predictors of performance, and builds a case for the use of neuroscience approaches to quantify and understand optimal cognitive and behavioral performance. Extreme environments are defined as an external context that exposes individuals to demanding psychological and/or physical conditions, and which may have profound effects on cognitive and behavioral performance. Examples of these types of environments include combat situations, Olympic-level competition, and expeditions in extreme cold, at high altitudes, or in space. Optimal performance is defined as the degree to which individuals achieve a desired outcome when completing goal-oriented tasks. It is hypothesized that individual variability with respect to optimal performance in extreme environments depends on a well “contextualized” internal body state that is associated with an appropriate potential to act. This hypothesis can be translated into an experimental approach that may be useful for quantifying the degree to which individuals are particularly suited to performing optimally in demanding environments. PMID:19447132

  4. Effects of psychological priming, video, and music on anaerobic exercise performance.

    PubMed

    Loizou, G; Karageorghis, C I

    2015-12-01

    Peak performance videos accompanied by music can help athletes to optimize their pre-competition mindset and are often used. Priming techniques can be incorporated into such videos to influence athletes' motivational state. There has been limited empirical work investigating the combined effects of such stimuli on anaerobic performance. The present study examined the psychological and psychophysiological effects of video, music, and priming when used as a pre-performance intervention for an anaerobic endurance task. Psychological measures included the main axes of the circumplex model of affect and liking scores taken pre-task, and the Exercise-induced Feeling Inventory, which was administered post-task. Physiological measures comprised heart rate variability and heart rate recorded pre-task. Fifteen males (age = 26.3 ± 2.8 years) were exposed to four conditions prior to performing the Wingate Anaerobic Test: music-only, video and music, video with music and motivational primes, and a no-video/no-music control. Results indicate that the combined video, music, and primes condition was the most effective in terms of influencing participants' pre-task affect and subsequent anaerobic performance; this was followed by the music-only condition. The findings indicate the utility of such stimuli as a pre-performance technique to enhance athletes' or exercisers' psychological states. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. Effect of display size on utilization of traffic situation display for self-spacing task. [transport aircraft

    NASA Technical Reports Server (NTRS)

    Abbott, T. S.; Moen, G. C.

    1981-01-01

    The weather radar cathode ray tube (CRT) is the prime candidate for presenting cockpit display of traffic information (CDTI) in current, conventionally equipped transport aircraft. Problems may result from this, since the CRT size is not optimized for CDTI applications and the CRT is not in the pilot's primary visual scan area. The impact of display size on the ability of pilots to utilize the traffic information to maintain a specified spacing interval behind a lead aircraft during an approach task was studied. The five display sizes considered are representative of the display hardware configurations of airborne weather radar systems. From a pilot's subjective workload viewpoint, even the smallest display size was usable for performing the self-spacing task. From a performance viewpoint, the mean spacing values, which are indicative of how well the pilots were able to perform the task, exhibit the same trends irrespective of display size; however, the standard deviation of the spacing intervals decreased (performance improves) as the display size increased. Display size, therefore, does have a significant effect on pilot performance.

  6. Balancing on tightropes and slacklines

    PubMed Central

    Paoletti, P.; Mahadevan, L.

    2012-01-01

    Balancing on a tightrope or a slackline is an example of a neuromechanical task where the whole body both drives and responds to the dynamics of the external environment, often on multiple timescales. Motivated by a range of neurophysiological observations, here we formulate a minimal model for this system and use optimal control theory to design a strategy for maintaining an upright position. Our analysis of the open and closed-loop dynamics shows the existence of an optimal rope sag where balancing requires minimal effort, consistent with qualitative observations and suggestive of strategies for optimizing balancing performance while standing and walking. Our consideration of the effects of nonlinearities, potential parameter coupling and delays on the overall performance shows that although these factors change the results quantitatively, the existence of an optimal strategy persists. PMID:22513724
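
    As a loose illustration of applying optimal control to a balance task, the sketch below computes an LQR feedback gain for a linearized inverted-pendulum-style model; the model and cost weights are hypothetical and far simpler than the rope-walker dynamics analysed in the paper.

```python
# Sketch: LQR balance controller for a linearized inverted-pendulum-like model
# (a stand-in for rope-walker balance; parameters are hypothetical).
import numpy as np
from scipy.linalg import solve_continuous_are

g, l = 9.81, 1.0                       # gravity, effective pendulum length
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])           # state: [lean angle, angular velocity]
B = np.array([[0.0],
              [1.0]])                  # control: normalized corrective torque

Q = np.diag([10.0, 1.0])               # penalize lean angle more than velocity
R = np.array([[0.1]])                  # control-effort penalty

P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x
print("LQR gain K:", np.round(K, 2))

# Closed-loop check: eigenvalues of A - B K should have negative real parts
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```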

  7. The effects of auditory stimulation on the arithmetic performance of children with ADHD and nondisabled children.

    PubMed

    Abikoff, H; Courtney, M E; Szeibel, P J; Koplewicz, H S

    1996-05-01

    This study evaluated the impact of extra-task stimulation on the academic task performance of children with attention-deficit/hyperactivity disorder (ADHD). Twenty boys with ADHD and 20 nondisabled boys worked on an arithmetic task during high stimulation (music), low stimulation (speech), and no stimulation (silence). The music "distractors" were individualized for each child, and the arithmetic problems were at each child's ability level. A significant Group x Condition interaction was found for number of correct answers. Specifically, the nondisabled youngsters performed similarly under all three auditory conditions. In contrast, the children with ADHD did significantly better under the music condition than speech or silence conditions. However, a significant Group x Order interaction indicated that arithmetic performance was enhanced only for those children with ADHD who received music as the first condition. The facilitative effects of salient auditory stimulation on the arithmetic performance of the children with ADHD provide some support for the underarousal/optimal stimulation theory of ADHD.

  8. Optimizing the Strength and SCC Resistance of Aluminum Alloys used for Refurbishing Aging Aircraft

    DTIC Science & Technology

    2001-05-07

    Ferrer, Charles P. (US Naval Academy, Annapolis, MD 21402). The remainder of the record consists of standard report documentation fields; no abstract text is recoverable.

  9. Hidden Markov model analysis of force/torque information in telemanipulation

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    A model for the prediction and analysis of sensor information recorded during robotic performance of telemanipulation tasks is presented. The model uses the hidden Markov model to describe the task structure, the operator's or intelligent controller's goal structure, and the sensor signals. A methodology for constructing the model parameters based on engineering knowledge of the task is described. It is concluded that the model and its optimal state estimation algorithm, the Viterbi algorithm, are very successful at segmenting the data record into phases corresponding to subgoals of the task. The model provides a rich modeling structure within a statistical framework, which enables it to represent complex systems and be robust to real-world sensory signals.
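
    A minimal Viterbi decoder sketch, with hypothetical task phases, transition/emission probabilities, and a discretized observation stream (the paper builds its HMM parameters from engineering knowledge of the task; none of the numbers below come from it).

```python
# Sketch: Viterbi decoding of a short sequence of discretized force/torque
# observations into task phases of a small HMM. All parameters are hypothetical.
import numpy as np

states = ["approach", "contact", "insert"]
A = np.array([[0.90, 0.09, 0.01],        # transition probabilities
              [0.05, 0.85, 0.10],
              [0.01, 0.04, 0.95]])
B = np.array([[0.7, 0.2, 0.1],           # emission probs for 3 force levels
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([0.9, 0.08, 0.02])         # initial state distribution
obs = np.array([0, 0, 1, 1, 2, 1, 2, 2, 2])   # discretized force magnitudes

n, T = len(states), len(obs)
logd = np.full((T, n), -np.inf)          # best log-probability ending in state j at t
back = np.zeros((T, n), dtype=int)       # backpointers
logd[0] = np.log(pi) + np.log(B[:, obs[0]])
for t in range(1, T):
    for j in range(n):
        cand = logd[t - 1] + np.log(A[:, j])
        back[t, j] = np.argmax(cand)
        logd[t, j] = cand[back[t, j]] + np.log(B[j, obs[t]])

path = [int(np.argmax(logd[-1]))]        # backtrace the most likely phase sequence
for t in range(T - 1, 0, -1):
    path.append(back[t, path[-1]])
print([states[s] for s in reversed(path)])
```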

  10. Brain Network Changes and Memory Decline in Aging

    PubMed Central

    Beason-Held, Lori L.; Hohman, Timothy J.; Venkatraman, Vijay; An, Yang; Resnick, Susan M.

    2016-01-01

    One theory of age-related cognitive decline proposes that changes within the default mode network (DMN) of the brain impact the ability to successfully perform cognitive operations. To investigate this theory, we examined functional covariance within brain networks using regional cerebral blood flow data, measured by 15O-water PET, from 99 participants (mean baseline age 68.6 ± 7.5) in the Baltimore Longitudinal Study of Aging collected over a 7.4-year period. The sample was divided into tertiles based on longitudinal performance on a verbal recognition memory task administered during scanning, and functional covariance was compared between the upper (improvers) and lower (decliners) tertile groups. The DMN and verbal memory networks (VMN) were then examined during the verbal memory scan condition. For each network, group differences in node-to-network coherence and individual node-to-node covariance relationships were assessed at baseline and in change over time. Compared with improvers, decliners showed differences in node-to-network coherence and in node-to-node relationships in the DMN but not the VMN during verbal memory. These DMN differences reflected greater covariance with better task performance at baseline and both increasing and declining covariance with declining task performance over time for decliners. When examined during the resting state alone, the direction of change in DMN covariance was similar to that seen during task performance, but node-to-node relationships differed from those observed during the task condition. These results suggest that disengagement of DMN components during task performance is not essential for successful cognitive performance, as previously proposed. Instead, a proper balance in network processes may be needed to support optimal task performance. PMID:27319002

  11. Task-based strategy for optimized contrast enhanced breast imaging: analysis of six imaging techniques for mammography and tomosynthesis

    NASA Astrophysics Data System (ADS)

    Ikejimba, Lynda; Kiarashi, Nooshin; Lin, Yuan; Chen, Baiyu; Ghate, Sujata V.; Zerhouni, Moustafa; Samei, Ehsan; Lo, Joseph Y.

    2012-03-01

    Digital breast tomosynthesis (DBT) is a novel x-ray imaging technique that provides 3D structural information of the breast. In contrast to 2D mammography, DBT minimizes tissue overlap, potentially improving cancer detection and reducing the number of unnecessary recalls. The addition of a contrast agent to DBT and mammography for lesion enhancement has the benefit of providing functional information about a lesion, as lesion contrast uptake and washout patterns may help differentiate between benign and malignant tumors. This study used a task-based method to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: contrast enhanced mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual energy subtraction mammography and tomosynthesis. Imaging performance was characterized using a detectability index d', derived from the system task transfer function (TTF), an imaging task, iodine contrast, and the noise power spectrum (NPS). The task modeled a 5 mm lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d' was generated as a function of dose and iodine concentration. In general, higher dose gave higher d', but for the lowest iodine concentration and lowest dose, dual energy subtraction tomosynthesis and temporal subtraction tomosynthesis demonstrated the highest performance.
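
    One common way to compute such a detectability index is the prewhitening-observer form d'^2 = integral of |W(f)|^2 TTF(f)^2 / NPS(f) df; the sketch below evaluates it with hypothetical one-dimensional stand-ins for the task function, TTF, and NPS (the study itself used measured two-dimensional quantities).

```python
# Sketch: detectability index d' from a task function W(f), a task transfer
# function TTF(f), and a noise power spectrum NPS(f), using the prewhitening
# formulation. All curves below are hypothetical 1-D stand-ins.
import numpy as np

f = np.linspace(0.05, 5.0, 200)               # spatial frequency (cycles/mm)
lesion_diam = 5.0                              # mm, matching the modeled lesion size
W = np.abs(np.sinc(f * lesion_diam))           # crude task function for a disc-like lesion
TTF = np.exp(-f / 2.0)                         # hypothetical system transfer
NPS = 1e-6 * (1.0 + 0.5 / f)                   # hypothetical noise power spectrum

dprime = np.sqrt(np.trapz((W**2) * (TTF**2) / NPS, f))
print(f"d' = {dprime:.1f}")
```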

  12. Two-phase strategy of controlling motor coordination determined by task performance optimality.

    PubMed

    Shimansky, Yury P; Rand, Miya K

    2013-02-01

    A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

  13. Unobtrusive monitoring of divided attention in a cognitive health coaching intervention for the elderly.

    PubMed

    McKanna, James A; Pavel, Misha; Jimison, Holly

    2010-11-13

    Assessment of cognitive functionality is an important aspect of care for elders. Unfortunately, few tools exist to measure divided attention, the ability to allocate attention to different aspects of tasks. An accurate determination of divided attention would allow inference of generalized cognitive decline, as well as providing a quantifiable indicator of an important component of driving skill. We propose a new method for determining relative divided attention ability through unobtrusive monitoring of computer use. Specifically, we measure performance on a dual-task cognitive computer exercise as part of a health coaching intervention. This metric indicates whether the user has the ability to pay attention to both tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The monitoring of divided attention in a home environment is a key component of both the early detection of cognitive problems and for assessing the efficacy of coaching interventions.

  14. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    In real water distribution networks (WDNs), thousands of nodes are present, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with a view to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
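
    As a rough analogue only, the sketch below computes the standard (unweighted) modularity of a candidate partition of a toy network with networkx; the paper's WDN-oriented and sampling-oriented indices are tailored variants and are not reproduced here.

```python
# Sketch: modularity of a candidate segmentation of a small toy network.
# The graph is a random stand-in, not a real pipe network.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.random_geometric_graph(60, 0.2, seed=5)        # toy stand-in for a network
communities = greedy_modularity_communities(G)         # candidate "district" partition
print("number of districts:", len(communities))
print("modularity of partition:", round(modularity(G, communities), 3))
```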

  15. The effect of compression and attention allocation on speech intelligibility

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2003-10-01

    Research investigating the effects of amplitude compression on speech intelligibility for individuals with sensorineural hearing loss has demonstrated contradictory results [Souza and Turner (1999)]. Because percent-correct measures may not be the best indicator of compression effectiveness, a speech intelligibility and motor coordination task was developed to provide data that may more thoroughly explain the perception of compressed speech signals. In the present study, a pursuit rotor task [Dlhopolsky (2000)] was employed along with a word identification task to measure the amount of attention required to perceive compressed and non-compressed words in noise. Monosyllabic words were mixed with speech-shaped noise at a fixed signal-to-noise ratio and compressed using a wide dynamic range compression scheme. Participants with normal hearing identified each word with or without a simultaneous pursuit rotor task. Also, participants completed the pursuit rotor task without simultaneous word presentation. It was expected that performance on the additional motor task would reflect the effect of compression better than simple word-accuracy measures. Results were complex. For example, in some conditions an irrelevant task actually improved performance on a simultaneous listening task. This suggests there might be an optimal level of attention required for recognition of monosyllabic words.

  16. An Agent-Based Simulation for Investigating the Impact of Stereotypes on Task-Oriented Group Formation

    NASA Astrophysics Data System (ADS)

    Maghami, Mahsa; Sukthankar, Gita

    In this paper, we introduce an agent-based simulation for investigating the impact of social factors on the formation and evolution of task-oriented groups. Task-oriented groups are created explicitly to perform a task, and all members derive benefits from task completion. However, even in cases when all group members act in a way that is locally optimal for task completion, social forces that have mild effects on choice of associates can have a measurable impact on task completion performance. In this paper, we show how our simulation can be used to model the impact of stereotypes on group formation. In our simulation, stereotypes are based on observable features, learned from prior experience, and only affect an agent's link formation preferences. Even without assuming stereotypes affect the agents' willingness or ability to complete tasks, the long-term modifications that stereotypes have on the agents' social network impair the agents' ability to form groups with sufficient diversity of skills, as compared to agents who form links randomly. An interesting finding is that this effect holds even in cases where stereotype preference and skill existence are completely uncorrelated.

  17. How does temporal preparation speed up response implementation in choice tasks? Evidence for an early cortical activation.

    PubMed

    Tandonnet, Christophe; Davranche, Karen; Meynier, Chloé; Burle, Borís; Vidal, Franck; Hasbroucq, Thierry

    2012-02-01

    We investigated the influence of temporal preparation on information processing. Single-pulse transcranial magnetic stimulation (TMS) of the primary motor cortex was delivered during a between-hand choice task. The time interval between the warning and the imperative stimulus, which varied across blocks of trials, was either optimal (500 ms) or nonoptimal (2500 ms) for participants' performance. Silent period duration was shorter prior to the first evidence of response selection for the optimal condition. The amplitude of the motor evoked potential specific to the responding hand increased earlier for the optimal condition. These results revealed an early release of cortical inhibition and a faster integration of the response selection-related inputs to the corticospinal pathway when temporal preparation is better. Temporal preparation may induce cortical activation prior to response selection that speeds up the implementation of the selected response. Copyright © 2011 Society for Psychophysiological Research.

  18. Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.

    PubMed

    Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C

    2014-05-01

    Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.

  19. Linear models to perform treaty verification tasks for enhanced information security

    DOE PAGES

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; ...

    2016-11-12

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
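
    The Hotelling and channelized Hotelling observers mentioned above are standard linear-observer constructions. The Python sketch below is an illustration only, using synthetic Gaussian data and a placeholder channel matrix rather than the authors' simulated measurements.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical binned detector data under the two hypotheses
        # (treaty accountable vs. not); shapes are (n_measurements, n_bins).
        g1 = rng.normal(1.0, 1.0, size=(500, 64))
        g0 = rng.normal(0.0, 1.0, size=(500, 64))

        def hotelling_weights(a, b):
            """Optimal linear weights w = K^{-1} (mean(a) - mean(b))."""
            dg = a.mean(axis=0) - b.mean(axis=0)
            K = 0.5 * (np.cov(a, rowvar=False) + np.cov(b, rowvar=False))
            return np.linalg.solve(K, dg)

        w = hotelling_weights(g1, g0)
        t1, t0 = g1 @ w, g0 @ w           # test statistics; threshold to decide

        # Channelized variant: a channelizing matrix T reduces each measurement
        # to a short vector before the same weight computation is applied.
        T = rng.normal(size=(6, 64))      # placeholder channels (6 per measurement)
        c1, c0 = g1 @ T.T, g0 @ T.T
        w_c = hotelling_weights(c1, c0)
        tc1, tc0 = c1 @ w_c, c0 @ w_c     # channelized test statistics

    Performance can then be summarized by the area under the ROC curve computed from the two sets of test statistics, as in the abstract.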

  20. Linear models to perform treaty verification tasks for enhanced information security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  1. Linear models to perform treaty verification tasks for enhanced information security

    NASA Astrophysics Data System (ADS)

    MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.

    2017-02-01

    Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.

  2. Improving Sensorimotor Function and Adaptation using Stochastic Vestibular Stimulation

    NASA Technical Reports Server (NTRS)

    Galvan, R. C.; Bloomberg, J. J.; Mulavara, A. P.; Clark, T. K.; Merfeld, D. M.; Oman, C. M.

    2014-01-01

    Astronauts experience sensorimotor changes during adaptation to G-transitions that occur when entering and exiting microgravity. Post space flight, these sensorimotor disturbances can include postural and gait instability, visual performance changes, manual control disruptions, spatial disorientation, and motion sickness, all of which can hinder the operational capabilities of the astronauts. Crewmember safety would be significantly increased if sensorimotor changes brought on by gravitational changes could be mitigated and adaptation could be facilitated. The goal of this research is to investigate and develop the use of electrical stochastic vestibular stimulation (SVS) as a countermeasure to augment sensorimotor function and facilitate adaptation. For this project, SVS will be applied via electrodes on the mastoid processes at imperceptible amplitude levels. We hypothesize that SVS will improve sensorimotor performance through the phenomenon of stochastic resonance, which occurs when the response of a nonlinear system to a weak input signal is optimized by the application of a particular nonzero level of noise. In line with the theory of stochastic resonance, a specific optimal level of SVS will be found and tested for each subject [1]. Three experiments are planned to investigate the use of SVS in sensory-dependent tasks and performance. The first experiment will aim to demonstrate stochastic resonance in the vestibular system through perception-based motion recognition thresholds obtained using a 6-degree-of-freedom Stewart platform in the Jenks Vestibular Laboratory at Massachusetts Eye and Ear Infirmary. A range of SVS amplitudes will be applied to each subject and the subject-specific optimal SVS level will be identified as that which results in the lowest motion recognition threshold, through previously established, well-developed methods [2,3,4]. The second experiment will investigate the use of optimal SVS in facilitating sensorimotor adaptation to system disturbances. Subjects will adapt to wearing minifying glasses, resulting in decreased vestibular ocular reflex (VOR) gain. The VOR gain will then be intermittently measured while the subject readapts to normal vision, with and without optimal SVS. We expect that optimal SVS will cause a steepening of the adaptation curve. The third experiment will test the use of optimal SVS in an operationally relevant aerospace task, using the tilt translation sled at NASA Johnson Space Center, a test platform capable of recreating the tilt-gain and tilt-translation illusions associated with landing of a spacecraft post-space flight. In this experiment, a perception-based manual control measure will be used to compare performance with and without optimal SVS. We expect performance to improve in this task when optimal SVS is applied. The ultimate goal of this work is to systematically investigate and further understand the potential benefits of stochastic vestibular stimulation in the context of human space flight so that it may be used in the future as a component of a comprehensive countermeasure plan for adaptation to G-transitions.
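
    The subject-specific tuning step lends itself to a very small computational sketch: under the stochastic-resonance hypothesis, thresholds plotted against stimulation amplitude should form a U-shape whose minimum marks the optimal SVS level. The Python snippet below is illustrative only; the amplitudes and threshold values are invented, not measured data.

        import numpy as np

        # Hypothetical per-subject results: motion-recognition thresholds measured
        # at several candidate SVS amplitudes (all values are placeholders).
        svs_amplitude_ma = np.array([0.0, 0.1, 0.2, 0.3, 0.5, 0.7])        # mA
        threshold_deg_s = np.array([1.10, 0.95, 0.82, 0.88, 1.01, 1.20])   # deg/s

        # Stochastic resonance predicts a nonzero amplitude with the lowest threshold.
        best = int(np.argmin(threshold_deg_s))
        print(f"optimal SVS amplitude: {svs_amplitude_ma[best]} mA "
              f"(threshold {threshold_deg_s[best]} deg/s)")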

  3. Assessment of electrosurgical hand controls integrated into a laparoscopic grasper.

    PubMed

    Brown-Clerk, Bernadette; Rousek, Justin B; Lowndes, Bethany R; Eikhout, Sandra M; Balogh, Bradley J; Hallbeck, M Susan

    2011-12-01

    The aim of this study was to quantitatively and qualitatively determine the optimal ergonomic placement of novel electrosurgical hand controls integrated into a standard laparoscopic grasper to optimize functionality. This device will allow laparoscopic surgeons to hand-operate standard electrosurgical equipment, eliminating the use of electrosurgical foot pedals, which are prone to activation errors and cause uncomfortable body positions for the physician. Three hand control designs were evaluated by 26 participants during the performance of four basic inanimate laparoscopic electrosurgical tasks. Task completion time, actuation force, forearm electromyography (EMG) and user preference were evaluated for each hand control design. Task speed was controlled using a metronome to minimize subject variability, and resulted in no significant completion time differences between task types (P > 0.05). Hand control design 1 (CD 1) resulted in the ability to generate significantly greater actuation force for three of the four tasks (P < 0.05) with minimal forearm muscle activation. Additionally, CD 1 was rated significantly better for comfort and ease-of-use compared to the other two hand control designs (P < 0.05). As a result, CD 1 was determined to be an advantageous ergonomic design for the novel electrosurgical hand controls.

  4. Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.

    PubMed

    Sugino, T; Kawahira, H; Nakamura, R

    2014-09-01

    Advanced surgical procedures, which have become complex and difficult, increase the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of surgical procedures. To this end, a surgical task analysis system was developed that uses only surgical navigation information. Division of the surgical procedure, task progress analysis, and task efficiency analysis were performed. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of the surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, was computed to estimate the task efficiency during each stage. These analysis methods were evaluated based on experimental validation with two groups of surgeons, i.e., skilled and "other" surgeons. The performance metrics and analytical parameters included incidents during the operation, the surgical environment, and the surgeon's skills or habits. Comparison of groups revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions; they also manipulated the surgical instruments more gently. The surgical task analysis developed here for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for objective evaluation of surgical expertise.

  5. Acquisition and production of skilled behavior in dynamic decision-making tasks

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Summaries of the four projects completed during the performance of this research are included. The four projects described are: Perceptual Augmentation Aiding for Situation Assessment, Perceptual Augmentation Aiding for Dynamic Decision-Making and Control, Action Advisory Aiding for Dynamic Decision-Making and Control, and Display Design to Support Time-Constrained Route Optimization. Papers based on each of these projects are currently in preparation. The theoretical framework upon which the first three projects are based, Ecological Task Analysis, was also developed during the performance of this research, and is described in a previous report. A project concerned with modeling strategies in human control of a dynamic system was also completed during the performance of this research.

  6. An ICA-based method for the identification of optimal FMRI features and components using combined group-discriminative techniques

    PubMed Central

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.

    2013-01-01

    Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease is an attractive goal. In this paper, we introduce a novel feature-based framework that is sensitive and accurate in detecting group differences (e.g., controls vs. patients), based on three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Second, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks that may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398

  7. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. Recently, however, compressed images have been increasingly used for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate savings compared to using PSNR or SSIM. Moreover, we performed a subjective test of text recognition from compressed images and observed that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction for achieving higher compression ratios for specific semantic analysis tasks.
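
    One way to picture a full-reference semantic measure of this kind is as a similarity score between feature vectors taken from the text regions of the reference and compressed images. The Python sketch below uses a plain cosine similarity over made-up descriptors; the actual feature extractor and pooling in the paper are OCR-specific and not reproduced here.

        import numpy as np

        def isqa_score(feat_ref, feat_cmp):
            """Hypothetical full-reference semantic score: cosine similarity between
            descriptors of the same text region in the original and compressed image
            (1.0 = semantically identical, lower = semantic degradation)."""
            num = float(np.dot(feat_ref, feat_cmp))
            den = float(np.linalg.norm(feat_ref) * np.linalg.norm(feat_cmp)) + 1e-12
            return num / den

        rng = np.random.default_rng(1)
        f_ref = rng.normal(size=128)                        # placeholder descriptor
        f_cmp = f_ref + rng.normal(scale=0.3, size=128)     # simulated compression damage
        print(round(isqa_score(f_ref, f_cmp), 3))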

  8. Impact of police body armour and equipment on mobility.

    PubMed

    Dempsey, Paddy C; Handcock, Phil J; Rehrer, Nancy J

    2013-11-01

    Body armour is used widely by law enforcement and other agencies but has received mixed reviews. This study examined the influence of stab resistant body armour (SRBA) and mandated accessories on physiological responses to, and the performance of, simulated mobility tasks. Fifty-two males (37 ± 9.2 yr, 180.7 ± 6.1 cm, 90.2 ± 11.6 kg, VO2max 50 ± 8.5 ml kg(-1) min(-1), BMI 27.6 ± 3.1, mean ± SD) completed a running VO2max test and task familiarisation. Two experimental sessions were completed (≥4 days in between) in a randomised counterbalanced order, one while wearing SRBA and appointments (loaded) and one without additional load (unloaded). During each session participants performed five mobility tasks: a balance task, an acceleration task that simulated exiting a vehicle, chin-ups, a grappling task, and a manoeuvrability task. A 5-min treadmill run (zero-incline at 13 km·h(-1), running start) was then completed. One min after the run the five mobility tasks were repeated. There was a significant decrease in performance during all tasks with loading (p < 0.001). Participants were off-balance longer; slower to complete the acceleration, grapple and mobility tasks; completed fewer chin-ups; and had greater physiological cost (↑ %HRmax, ↑ %VO2max, ↑ RER) and perceptual effort (↑ RPE) during the 5-min run. Mean performance decreases ranged from 13 to 42% while loaded, with further decreases of 6-16% noted after the 5-min run. Unloaded task performance was no different between phases. Wearing SRBA and appointments significantly reduced mobility during key task elements and resulted in greater physiological effort. These findings could have consequences for optimal function in the working environment and therefore officer and public safety. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    PubMed

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
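
    The figure of merit described above can be captured in a few lines. The Python snippet below computes an inverse-mass-weighted RMSE over volume-of-interest activity estimates; the activity and mass values are invented placeholders, and the real FOM additionally folds in the null-space bias term discussed in the abstract.

        import numpy as np

        # Placeholder VOI-level results from a simulated phantom study.
        true_activity = np.array([10.0, 25.0, 40.0, 60.0])   # MBq (hypothetical)
        est_activity = np.array([9.1, 26.4, 37.8, 63.2])     # MBq (hypothetical)
        voi_mass = np.array([5.0, 12.0, 30.0, 55.0])         # g   (hypothetical)

        def inverse_mass_weighted_rmse(true_act, est_act, mass):
            """RMSE of activity estimates weighted by inverse VOI mass (lower is
            better), echoing the dosimetry-oriented weighting in the abstract."""
            w = 1.0 / mass
            mse = np.sum(w * (est_act - true_act) ** 2) / np.sum(w)
            return float(np.sqrt(mse))

        print(round(inverse_mass_weighted_rmse(true_activity, est_activity, voi_mass), 3))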

  10. Dynamic optimization of cargo movement by trucks in metropolitan areas with adjacent ports

    DOT National Transportation Integrated Search

    2002-06-01

    Today, in the trucking industry, dispatchers perform the tasks of cargo assignment, and driver scheduling. The growing number of containers processed at marine centers and the increasing traffic congestion in metropolitan areas adjacent to marine por...

  11. Framing of task performance strategies: effects on performance in a multiattribute dynamic decision making environment.

    PubMed

    Nygren, T E

    1997-09-01

    It is well documented that the way a static choice task is "framed" can dramatically alter choice behavior, often leading to observable preference reversals. This framing effect appears to result from perceived changes in the nature or location of a person's initial reference point, but it is not clear how framing effects might generalize to performance on dynamic decision making tasks that are characterized by high workload, time constraints, risk, or stress. A study was conducted to examine the hypothesis that framing can introduce affective components to the decision making process and can influence, either favorably (positive frame) or adversely (negative frame), the implementation and use of decision making strategies in dynamic high-workload environments. Results indicated that negative frame participants were significantly impaired in developing and employing a simple optimal decision strategy relative to a positive frame group. Discussion focuses on implications of these results for models of dynamic decision making.

  12. Fault tolerance of artificial neural networks with applications in critical systems

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.

    1992-01-01

    This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and the performance of these networks are first illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause for the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.

  13. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
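
    The recasting idea can be made concrete with a single saturable rate law. The LaTeX fragment below sketches how a Michaelis-Menten term is turned into a product of power laws by introducing an auxiliary variable; the symbols are generic and chosen for illustration rather than taken from the paper.

        % Recasting a saturable rate into GMA (power-law) form via an auxiliary variable.
        \[
          v \;=\; \frac{V_{\max}\, S}{K_M + S}
          \quad\xrightarrow{\; w \,\equiv\, K_M + S \;}\quad
          v \;=\; V_{\max}\, S\, w^{-1},
          \qquad
          \dot{w} \;=\; \dot{S}.
        \]
        % Every recast rate is now a product of power laws, v = gamma * prod_i x_i^{f_i},
        % which is exactly the Generalized Mass Action structure that the global
        % optimization algorithm exploits.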

  14. Framing matters: Effects of framing on older adults’ exploratory decision-making

    PubMed Central

    Cooper, Jessica A.; Blanco, Nathaniel; Maddox, W. Todd

    2016-01-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults’ decision-making performance was preserved when maximizing gains, but declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains-maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subjects behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss-minimization from Experiment 1, and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults’ performance in exploratory decision-making is hindered when framed as loss-minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. PMID:27977218

  15. Framing matters: Effects of framing on older adults' exploratory decision-making.

    PubMed

    Cooper, Jessica A; Blanco, Nathaniel J; Maddox, W Todd

    2017-02-01

    We examined framing effects on exploratory decision-making. In Experiment 1 we tested older and younger adults in two decision-making tasks separated by one week, finding that older adults' decision-making performance was preserved when maximizing gains, but it declined when minimizing losses. Computational modeling indicates that younger adults in both conditions, and older adults in gains maximization, utilized a decreasing threshold strategy (which is optimal), but older adults in losses were better fit by a fixed-probability model of exploration. In Experiment 2 we examined within-subject behavior in older and younger adults in the same exploratory decision-making task, but without a time separation between tasks. We replicated the older adult disadvantage in loss minimization from Experiment 1 and found that the older adult deficit was significantly reduced when the loss-minimization task immediately followed the gains-maximization task. We conclude that older adults' performance in exploratory decision-making is hindered when framed as loss minimization, but that this deficit is attenuated when older adults can first develop a strategy in a gains-framed task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Association between fine motor skills and binocular visual function in children with reading difficulties.

    PubMed

    Niechwiej-Szwedo, Ewa; Alramis, Fatimah; Christian, Lisa W

    2017-12-01

    Performance of fine motor skills (FMS) assessed by a clinical test battery has been associated with reading achievement in school-age children. However, the nature of this association remains to be established. The aim of this study was to assess FMS in children with reading difficulties using two experimental tasks, and to determine if performance is associated with reduced binocular function. We hypothesized that, in comparison to an age- and sex-matched control group, children identified with reading difficulties would perform worse only on a motor task that has been shown to rely on binocular input. To test this hypothesis, motor performance was assessed using two tasks, bead-threading and peg-board, in 19 children who were reading below the expected grade and age level. Binocular vision assessment included tests for stereoacuity, fusional vergence, amplitude of accommodation, and accommodative facility. In comparison to the control group, children with reading difficulties performed significantly worse on the bead-threading task. In contrast, performance on the peg-board task was similar in both groups. Accommodative facility was the only measure of binocular function significantly associated with motor performance. Findings from our exploratory study suggest that normal binocular vision may provide an important sensory input for the optimal development of FMS and reading. Given the small sample size tested in the current study, further investigation to assess the contribution of binocular vision to the development and performance of FMS and reading is warranted. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Preliminary pilot fMRI study of neuropostural optimization with a noninvasive asymmetric radioelectric brain stimulation protocol in functional dysmetria

    PubMed Central

    Mura, Marco; Castagna, Alessandro; Fontani, Vania; Rinaldi, Salvatore

    2012-01-01

    Purpose This study assessed changes in functional dysmetria (FD) and in brain activation observable by functional magnetic resonance imaging (fMRI) during a leg flexion-extension motor task following brain stimulation with a single radioelectric asymmetric conveyer (REAC) pulse, according to the precisely defined neuropostural optimization (NPO) protocol. Population and methods Ten healthy volunteers were assessed using fMRI conducted during a simple motor task before and immediately after delivery of a single REAC-NPO pulse. The motor task consisted of a flexion-extension movement of the legs with the knees bent. FD signs and brain activation patterns were compared before and after REAC-NPO. Results A single 250-millisecond REAC-NPO treatment alleviated FD, as evidenced by patellar asymmetry during a sit-up motion, and modulated activity patterns in the brain, particularly in the cerebellum, during the performance of the motor task. Conclusion Activity in brain areas involved in motor control and coordination, including the cerebellum, is altered by administration of a REAC-NPO treatment and this effect is accompanied by an alleviation of FD. PMID:22536071

  18. Does training under consistent mapping conditions lead to automatic attention attraction to targets in search tasks?

    PubMed

    Lefebvre, Christine; Cousineau, Denis; Larochelle, Serge

    2008-11-01

    Schneider and Shiffrin (1977) proposed that training under consistent stimulus-response mapping (CM) leads to automatic target detection in search tasks. Other theories, such as Treisman and Gelade's (1980) feature integration theory, consider target-distractor discriminability as the main determinant of search performance. The first two experiments pit these two principles against each other. The results show that CM training is neither necessary nor sufficient to achieve optimal search performance. Two other experiments examine whether CM trained targets, presented as distractors in unattended display locations, attract attention away from current targets. The results are again found to vary with target-distractor similarity. Overall, the present study strongly suggests that CM training does not invariably lead to automatic attention attraction in search tasks.

  19. Target detection in insects: optical, neural and behavioral optimizations.

    PubMed

    Gonzalez-Bellido, Paloma T; Fabian, Samuel T; Nordström, Karin

    2016-12-01

    Motion vision provides important cues for many tasks. Flying insects, for example, may pursue small, fast-moving targets for mating or feeding purposes, even when these are detected against self-generated optic flow. Since insects are small, with size-constrained eyes and brains, they have evolved to optimize their optical, neural and behavioral target visualization solutions. Indeed, even if evolutionarily distant insects display different pursuit strategies, target neuron physiology is strikingly similar. Furthermore, the coarse spatial resolution of the insect compound eye might actually be beneficial when it comes to detection of moving targets. In conclusion, tiny insects show higher-than-expected performance in target visualization tasks. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Objective assessment of image quality. IV. Application to adaptive optics

    PubMed Central

    Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher

    2008-01-01

    The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed. PMID:17106464
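
    The three-term covariance decomposition mentioned above can be written schematically as follows; the notation is illustrative shorthand rather than the exact symbols used in the paper.

        % Schematic decomposition of the data covariance into contributions from
        % measurement noise, the random point spread function (PSF), and the
        % randomness of the astronomical scene (object variability).
        \[
          \mathbf{K}_{\mathbf{g}}
          \;=\;
          \underbrace{\mathbf{K}^{\mathrm{noise}}}_{\text{measurement noise}}
          \;+\;
          \underbrace{\mathbf{K}^{\mathrm{PSF}}}_{\text{random PSF}}
          \;+\;
          \underbrace{\mathbf{K}^{\mathrm{obj}}}_{\text{scene variability}} .
        \]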

  1. Optimization of a Small-Scale Engine Using Plasma Enhanced Ignition

    DTIC Science & Technology

    2013-03-01

    ...systems were tested in the small engine and their effects on engine performance determined through comparison with a regular spark discharge (thermal... ...pulse plasma discharge system purchased from Plasmatronics LLC. Air fuel ratio (λ units are used in this report) sweeps were performed at several...

  2. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  3. Thermally Optimized Paradigm of Thermal Management (TOP-M)

    DTIC Science & Technology

    2017-07-18

    Final technical report, Jul 2015 - Jul 2017. NICOP - Thermally Optimized Paradigm of Thermal Management. ...The main goal of this research was to present a New Thermal Management Approach, which combines thermally aware Very/Ultra Large Scale Integration...

  4. Differences in Visuo-Motor Control in Skilled vs. Novice Martial Arts Athletes during Sustained and Transient Attention Tasks: A Motor-Related Cortical Potential Study

    PubMed Central

    Sanchez-Lopez, Javier; Fernandez, Thalia; Silva-Pereyra, Juan; Martinez Mesa, Juan A.; Di Russo, Francesco

    2014-01-01

    Cognitive and motor processes are essential for optimal athletic performance. Individuals trained in different skills and sports may have specialized cognitive abilities and motor strategies related to the characteristics of the activity and the effects of training and expertise. Most studies have investigated differences in motor-related cortical potential (MRCP) during self-paced tasks in athletes but not in stimulus-related tasks. The aim of the present study was to identify the differences in performance and MRCP between skilled and novice martial arts athletes during two different types of tasks: a sustained attention task and a transient attention task. Behavioral and electrophysiological data from twenty-two martial arts athletes were obtained while they performed a continuous performance task (CPT) to measure sustained attention and a cued continuous performance task (c-CPT) to measure transient attention. MRCP components were analyzed and compared between groups. Electrophysiological data in the CPT task indicated larger prefrontal positive activity and greater posterior negativity distribution prior to a motor response in the skilled athletes, while novices showed a significantly larger response-related P3 after a motor response in centro-parietal areas. A different effect occurred in the c-CPT task in which the novice athletes showed strong prefrontal positive activity before a motor response and a large response-related P3, while in skilled athletes, the prefrontal activity was absent. We propose that during the CPT, skilled athletes were able to allocate two different but related processes simultaneously according to CPT demand, which requires controlled attention and controlled motor responses. On the other hand, in the c-CPT, skilled athletes showed better cue facilitation, which permitted a major economy of resources and “automatic” or less controlled responses to relevant stimuli. In conclusion, the present data suggest that motor expertise enhances neural flexibility and allows better adaptation of cognitive control to the requested task. PMID:24621480

  5. Differences in visuo-motor control in skilled vs. novice martial arts athletes during sustained and transient attention tasks: a motor-related cortical potential study.

    PubMed

    Sanchez-Lopez, Javier; Fernandez, Thalia; Silva-Pereyra, Juan; Martinez Mesa, Juan A; Di Russo, Francesco

    2014-01-01

    Cognitive and motor processes are essential for optimal athletic performance. Individuals trained in different skills and sports may have specialized cognitive abilities and motor strategies related to the characteristics of the activity and the effects of training and expertise. Most studies have investigated differences in motor-related cortical potential (MRCP) during self-paced tasks in athletes but not in stimulus-related tasks. The aim of the present study was to identify the differences in performance and MRCP between skilled and novice martial arts athletes during two different types of tasks: a sustained attention task and a transient attention task. Behavioral and electrophysiological data from twenty-two martial arts athletes were obtained while they performed a continuous performance task (CPT) to measure sustained attention and a cued continuous performance task (c-CPT) to measure transient attention. MRCP components were analyzed and compared between groups. Electrophysiological data in the CPT task indicated larger prefrontal positive activity and greater posterior negativity distribution prior to a motor response in the skilled athletes, while novices showed a significantly larger response-related P3 after a motor response in centro-parietal areas. A different effect occurred in the c-CPT task in which the novice athletes showed strong prefrontal positive activity before a motor response and a large response-related P3, while in skilled athletes, the prefrontal activity was absent. We propose that during the CPT, skilled athletes were able to allocate two different but related processes simultaneously according to CPT demand, which requires controlled attention and controlled motor responses. On the other hand, in the c-CPT, skilled athletes showed better cue facilitation, which permitted a major economy of resources and "automatic" or less controlled responses to relevant stimuli. In conclusion, the present data suggest that motor expertise enhances neural flexibility and allows better adaptation of cognitive control to the requested task.

  6. The role of mental rotation and memory scanning on the performance of laparoscopic skills: a study on the effect of camera rotational angle.

    PubMed

    Conrad, J; Shah, A H; Divino, C M; Schluender, S; Gurland, B; Shlasko, E; Szold, A

    2006-03-01

    The rotational angle of the laparoscopic image relative to the true horizon has an unknown influence on performance in laparoscopic procedures. This study evaluates the effect of increasing rotational angle on surgical performance. Surgical residents (group 1) (n = 6) and attending surgeons (group 2) (n = 4) were tested on two laparoscopic skills. The tasks consisted of passing a suture through an aperture, and laparoscopic knot tying. These tasks were assessed at 15° intervals between 0° and 90°, over three consecutive repetitions. Each participant's performance was evaluated based on the time required to complete the tasks and the number of errors incurred. There was an increasing deterioration in suturing performance as the degree of image rotation was increased. Participants showed a statistically significant 20-120% progressive increase in time to completion of the tasks (p = 0.004), with error rates increasing from 10% to 30% (p = 0.04) as the angle increased from 0° to 90°. Knot-tying similarly showed a decrease in performance that was evident in the less experienced surgeons (p = 0.02) but with no obvious effect on the advanced laparoscopic surgeons. When evaluated independently and as a group, both novice and experienced laparoscopic surgeons showed significant prolongation to completion of suturing tasks with increased errors as the rotational angle increased. The knot-tying task shows that experienced surgeons may be able to overcome rotational effects to some extent. This is consistent with results from cognitive neuroscience research evaluating the processing of directional information in spatial motor tasks. It appears that these tasks utilize the time-consuming processes of mental rotation and memory scanning. Optimal performance during laparoscopic procedures requires that the rotation of the camera, and thus the image, be kept to a minimum to maintain a stable horizon. New technology that corrects the rotational angle may benefit the surgeon, decrease operating time, and help to prevent adverse outcomes.

  7. Improving Rural Emergency Medical Services (EMS) through transportation system enhancements Phase II : project brief.

    DOT National Transportation Integrated Search

    2015-12-01

    This study used the National EMS Information System (NEMSIS) South Dakota data to develop data-driven performance metrics for EMS. Researchers used the data for three tasks: geospatial analysis of EMS events, optimization of station locations, and ser...

  8. Productive and counterproductive job crafting: A daily diary study.

    PubMed

    Demerouti, Evangelia; Bakker, Arnold B; Halbesleben, Jonathon R B

    2015-10-01

    The present study aims to uncover the way daily job crafting influences daily job performance (i.e., task performance, altruism, and counterproductive work behavior). Job crafting was conceptualized as "seeking resources," "seeking challenges," and "reducing demands" and viewed as strategies individuals use to optimize their job characteristics. We hypothesized that daily job crafting relates to daily job demands and resources (work pressure and autonomy), which consequently relate to daily work engagement and exhaustion and ultimately to job performance. A sample of 95 employees filled in a quantitative diary for 5 consecutive working days (n occasions = 475). We predicted and found that daily seeking resources was positively associated with daily task performance because daily autonomy and work engagement increased. In contrast, daily reducing demands was detrimental to daily task performance and altruism, because employees lower their daily workload and consequently their engagement and exhaustion, respectively. Only daily seeking challenges was positively (rather than negatively) associated with daily counterproductive behavior. We conclude that employee job crafting can have both beneficial and detrimental effects on job performance. (c) 2015 APA, all rights reserved.

  9. Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks

    PubMed Central

    Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong

    2011-01-01

    In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks, which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks have been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator, which can effectively improve the algorithm's global search ability and population diversity, is also introduced in this algorithm. Finally, the simulation results show that the proposed solution can achieve significantly better performance than other algorithms. PMID:22163971
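
    To make the discrete PSO idea tangible, the Python sketch below assigns tasks to sensor nodes with a very simple integer position coding, pbest/gbest recombination, and a mutation step; the coding, fitness, and parameter values are invented for illustration and are much simpler than the DPSO-DA design in the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        N_TASKS, N_NODES, SWARM, ITERS = 8, 4, 20, 100

        def fitness(assignment):
            """Toy fitness: prefer balanced per-node load (the paper's fitness also
            encodes alliance formation and energy terms)."""
            loads = np.bincount(assignment, minlength=N_NODES)
            return -float(loads.std())

        pos = rng.integers(0, N_NODES, size=(SWARM, N_TASKS))    # particle positions
        pbest = pos.copy()
        pbest_fit = np.array([fitness(p) for p in pos])
        gbest = pbest[int(pbest_fit.argmax())].copy()

        for _ in range(ITERS):
            for i in range(SWARM):
                # Discrete "velocity": each task keeps its node or copies pbest/gbest.
                r = rng.random(N_TASKS)
                new = np.where(r < 0.5, pos[i], np.where(r < 0.8, pbest[i], gbest))
                # Mutation operator to preserve diversity and global search ability.
                mutate = rng.random(N_TASKS) < 0.05
                new = np.where(mutate, rng.integers(0, N_NODES, N_TASKS), new)
                pos[i] = new
                f = fitness(new)
                if f > pbest_fit[i]:
                    pbest[i], pbest_fit[i] = new.copy(), f
            gbest = pbest[int(pbest_fit.argmax())].copy()

        print("best task-to-node assignment:", gbest)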

  10. Dynamic Task Performance, Cohesion, and Communications in Human Groups.

    PubMed

    Giraldo, Luis Felipe; Passino, Kevin M

    2016-10-01

    In the study of the behavior of human groups, it has been observed that there is a strong interaction between the cohesiveness of the group, its performance when the group has to solve a task, and the patterns of communication between the members of the group. Developing mathematical and computational tools for the analysis and design of task-solving groups that are not only cohesive but also perform well is of importance in social sciences, organizational management, and engineering. In this paper, we model a human group as a dynamical system whose behavior is driven by a task optimization process and the interaction between subsystems that represent the members of the group interconnected according to a given communication network. These interactions are described as attractions and repulsions among members. We show that the dynamics characterized by the proposed mathematical model are qualitatively consistent with those observed in real human groups, where the key aspect is that the attraction patterns in the group and the commitment to solve the task are not static but change over time. Through a theoretical analysis of the system, we provide conditions on the parameters that allow the group to have cohesive behaviors, and Monte Carlo simulations are used to study group dynamics for different sets of parameters, communication topologies, and tasks to solve.
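
    The model's ingredients, a task-optimization drive plus attraction and repulsion over a communication network, can be caricatured in a few lines of Python. The gains, network, and toy quadratic task below are placeholders chosen for illustration, not the parameterization analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        N, D, STEPS, DT = 6, 2, 500, 0.01

        x = rng.normal(size=(N, D))           # member states (positions on the task)
        A = np.ones((N, N)) - np.eye(N)       # fully connected communication network
        GOAL = np.ones(D)                     # toy task: move toward a common goal point

        for _ in range(STEPS):
            dx = -2.0 * (x - GOAL)            # commitment to solving the (quadratic) task
            for i in range(N):
                for j in range(N):
                    if A[i, j]:
                        d = x[j] - x[i]
                        dist = np.linalg.norm(d) + 1e-9
                        # Linear attraction plus short-range repulsion between members.
                        dx[i] += 0.5 * d - 0.2 * d / dist**2
            x = x + DT * dx

        spread = np.linalg.norm(x - x.mean(axis=0), axis=1).mean()
        print("mean distance from group centroid (cohesion proxy):", round(float(spread), 3))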

  11. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2004-07-01

    This document details the progress to date on the ''OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE--A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING'' contract for the quarter starting April 2004 through June 2004. The DOE and TerraTek continue to wait for Novatek on the optimization portion of the testing program (they are completely rebuilding their fluid hammer). The latest indication is that the Novatek tool would be ready for retesting only 4Q 2004 or later. Smith International's hammer was tested in April of 2004 (2Q 2004 report). Accomplishments included the following: (1) TerraTek re-tested the ''optimized'' fluid hammer provided by Smith International during April 2004. Many improvements in mud hammer rates of penetration were noted over Phase 1 benchmark testing from November 2002. (2) Shell Exploration and Production in The Hague was briefed on various drilling performance projects including Task 8 ''Cutter Impact Testing''. Shell interest and willingness to assist in the test matrix as an Industry Advisor is appreciated. (3) TerraTek participated in a DOE/NETL Review meeting at Morgantown on April 15, 2004. The discussions were very helpful and a program related to the Mud Hammer optimization project was noted--Terralog modeling work on percussion tools. (4) Terralog's Dr. Gang Han witnessed some of the full-scale optimization testing of the Smith International hammer in order to familiarize him with downhole tools. TerraTek recommends that modeling first start with single cutters/inserts and progress in complexity. (5) The final equipment problem on the impact testing task was resolved through the acquisition of a high data rate laser based displacement instrument. (6) TerraTek provided Novatek much engineering support for the future re-testing of their optimized tool. Work was conducted on slip ring [electrical] specifications and tool collar sealing in the testing vessel with a reconfigured flow system on Novatek's collar.

  12. Collimator optimization and collimator-detector response compensation in myocardial perfusion SPECT using the ideal observer with and without model mismatch and an anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Ghaly, Michael; Links, Jonathan M.; Frey, Eric C.

    2016-03-01

    The collimator is the primary factor that determines the spatial resolution and noise tradeoff in myocardial perfusion SPECT images. In this paper, the goal was to find the collimator that optimizes the image quality in terms of a perfusion defect detection task. Since the optimal collimator could depend on the level of approximation of the collimator-detector response (CDR) compensation modeled in reconstruction, we performed this optimization for the cases of modeling the full CDR (including geometric, septal penetration and septal scatter responses), the geometric CDR, or no model of the CDR. We evaluated the performance on the detection task using three model observers. Two observers operated on data in the projection domain: the Ideal Observer (IO) and IO with Model-Mismatch (IO-MM). The third observer was an anthropomorphic Channelized Hotelling Observer (CHO), which operated on reconstructed images. The projection-domain observers have the advantage that they are computationally less intensive. The IO has perfect knowledge of the image formation process, i.e. it has a perfect model of the CDR. The IO-MM takes into account the mismatch between the true (complete and accurate) model and an approximate model, e.g. one that might be used in reconstruction. We evaluated the utility of these projection domain observers in optimizing instrumentation parameters. We investigated a family of 8 parallel-hole collimators, spanning a wide range of resolution and sensitivity tradeoffs, using a population of simulated projection (for the IO and IO-MM) and reconstructed (for the CHO) images that included background variability. We simulated anterolateral and inferior perfusion defects with variable extents and severities. The area under the ROC curve was estimated from the IO, IO-MM, and CHO test statistics and served as the figure-of-merit. The optimal collimator for the IO had a resolution of 9-11 mm FWHM at 10 cm, which is poorer resolution than typical collimators used for MPS. When the IO-MM and CHO used a geometric or no model of the CDR, the optimal collimator shifted toward higher resolution than that obtained using the IO and the CHO with full CDR modeling. With the optimal collimator, the IO-MM and CHO using geometric modeling gave similar performance to full CDR modeling. Collimators with poorer resolution were optimal when CDR modeling was used. The agreement of rankings between the IO-MM and CHO confirmed that the IO-MM is useful for optimization tasks when model mismatch is present due to its substantially reduced computational burden compared to the CHO.

  13. Optimal service distribution in WSN service system subject to data security constraints.

    PubMed

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-08-04

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA because these SNs can be of different and independent specifications. By the optimal service partition into the ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm.
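
    As a rough picture of how the universal generating function enters, the Python sketch below represents each node's u-function as a list of (probability, performance) pairs and composes two nodes under a bottleneck (series) assumption; the states, probabilities, and composition rule are illustrative only, and the GA search over service partitions is omitted.

        from itertools import product

        def compose_series(u1, u2):
            """Compose two u-functions under a series (bottleneck) structure:
            probabilities multiply, performance is the minimum of the two states."""
            acc = {}
            for (p1, g1), (p2, g2) in product(u1, u2):
                g = min(g1, g2)
                acc[g] = acc.get(g, 0.0) + p1 * p2
            return [(p, g) for g, p in sorted(acc.items())]

        # Two hypothetical sensor nodes running atom-services: each either works
        # (performance 1.0) with some probability or fails (performance 0.0).
        node_a = [(0.9, 1.0), (0.1, 0.0)]
        node_b = [(0.8, 1.0), (0.2, 0.0)]
        print(compose_series(node_a, node_b))   # probability mass at performance 1.0 is 0.72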

  14. Optimal Service Distribution in WSN Service System Subject to Data Security Constraints

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong

    2014-01-01

    Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA because these SNs can be of different and independent specifications. By the optimal service partition into the ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346

  15. Global Design Optimization for Fluid Machinery Applications

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa

    2000-01-01

    Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples of applications selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
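
    As a concrete illustration of the polynomial technique named above, the sketch below fits a quadratic response surface to sampled design points by least squares and reads the optimum off the surrogate. The objective function, sampling plan, and variable names are hypothetical stand-ins, not data from the NASA study.

      # Quadratic response-surface sketch (illustrative only).
      import numpy as np

      rng = np.random.default_rng(1)

      def true_response(x1, x2):
          # Hypothetical noisy "experiment/CFD" result to be approximated.
          return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.2) ** 2 + 0.05 * rng.standard_normal(np.shape(x1))

      # Sample design points in [-1, 1]^2 and build the quadratic basis.
      X = rng.uniform(-1, 1, size=(30, 2))
      y = true_response(X[:, 0], X[:, 1])
      basis = lambda x1, x2: np.column_stack(
          [np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
      coef, *_ = np.linalg.lstsq(basis(X[:, 0], X[:, 1]), y, rcond=None)

      # Evaluate the surrogate on a grid and report its minimizer.
      g = np.linspace(-1, 1, 201)
      G1, G2 = np.meshgrid(g, g)
      surrogate = basis(G1.ravel(), G2.ravel()) @ coef
      i = surrogate.argmin()
      print("surrogate optimum near", (G1.ravel()[i], G2.ravel()[i]))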

  16. Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony

    PubMed Central

    Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2018-01-01

    Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracies. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602

  17. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  18. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms-being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.
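
    The generational distance and inverted generational distance metrics mentioned in both records above can be computed directly from an approximation set and a reference Pareto front. The sketch below uses a common mean-nearest-distance variant of these metrics; the point sets are hypothetical, not the crop-mix results.

      # Generational distance (GD) and inverted generational distance (IGD) sketch.
      import numpy as np

      def generational_distance(approx, front):
          # Mean Euclidean distance from each obtained point to its nearest reference point.
          d = np.linalg.norm(approx[:, None, :] - front[None, :, :], axis=-1)
          return d.min(axis=1).mean()

      def inverted_generational_distance(approx, front):
          # Same idea with the roles reversed: how well the reference front is covered.
          return generational_distance(front, approx)

      # Hypothetical bi-objective points (both objectives expressed as minimization).
      front = np.column_stack([np.linspace(0, 1, 50), 1 - np.linspace(0, 1, 50)])
      approx = front[::5] + 0.02          # an offset subset standing in for an algorithm's output

      print("GD  =", round(generational_distance(approx, front), 4))
      print("IGD =", round(inverted_generational_distance(approx, front), 4))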

  19. Modulation of competing memory systems by distraction.

    PubMed

    Foerde, Karin; Knowlton, Barbara J; Poldrack, Russell A

    2006-08-01

    Different forms of learning and memory depend on functionally and anatomically separable neural circuits [Squire, L. R. (1992) Psychol. Rev. 99, 195-231]. Declarative memory relies on a medial temporal lobe system, whereas habit learning relies on the striatum [Cohen, N. J. & Eichenbaum, H. (1993) Memory, Amnesia, and the Hippocampal System (MIT Press, Cambridge, MA)]. How these systems are engaged to optimize learning and behavior is not clear. Here, we present results from functional neuroimaging showing that the presence of a demanding secondary task during learning modulates the degree to which subjects solve a problem using either declarative memory or habit learning. Dual-task conditions did not reduce accuracy but reduced the amount of declarative learning about the task. Medial temporal lobe activity was correlated with task performance and declarative knowledge after learning under single-task conditions, whereas performance was correlated with striatal activity after dual-task learning conditions. These results demonstrate a fundamental difference in these memory systems in their sensitivity to concurrent distraction. The results are consistent with the notion that declarative and habit learning compete to mediate task performance, and they suggest that the presence of distraction can bias this competition. These results have implications for learning in multitask situations, suggesting that, even if distraction does not decrease the overall level of learning, it can result in the acquisition of knowledge that can be applied less flexibly in new situations.

  20. Memory monitoring by animals and humans

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Allendoerfer, K. R.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1998-01-01

    The authors asked whether animals and humans would use similarly an uncertain response to escape indeterminate memories. Monkeys and humans performed serial probe recognition tasks that produced differential memory difficulty across serial positions (e.g., primacy and recency effects). Participants were given an escape option that let them avoid any trials they wished and receive a hint to the trial's answer. Across species, across tasks, and even across conspecifics with sharper or duller memories, monkeys and humans used the escape option selectively when more indeterminate memory traces were probed. Their pattern of escaping always mirrored the pattern of their primary memory performance across serial positions. Signal-detection analyses confirm the similarity of the animals' and humans' performances. Optimality analyses assess their efficiency. Several aspects of monkeys' performance suggest the cognitive sophistication of their decisions to escape.

  1. Sleep to the beat: A nap favours consolidation of timing.

    PubMed

    Verweij, Ilse M; Onuki, Yoshiyuki; Van Someren, Eus J W; Van der Werf, Ysbrand D

    2016-06-01

    Growing evidence suggests that sleep is important for procedural learning, but few studies have investigated the effect of sleep on the temporal aspects of motor skill learning. We assessed the effect of a 90-min daytime nap on learning a motor timing task, using 2 adaptations of a serial interception sequence learning (SISL) task. Forty-two right-handed participants performed the task before and after a 90-min period of sleep or wake. Electroencephalography (EEG) was recorded throughout. The motor task consisted of a sequential spatial pattern and was performed according to 2 different timing conditions, that is, either following a sequential or a random temporal pattern. The increase in accuracy was compared between groups using a mixed linear regression model. Within the sleep group, performance improvement was modeled based on sleep characteristics, including spindle- and slow-wave density. The sleep group, but not the wake group, showed improvement in the random temporal condition, and especially and significantly more strongly in the sequential temporal condition. None of the sleep characteristics predicted improvement in either of the timing conditions. In conclusion, a daytime nap improves performance on a timing task. We show that performance on the task with a sequential timing pattern benefits more from sleep than performance with a random timing pattern. More importantly, the temporal sequence did not benefit initial learning, because differences arose only after an offline period and specifically when this period contained sleep. Sleep appears to aid in the extraction of regularities for optimal subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Being selective at the plate: processing dependence between perceptual variables relates to hitting goals and performance.

    PubMed

    Gray, Rob

    2013-08-01

    Performance of a skill that involves acting on a goal object (e.g., a ball to be hit) can influence one's judgment of the size and speed of that object. The present study examined how these action-specific effects are affected when the goal of the actor is varied and they are free to choose between alternative actions. In Experiment 1, expert baseball players were asked to perform three different directional hitting tasks in a batting simulation and make interleaved perceptual judgments about three ball parameters (speed, plate crossing location, and size). Perceived ball size was largest (and perceived speed was slowest) when the ball crossing location was optimal for the particular hitting task the batter was performing (e.g., an "outside" pitch for opposite-field hitting). The magnitude of processing dependency between variables (speed vs. location and size vs. location) was positively correlated with batting performance. In Experiment 2, the action-specific effects observed in Experiment 1 were mimicked by systematically changing the ball diameter in the simulation as a function of plate crossing location. The number of swing initiations was greater when ball size was larger, and batters were more successful in the hitting task for which the larger pitches were optimal (e.g., greater number of pull hits than opposite-field hits when "inside" pitches were larger). These findings suggest attentional accentuation of goal-relevant targets underlies action-related changes in perception and are consistent with an action selection role for these effects. 2013 APA, all rights reserved

  3. Numerical aerodynamic simulation facility. Preliminary study extension

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished by effort in the following tasks: (1) to further develop, optimize, and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, including a user-level high-level programming language and a correspondence between the programming language and the instruction set, and to outline the operating system requirements.

  4. Fitness for duty: A 3 minute version of the Psychomotor Vigilance Test predicts fatigue related declines in luggage screening performance

    PubMed Central

    Basner, Mathias; Rubinstein, Joshua

    2011-01-01

    Objective: To evaluate the ability of a 3-min Psychomotor Vigilance Test (PVT) to predict fatigue related performance decrements on a simulated luggage screening task (SLST). Methods: Thirty-six healthy non-professional subjects (mean age 30.8 years, 20 female) participated in a 4 day laboratory protocol including a 34 hour period of total sleep deprivation with PVT and SLST testing every 2 hours. Results: Eleven and 20 lapses (355 ms threshold) on the PVT optimally divided SLST performance into high, medium, and low performance bouts with significantly decreasing threat detection performance A′. Assignment to the different SLST performance groups replicated homeostatic and circadian patterns during total sleep deprivation. Conclusions: The 3 min PVT was able to predict performance on a simulated luggage screening task. Fitness-for-duty feasibility should now be tested in professional screeners and operational environments. PMID:21912278

  5. Fitness for duty: a 3-minute version of the Psychomotor Vigilance Test predicts fatigue-related declines in luggage-screening performance.

    PubMed

    Basner, Mathias; Rubinstein, Joshua

    2011-10-01

    To evaluate the ability of a 3-minute Psychomotor Vigilance Test (PVT) to predict fatigue-related performance decrements on a simulated luggage-screening task (SLST). Thirty-six healthy nonprofessional subjects (mean age = 30.8 years, 20 women) participated in a 4-day laboratory protocol including a 34-hour period of total sleep deprivation with PVT and SLST testing every 2 hours. Eleven and 20 lapses (355-ms threshold) on the PVT optimally divided SLST performance into high-, medium-, and low-performance bouts with significantly decreasing threat detection performance A'. Assignment to the different SLST performance groups replicated homeostatic and circadian patterns during total sleep deprivation. The 3-minute PVT was able to predict performance on a simulated luggage-screening task. Fitness-for-duty feasibility should now be tested in professional screeners and operational environments.

  6. Performance and robustness of optimal fractional fuzzy PID controllers for pitch control of a wind turbine using chaotic optimization algorithms.

    PubMed

    Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali

    2018-05-11

    The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. On the other hand, the parameters of a PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor optimal. To cope with these drawbacks, in this paper, two advanced controllers called fuzzy PID (FPID) and fractional-order fuzzy PID (FOFPID) are proposed to improve the pitch control performance. Meanwhile, to find the parameters of the controllers, chaotic evolutionary optimization methods are used. Using evolutionary optimization methods not only gives us the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All the optimization procedures are applied to a 2-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller can reach better performance and robustness, while guaranteeing less fatigue damage at different wind speeds, in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduling PID (GSPID) controllers. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
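
    The chaotic-map ingredient mentioned above can be illustrated with a toy search in which a logistic map, rather than a pseudo-random generator, drives the candidate controller gains. This is only a sketch of the idea under stated assumptions: the bounds, the three-gain parameterization, and the quadratic test objective are hypothetical and stand in for the wind-turbine pitch-control cost used in the paper.

      # Chaotic-map random search sketch (toy objective, illustrative only).
      import numpy as np

      def objective(gains):
          # Hypothetical surrogate for a control cost (e.g. an integrated tracking error).
          kp, ki, kd = gains
          return (kp - 2.0) ** 2 + (ki - 0.5) ** 2 + (kd - 0.1) ** 2

      lo = np.array([0.0, 0.0, 0.0])      # lower bounds on the gains (hypothetical)
      hi = np.array([5.0, 2.0, 1.0])      # upper bounds on the gains (hypothetical)

      x = np.array([0.21, 0.37, 0.63])    # chaotic state per gain, kept in (0, 1)
      best_gains, best_cost = None, np.inf
      for _ in range(2000):
          x = 4.0 * x * (1.0 - x)         # logistic map drives the candidate sequence
          gains = lo + x * (hi - lo)
          cost = objective(gains)
          if cost < best_cost:
              best_gains, best_cost = gains, cost

      print("best gains:", np.round(best_gains, 3), "cost:", round(best_cost, 5))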

  7. The use of kernel local Fisher discriminant analysis for the channelization of the Hotelling model observer

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.

    2015-03-01

    It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.

  8. The effects of gamelike features and test location on cognitive test performance and participant enjoyment

    PubMed Central

    Skinner, Andy; Woods, Andy T.; Lawrence, Natalia S.; Munafò, Marcus

    2016-01-01

    Computerised cognitive assessments are a vital tool in the behavioural sciences, but participants often view them as effortful and unengaging. One potential solution is to add gamelike elements to these tasks in order to make them more intrinsically enjoyable, and some researchers have posited that a more engaging task might produce higher quality data. This assumption, however, remains largely untested. We investigated the effects of gamelike features and test location on the data and enjoyment ratings from a simple cognitive task. We tested three gamified variants of the Go-No-Go task, delivered both in the laboratory and online. In the first version of the task participants were rewarded with points for performing optimally. The second version of the task was framed as a cowboy shootout. The third version was a standard Go-No-Go task, used as a control condition. We compared reaction time, accuracy and subjective measures of enjoyment and engagement between task variants and study location. We found points to be a highly suitable game mechanic for gamified cognitive testing because they did not disrupt the validity of the data collected but increased participant enjoyment. However, we found no evidence that gamelike features could increase engagement to the point where participant performance improved. We also found that while participants enjoyed the cowboy themed task, the difficulty of categorising the gamelike stimuli adversely affected participant performance, increasing No-Go error rates by 28% compared to the non-game control. Responses collected online vs. in the laboratory had slightly longer reaction times but were otherwise very similar, supporting other findings that online crowdsourcing is an acceptable method of data collection for this type of research. PMID:27441120

  9. Noise properties and task-based evaluation of diffraction-enhanced imaging

    PubMed Central

    Brankov, Jovan G.; Saiz-Herranz, Alejandro; Wernick, Miles N.

    2014-01-01

    Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. DEI has mainly been studied using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition lies in the relatively lower x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. Several issues that must be understood in order to design and optimize DEI imaging systems with respect to noise performance are addressed. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects. PMID:26158056

  10. Task-based lens design with application to digital mammography

    NASA Astrophysics Data System (ADS)

    Chen, Liying; Barrett, Harrison H.

    2005-01-01

    Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians. The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The signal-to-noise ratio (SNR) of the channelized Hotelling observer is used to quantify this detectability of the simulated lesion (signal) on the simulated mammographic background. Plots of channelized Hotelling SNR versus signal location for various lens apertures, various working distances, and various focusing places are presented. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.

  11. Task-based lens design, with application to digital mammography

    NASA Astrophysics Data System (ADS)

    Chen, Liying

    Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians (DOGs). The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to the physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The SNR of the channelized Hotelling observer is used to quantify the detectability of the simulated lesion (signal) upon the simulated mammographic background. In this work, plots of channelized Hotelling SNR vs. signal location for various lens apertures, various working distances, and various focusing places are shown. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.
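
    Both lens-design records above use difference-of-Gaussians (DOG) channels for the channelized Hotelling observer. The sketch below shows one conventional way such radial-frequency channels are constructed; the channel parameters are hypothetical example values, not the ones used in these studies.

      # Difference-of-Gaussians (DOG) channel profiles (illustrative parameters).
      import numpy as np

      def dog_channels(freq, n_channels=3, q=1.67, alpha=2.0, sigma0=0.015):
          # Each channel is the difference of two Gaussians in radial spatial frequency.
          channels = []
          for j in range(n_channels):
              sigma = sigma0 * alpha ** j
              channels.append(np.exp(-0.5 * (freq / (q * sigma)) ** 2)
                              - np.exp(-0.5 * (freq / sigma) ** 2))
          return np.array(channels)

      freq = np.linspace(0, 0.5, 256)     # radial frequency grid in cycles/pixel (hypothetical)
      C = dog_channels(freq)
      print("channel peak frequencies (cycles/pixel):", freq[C.argmax(axis=1)])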

  12. Blue-Enriched White Light Enhances Physiological Arousal But Not Behavioral Performance during Simulated Driving at Early Night

    PubMed Central

    Rodríguez-Morilla, Beatriz; Madrid, Juan A.; Molina, Enrique; Correa, Angel

    2017-01-01

    Vigilance usually deteriorates over prolonged driving at non-optimal times of day. Exposure to blue-enriched light has been shown to enhance arousal, leading to behavioral benefits in some cognitive tasks. However, the cognitive effects of long-wavelength light have been less studied and its effects on driving performance remained to be addressed. We tested the effects of a blue-enriched white light (BWL) and a long-wavelength orange light (OL) vs. a control condition of dim light on subjective, physiological and behavioral measures at 21:45 h. Neurobehavioral tests included the Karolinska Sleepiness Scale and a subjective mood scale, recording of the distal-proximal temperature gradient (DPG, as an index of physiological arousal), accuracy in simulated driving and reaction time in the auditory psychomotor vigilance task. The results showed that BWL decreased the DPG (reflecting enhanced arousal), while it did not improve reaction time or driving performance. Instead, blue light produced larger driving errors than OL, while performance in OL was stable along time on task. These data suggest that physiological arousal induced by light does not necessarily imply cognitive improvement. Indeed, excessive arousal might deteriorate accuracy in complex tasks requiring precision, such as driving. PMID:28690558

  13. Optimization of Skill Retention in the U.S. Army through Initial Training Analysis and Design. Volume 1.

    DTIC Science & Technology

    1983-05-01

    observed end-of-course scores for tasks trained to criterion. The MGA software was calibrated to provide retention estimates at two levels of...exceed the MGA estimates. Thirty-five out of forty, or 87.5%, of the tasks met this expectation. For these first trial data, MGA software predicts...Objective: The objective of this effort was to perform an operational test of the capability of MGA Skill Training and Retention (STAR©) software to

  14. Imaging Tasks Scheduling for High-Altitude Airship in Emergency Condition Based on Energy-Aware Strategy

    PubMed Central

    Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma

    2013-01-01

    Aiming at the imaging task scheduling problem for a high-altitude airship in emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, namely task ranking, valuable-task detection, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparative analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
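
    The energy-aware idea described above, cruising only as fast as the task deadlines require, can be sketched in a few lines; the leg distances, deadlines, speed limits, and the implicit energy model below are hypothetical, not values from the paper.

      # Deadline-driven cruise-speed sketch: fly as slowly as the next deadline allows.
      def plan_speeds(legs, v_min=5.0, v_max=20.0):
          # legs: list of (distance_km, time_available_h) between consecutive imaging tasks.
          speeds = []
          for distance, slack in legs:
              v = max(v_min, min(v_max, distance / slack))   # slowest feasible speed
              speeds.append(v)
          return speeds

      legs = [(30.0, 3.0), (40.0, 2.5), (10.0, 2.0)]         # hypothetical legs
      print("cruise speeds (km/h):", plan_speeds(legs))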

  15. Nursing performance under high workload: a diary study on the moderating role of selection, optimization and compensation strategies.

    PubMed

    Baethge, Anja; Müller, Andreas; Rigotti, Thomas

    2016-03-01

    The aim of this study was to investigate whether selective optimization with compensation constitutes an individualized action strategy for nurses wanting to maintain job performance under high workload. High workload is a major threat to healthcare quality and performance. Selective optimization with compensation is considered to enhance the efficient use of intra-individual resources and, therefore, is expected to act as a buffer against the negative effects of high workload. The study applied a diary design. Over five consecutive workday shifts, self-report data on workload was collected at three randomized occasions during each shift. Self-reported job performance was assessed in the evening. Self-reported selective optimization with compensation was assessed prior to the diary reporting. Data were collected in 2010. Overall, 136 nurses from 10 German hospitals participated. Selective optimization with compensation was assessed with a nine-item scale that was specifically developed for nursing. The NASA-TLX scale indicating the pace of task accomplishment was used to measure workload. Job performance was assessed with one item each concerning performance quality and forgetting of intentions. There was a weaker negative association between workload and both indicators of job performance in nurses with a high level of selective optimization with compensation, compared with nurses with a low level. Considering the separate strategies, selection and compensation turned out to be effective. The use of selective optimization with compensation is conducive to nurses' job performance under high workload levels. This finding is in line with calls to empower nurses' individual decision-making. © 2015 John Wiley & Sons Ltd.

  16. Task-based statistical image reconstruction for high-quality cone-beam CT

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR—viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.

  17. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and genetic algorithms in terms of CPU execution time and allocated memory.

  18. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults

    PubMed Central

    Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146

  19. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data was in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and secondly, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.

  20. Trading off switch costs and stimulus availability benefits: An investigation of voluntary task-switching behavior in a predictable dynamic multitasking environment.

    PubMed

    Mittelstädt, Victor; Miller, Jeff; Kiesel, Andrea

    2018-03-09

    In the present study, we introduce a novel, self-organized task-switching paradigm that can be used to study more directly the determinants of switching. Instead of instructing participants to randomly switch between tasks, as in the classic voluntary task-switching paradigm (Arrington & Logan, 2004), we instructed participants to optimize their task performance in a voluntary task-switching environment in which the stimulus associated with the previously selected task appeared in each trial after a delay. Importantly, the stimulus onset asynchrony (SOA) increased further with each additional repetition of this task, whereas the stimulus needed for a task switch was always immediately available. We conducted two experiments with different SOA increments (i.e., Exp. 1a = 50 ms, Exp. 1b = 33 ms) to see whether this procedure would induce switching behavior, and we explored how people trade off switch costs against the increasing availability of the stimulus needed for a task repetition. We observed that participants adapted their behavior to the different task environments (i.e., SOA increments) and that participants switched tasks when the SOA in task switches approximately matched the switch costs. Moreover, correlational analyses indicated relations between individual switch costs and individual switch rates across participants. Together, these results demonstrate that participants were sensitive to the increased availability of switch stimuli in deciding whether to switch or to repeat, which in turn demonstrates flexible adaptive task selection behavior. We suggest that performance limitations in task switching interact with the task environment to influence switching behavior.
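
    A back-of-envelope reading of the trade-off described above: if waiting for the repetition stimulus grows by one SOA increment per repetition while the switch stimulus is always immediately available, one would expect a switch once the accumulated SOA roughly matches an individual's switch cost. The sketch below works through that arithmetic; only the 50-ms and 33-ms increments come from the abstract, and the switch-cost values are hypothetical.

      # Predicted run length before a voluntary switch (toy arithmetic).
      import math

      def predicted_run_length(switch_cost_ms, soa_increment_ms):
          # Repeat until the waiting time for the repetition stimulus (SOA)
          # roughly matches the time cost of switching tasks.
          return math.ceil(switch_cost_ms / soa_increment_ms)

      for increment in (50, 33):
          for switch_cost in (150, 250):                     # hypothetical switch costs
              print(f"increment {increment} ms, switch cost {switch_cost} ms -> "
                    f"switch after ~{predicted_run_length(switch_cost, increment)} repetitions")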

  1. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as a system of nonlinear differential equations. When such models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary optimization duration becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods and software tools and the nonlinearity of models in different parameter spaces. The software tool ConvAn is developed to analyze the statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of the ability to find the global optimum, or values close to it, as well as the necessary computational time to reach them. It is also possible to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
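
    The curve normalization mentioned above can be illustrated with a short sketch that rescales best-so-far objective values onto a common [0, 1] axis so that runs of different methods and models become comparable; the two synthetic runs below are stand-ins, not output of any optimizer analyzed with ConvAn.

      # Normalizing convergence curves to a common scale (synthetic data).
      import numpy as np

      rng = np.random.default_rng(2)

      def normalize(curve):
          # Map best-so-far objective values onto [0, 1] (0 = starting value, 1 = final best).
          curve = np.minimum.accumulate(curve)               # enforce monotone best-so-far
          span = curve[0] - curve[-1]
          return (curve[0] - curve) / span if span > 0 else np.zeros_like(curve)

      # Two synthetic runs standing in for different optimization methods.
      run_a = 100 * np.exp(-np.linspace(0, 5, 200)) + rng.normal(0, 1, 200)
      run_b = 100 / (1 + np.linspace(0, 20, 200)) + rng.normal(0, 1, 200)

      for name, run in (("method A", run_a), ("method B", run_b)):
          n = normalize(run)
          print(name, "reaches 95% of its final improvement at iteration", int(np.argmax(n >= 0.95)))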

  2. Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, P.; Zbijewski, W.; Gang, G. J.

    2011-10-15

    Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ~1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ~65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results--e.g., ~0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ~0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
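
    For readers unfamiliar with the detectability index d' used in this record, a standard prewhitening-observer form from cascaded-systems analysis is reproduced below; this is the conventional textbook expression rather than a formula quoted from the record, and normalization conventions for the NEQ and the task function vary between papers.

      d'^{2} = \iint \left| W_{\mathrm{Task}}(u,v) \right|^{2} \, \mathrm{NEQ}(u,v) \, \mathrm{d}u \, \mathrm{d}v

    where W_Task(u,v) is the spatial-frequency weighting defined by the imaging task (for a detection task, the difference of the Fourier transforms of the two hypotheses) and NEQ(u,v) is the noise-equivalent quanta.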

  3. GPU computing in medical physics: a review.

    PubMed

    Pratx, Guillem; Xing, Lei

    2011-05-01

    The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.

  4. Can Subjects be Guided to Optimal Decisions The Use of a Real-Time Training Intervention Model

    DTIC Science & Technology

    2016-06-01

    execution of the task and may then be analyzed to determine if there is correlation between designated factors (scores, proportion of time in each...state with their decision performance in real time could allow training systems to be designed to tailor training to the individual decision maker...

  5. Object-Oriented Multi-Disciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2011-01-01

    An Object-Oriented Optimization (O3) tool was developed that leverages existing tools and practices, and allows the easy integration and adoption of new state-of-the-art software. At the heart of the O3 tool is the Central Executive Module (CEM), which can integrate disparate software packages in a cross platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. This object-oriented framework can integrate the analysis codes for multiple disciplines instead of relying on one code to perform the analysis for all disciplines. The CEM was written in FORTRAN and the script commands for each performance index were submitted through the use of the FORTRAN Call System command. In this CEM, the user chooses an optimization methodology, defines objective and constraint functions from performance indices, and provides starting and side constraints for continuous as well as discrete design variables. The structural analysis modules such as computations of the structural weight, stress, deflection, buckling, and flutter and divergence speeds have been developed and incorporated into the O3 tool to build an object-oriented Multidisciplinary Design, Analysis, and Optimization (MDAO) tool.

  6. The Effectiveness of Neurofeedback Training in Algorithmic Thinking Skills Enhancement.

    PubMed

    Plerou, Antonia; Vlamos, Panayiotis; Triantafillidis, Chris

    2017-01-01

    Although research on learning difficulties is overall at an advanced stage, studies related to algorithmic thinking difficulties are limited, since interest in this field has been raised only recently. In this paper, an interactive evaluation screener enhanced with neurofeedback elements, aimed at evaluating algorithmic task solving, is proposed. The effect of HCI, color, narration and neurofeedback elements was evaluated in the case of algorithmic task assessment. Results suggest enhanced performance for the neurofeedback-trained group in terms of total correct and optimal algorithmic task solutions. Furthermore, findings suggest that skills concerning the way an algorithm is conceived, designed, applied and evaluated are essentially improved.

  7. Attentional load and attentional boost: a review of data and theory.

    PubMed

    Swallow, Khena M; Jiang, Yuhong V

    2013-01-01

    Both perceptual and cognitive processes are limited in capacity. As a result, attention is selective, prioritizing items and tasks that are important for adaptive behavior. However, a number of recent behavioral and neuroimaging studies suggest that, at least under some circumstances, increasing attention to one task can enhance performance in a second task (e.g., the attentional boost effect). Here we review these findings and suggest a new theoretical framework, the dual-task interaction model, that integrates these findings with current views of attentional selection. To reconcile the attentional boost effect with the effects of attentional load, we suggest that temporal selection results in a temporally specific enhancement across modalities, tasks, and spatial locations. Moreover, the effects of temporal selection may be best observed when the attentional system is optimally tuned to the temporal dynamics of incoming stimuli. Several avenues of research motivated by the dual-task interaction model are then discussed.

  8. Attentional Load and Attentional Boost: A Review of Data and Theory

    PubMed Central

    Swallow, Khena M.; Jiang, Yuhong V.

    2013-01-01

    Both perceptual and cognitive processes are limited in capacity. As a result, attention is selective, prioritizing items and tasks that are important for adaptive behavior. However, a number of recent behavioral and neuroimaging studies suggest that, at least under some circumstances, increasing attention to one task can enhance performance in a second task (e.g., the attentional boost effect). Here we review these findings and suggest a new theoretical framework, the dual-task interaction model, that integrates these findings with current views of attentional selection. To reconcile the attentional boost effect with the effects of attentional load, we suggest that temporal selection results in a temporally specific enhancement across modalities, tasks, and spatial locations. Moreover, the effects of temporal selection may be best observed when the attentional system is optimally tuned to the temporal dynamics of incoming stimuli. Several avenues of research motivated by the dual-task interaction model are then discussed. PMID:23730294

  9. Asynchronous decision making in a memorized paddle pressing task

    NASA Astrophysics Data System (ADS)

    Dankert, James R.; Olson, Byron; Si, Jennie

    2008-12-01

    This paper presents a method for asynchronous decision making using recorded neural data in a binary decision task. This is a demonstration of a technique for developing motor cortical neural prosthetics that do not rely on external cued timing information. The system presented in this paper uses support vector machines and leaky integrate-and-fire elements to predict directional paddle presses. In addition to the traditional metrics of accuracy, asynchronous systems must also optimize the time needed to make a decision. The system presented is able to predict paddle presses with a median accuracy of 88% and all decisions are made before the time of the actual paddle press. An alternative bit rate measure of performance is defined to show that the system proposed here is able to perform the task with the same efficiency as the rats.
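
    The asynchronous element described above, accumulating classifier evidence with leaky integrate-and-fire elements until a threshold crossing forces a decision, can be sketched as follows; the per-bin scores, leak factor, and threshold are hypothetical, not the values used with the recorded neural data.

      # Leaky accumulation of classifier scores until a decision threshold is crossed.
      import numpy as np

      rng = np.random.default_rng(3)

      def decide(scores, leak=0.8, threshold=4.0):
          # Integrate signed classifier outputs over time bins with a leak;
          # a decision fires when the accumulator crosses +/- threshold.
          acc = 0.0
          for t, s in enumerate(scores):
              acc = leak * acc + s
              if abs(acc) >= threshold:
                  return ("left" if acc < 0 else "right"), t
          return None, len(scores)       # no decision within the trial

      # Hypothetical per-bin classifier outputs drifting toward a "right" paddle press.
      scores = rng.normal(0.5, 1.0, size=60)
      direction, t = decide(scores)
      print("decision:", direction, "at bin", t)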

  10. Arctic cognition: a study of cognitive performance in summer and winter at 69 degrees N

    NASA Technical Reports Server (NTRS)

    Brennen, T.; Martinussen, M.; Hansen, B. O.; Hjemdal, O.

    1999-01-01

    Evidence has accumulated over the past 15 years that affect in humans is cyclical. In winter there is a tendency to depression, with remission in summer, and this effect is stronger at higher latitudes. In order to determine whether human cognition is similarly rhythmical, this study investigated the cognitive processes of 100 participants living at 69 degrees N. Participants were tested in summer and winter on a range of cognitive tasks, including verbal memory, attention and simple reaction time tasks. The seasonally counterbalanced design and the very northerly latitude of this study provide optimal conditions for detecting impaired cognitive performance in winter, and the conclusion is negative: of five tasks with seasonal effects, four had disadvantages in summer. Like the menstrual cycle, the circannual cycle appears to influence mood but not cognition.

  11. Size effects on the touchpad, touchscreen, and keyboard tasks of netbooks.

    PubMed

    Lai, Chih-Chun; Wu, Chih-Fu

    2012-10-01

    The size of a netbook plays an important role in its success. Somehow, the viewing area on screen and the ability to type fast were traded off for portability. To investigate further, this study compared the performance of different-sized touchpads, touchscreens, and keyboards of netbooks of four sizes on five application tasks. Consequently, the 7" netbook was significantly slower than larger netbooks in all the tasks except the 8.9" netbook touchpad (successive selecting and clicking) or keyboard tasks. Differences were non-significant for the operating times among the 8.9", 10.1", and 11.6" netbooks in all the tasks except between the 8.9" and 11.6" netbooks in keyboard tasks. For error rates, device-type effects rather than size effects were significant. Gender effects were not significant for operating times in any of the tasks, but were significant for error rates in the touchscreen (multi-direction touching) and keyboard tasks. Considering size effects, the 10.1" netbooks seemed to optimally balance portability and productivity.

  12. Mental workload and motor performance dynamics during practice of reaching movements under various levels of task difficulty.

    PubMed

    Shuggi, Isabelle M; Oh, Hyuk; Shewokis, Patricia A; Gentili, Rodolphe J

    2017-09-30

    The assessment of mental workload can inform attentional resource allocation during task performance that is essential for understanding the underlying principles of human cognitive-motor behavior. While many studies have focused on mental workload in relation to human performance, a modest body of work has examined it in a motor practice/learning context without considering individual variability. Thus, this work aimed to examine mental workload by employing the NASA TLX as well as the changes in motor performance resulting from the practice of a novel reaching task. Two groups of participants practiced a reaching task at a high and low nominal difficulty during which a group-level analysis assessed the mental workload, motor performance and motor improvement dynamics. A secondary cluster analysis was also conducted to identify specific individual patterns of cognitive-motor responses. Overall, both group- and cluster-level analyses revealed that: (i) all participants improved their performance throughout motor practice, and (ii) an increase in mental workload was associated with a reduction of the quality of motor performance along with a slower rate of motor improvement. The results are discussed in the context of the optimal challenge point framework and in particular it is proposed that under the experimental conditions employed here, functional task difficulty: (i) would possibly depend on an individual's information processing capabilities, and (ii) could be indexed by the level of mental workload which, when excessively heightened, can decrease the quality of performance and more generally result in delayed motor improvements. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. Task driven optimal leg trajectories in insect-scale legged microrobots

    NASA Astrophysics Data System (ADS)

    Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert

    Origami-inspired layered manufacturing techniques and 3D printing have enabled the development of highly articulated legged robots at the insect scale, including the 1.43 g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks. Consequently, the choice of gait selection, body morphology, leg trajectory, foot design, etc. have become areas of active research. HAMR has two controlled degrees-of-freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level-ground running (5-10 BL/s). These tasks demonstrate the ability of a single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with uncontrolled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and the choice of objective function have contributed to the shape of these optimal leg trajectories.

  14. Involvement of Spearman's g in conceptualisation versus execution of complex tasks.

    PubMed

    Carroll, Ellen L; Bright, Peter

    2016-10-01

    Strong correlations between measures of fluid intelligence (or Spearman's g) and working memory are widely reported in the literature, but there is considerable controversy concerning the nature of underlying mechanisms driving this relationship. In the four experiments presented here we consider the role of response conflict and task complexity in the context of real-time task execution demands (Experiments 1-3) and also address recent evidence that g confers an advantage at the level of task conceptualisation rather than (or in addition to) task execution (Experiment 4). We observed increased sensitivity of measured fluid intelligence to task performance in the presence (vs. the absence) of response conflict, and this relationship remained when task complexity was reduced. Performance-g correlations were also observed in the absence of response conflict, but only in the context of high task complexity. Further, we present evidence that differences in conceptualisation or 'modelling' of task instructions prior to execution had an important mediating effect on observed correlations, but only when the task encompassed a strong element of response inhibition. Our results suggest that individual differences in ability reflect, in large part, variability in the efficiency with which the relational complexity of task constraints is held in mind. It follows that fluid intelligence may support successful task execution through the construction of effective action plans via optimal allocation of limited resources. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications.

    PubMed

    Lei, Guoqing; Dou, Yong; Wan, Wen; Xia, Fei; Li, Rongchun; Ma, Meng; Zou, Dan

    2012-01-01

    Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs). To the best of our knowledge, no implementation combines both CPUs and extra accelerators, such as GPUs, to accelerate Zuker algorithm applications. In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperative execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is implemented with methods tailored to the CPU and GPU architectures. Experimental results show a speedup of 15.93× over an optimized multi-core SIMD CPU implementation and a 16% performance advantage over an optimized GPU implementation. More than 14% of the sequences are executed on the CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is shown to be promising and can be applied to other bioinformatics applications.
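
    To make the task-allocation idea concrete, here is a minimal illustrative sketch in Python (not the authors' implementation; the speed ratio, the n^3 cost model, and all names are assumptions): sequences are assigned greedily to the CPU or GPU queue so that the two devices finish at roughly the same time.

        # Hypothetical sketch of throughput-proportional task allocation between CPU and GPU.
        # The speed estimates and sequence lengths are illustrative, not from the paper.

        def split_workload(sequences, cpu_speed, gpu_speed):
            """Assign each RNA sequence to the CPU or GPU queue so that the
            estimated finishing times of the two devices are balanced.
            Cost of folding a sequence of length n is taken as ~n**3 (Zuker scaling)."""
            cpu_queue, gpu_queue = [], []
            cpu_load = gpu_load = 0.0
            # Longest sequences first: greedy balancing works better that way.
            for seq in sorted(sequences, key=len, reverse=True):
                cost = len(seq) ** 3
                # Send the sequence to whichever device would finish it sooner.
                if (cpu_load + cost) / cpu_speed <= (gpu_load + cost) / gpu_speed:
                    cpu_queue.append(seq)
                    cpu_load += cost
                else:
                    gpu_queue.append(seq)
                    gpu_load += cost
            return cpu_queue, gpu_queue

        # Example: GPU assumed ~6x faster than one CPU core for this kernel.
        seqs = ["ACGU" * n for n in (50, 120, 300, 80, 40, 500)]
        cpu_q, gpu_q = split_workload(seqs, cpu_speed=1.0, gpu_speed=6.0)
        print(len(cpu_q), "sequences on CPU,", len(gpu_q), "on GPU")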

  16. Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.

    PubMed

    Trieu, Hoang T; Nguyen, Hung T; Willey, Keith

    2008-01-01

    In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for each sub-task. A Bayesian framework is used to determine the optimal neural network structure in each case, and the networks are then trained under Bayesian supervision. Experimental results showed that, compared with the VFH algorithm, our neural networks navigated a smoother path, following a near-optimal trajectory.

  17. Efficient estimation of ideal-observer performance in classification tasks involving high-dimensional complex backgrounds

    PubMed Central

    Park, Subok; Clarkson, Eric

    2010-01-01

    The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, Detection, Estimation, and Modulation Theory, Part I (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires a full description of the unknown statistical properties of the high-dimensional, complex data arising in real-life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski et al. [J. Opt. Soc. Am. A 20, 430 (2003)] and by Park et al. [J. Opt. Soc. Am. A 24, B136 (2007) and IEEE Trans. Med. Imaging 28, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms had the disadvantage of long computation times. We propose a fast MCMC for real-time estimation of the likelihood ratio for the CIO. Our simulation results show that our method has the potential to speed up the estimation of ideal-observer performance in tasks involving complex data when efficient channels are used for the CIO. PMID:19884916
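
    The marginalization-over-backgrounds idea behind these MCMC estimators can be sketched on a one-dimensional toy problem (everything below, including the Gaussian background prior, noise level, and signal amplitude, is an illustrative assumption, not the cited method): the likelihood ratio is written as a posterior average of per-background ratios under the signal-absent hypothesis and estimated with a Metropolis sampler.

        # Toy Metropolis estimator of the ideal-observer likelihood ratio with a
        # random background, following the marginalization-over-backgrounds idea.
        # All distributions and parameters below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        sigma = 1.0      # measurement noise std
        tau = 2.0        # background prior std
        signal = 1.5     # known signal amplitude under H1

        def log_post_h0(b, g):
            # log p(g | b, H0) + log p(b), up to a constant
            return -0.5 * ((g - b) / sigma) ** 2 - 0.5 * (b / tau) ** 2

        def likelihood_ratio(g, n_samples=20000, step=1.0):
            b = 0.0
            lp = log_post_h0(b, g)
            ratios = []
            for _ in range(n_samples):
                b_new = b + step * rng.normal()
                lp_new = log_post_h0(b_new, g)
                if np.log(rng.uniform()) < lp_new - lp:   # Metropolis accept/reject
                    b, lp = b_new, lp_new
                # ratio p(g | b, H1) / p(g | b, H0) for the current background sample
                r = np.exp(-0.5 * ((g - b - signal) / sigma) ** 2
                           + 0.5 * ((g - b) / sigma) ** 2)
                ratios.append(r)
            return float(np.mean(ratios))

        print(likelihood_ratio(g=2.0))   # > 1 suggests "signal present" is more likely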

  18. Evaluating Suit Fit Using Performance Degradation

    NASA Technical Reports Server (NTRS)

    Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar

    2012-01-01

    The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.

  19. Technology forecasting for space communication. Task one report: Cost and weight tradeoff studies for EOS and TDRS

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Weight and cost optimized EOS communication links are determined for 2.25, 7.25, 14.5, 21, and 60 GHz systems and for a 10.6 micron homodyne detection laser system. EOS to ground links are examined for 556, 834, and 1112 km EOS orbits, with ground terminals at the Network Test and Tracking Facility and at Goldstone. Optimized 21 GHz and 10.6 micron links are also examined. For the EOS to Tracking and Data Relay Satellite to ground link, signal-to-noise ratios of the uplink and downlink are also optimized for minimum overall cost or spaceborne weight. Finally, the optimized 21 GHz EOS to ground link is determined for various precipitation rates. All system performance parameters and mission dependent constraints are presented, as are the system cost and weight functional dependencies. The features and capabilities of the computer program to perform the foregoing analyses are described.

  20. Development of a novel optimization tool for electron linacs inspired by artificial intelligence techniques in video games

    NASA Astrophysics Data System (ADS)

    Meier, E.; Biedron, S. G.; LeBlanc, G.; Morgan, M. J.

    2011-03-01

    This paper reports the results of an advanced algorithm for the optimization of electron beam parameters in Free Electron Laser (FEL) linacs. In the novel approach presented in this paper, the system uses state-of-the-art developments in video games to mimic an operator's decisions to perform an optimization task when no prior knowledge, other than constraints on the actuators, is available. The system was tested for the simultaneous optimization of the energy spread and the transmission of the Australian Synchrotron Linac. The proposed system successfully increased the transmission of the machine from 90% to 97% and decreased the energy spread of the beam from 1.04% to 0.91%. Results of a control experiment performed at the new FERMI@Elettra FEL are also reported, suggesting the adaptability of the scheme for beam-based control.

  1. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configuration evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined system performance at six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  2. Self-generated strategic behavior in an ecological shopping task.

    PubMed

    Bottari, Carolina; Wai Shun, Priscilla Lam; Dorze, Guylaine Le; Gosselin, Nadia; Dawson, Deirdre

    2014-01-01

    OBJECTIVES. The use of cognitive strategies optimizes performance in complex everyday tasks such as shopping. This exploratory study examined the cognitive strategies people with traumatic brain injury (TBI) effectively use in an unstructured, real-world situation. METHOD. A behavioral analysis of the self-generated strategic behaviors of 5 people with severe TBI using videotaped sessions of an ecological shopping task (Instrumental Activities of Daily Living Profile) was performed. RESULTS. All participants used some form of cognitive strategy in an unstructured real-world shopping task, although the number, type, and degree of effectiveness of the strategies in leading to goal attainment varied. The most independent person used the largest number and a broader repertoire of self-generated strategies. CONCLUSION. These results provide initial evidence that occupational therapists should examine the use of self-generated cognitive strategies in real-world contexts as a potential means of guiding therapy aimed at improving independence in everyday activities for people with TBI. Copyright © 2014 by the American Occupational Therapy Association, Inc.

  3. Temporal Comparison Between NIRS and EEG Signals During a Mental Arithmetic Task Evaluated with Self-Organizing Maps.

    PubMed

    Oyama, Katsunori; Sakatani, Kaoru

    2016-01-01

    Simultaneous monitoring of brain activity with near-infrared spectroscopy and electroencephalography allows spatiotemporal reconstruction of the hemodynamic response regarding the concentration changes in oxyhemoglobin and deoxyhemoglobin that are associated with recorded brain activity such as cognitive functions. However, the accuracy of state estimation during mental arithmetic tasks often differs depending on the segment length used for sampling the NIRS and EEG signals. This study compared the results of a self-organizing map and ANOVA, which were both used to assess the accuracy of state estimation. We conducted an experiment with a mental arithmetic task performed by 10 participants. Segment lengths of 30 s, 1 min, and 2 min were compared for the observation of NIRS and EEG signals in each time frame. The optimal segment lengths differed between the NIRS and EEG signals when classifying feature vectors into the states of performing a mental arithmetic task and resting.

  4. Future applications of associative processor systems to operational KSC systems for optimizing cost and enhancing performance characteristics

    NASA Technical Reports Server (NTRS)

    Perkinson, J. A.

    1974-01-01

    The application of associative memory processor equipment to conventional host-processor systems is discussed. Efforts were made to demonstrate how such application relieves the task burden of conventional systems and enhances system speed and efficiency. Data cover comparative theoretical performance analysis, demonstration of expanded growth capabilities, and demonstrations of actual hardware in a simulated environment.

  5. WE-EF-207-03: Design and Optimization of a CBCT Head Scanner for Detection of Acute Intracranial Hemorrhage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Sisniega, A; Zbijewski, W

    Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point-of-care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of x-ray scatter correction necessary for detection of low-contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ∼0.6 mm was sufficient for spatial resolution requirements in ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.

  6. Predictive Cache Modeling and Analysis

    DTIC Science & Technology

    2011-11-01

    metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed... Cache Miss Minimization Technology. To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended to our... it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing

  7. Muscle function in glenohumeral joint stability during lifting task.

    PubMed

    Blache, Yoann; Begon, Mickaël; Michaud, Benjamin; Desmoulins, Landry; Allard, Paul; Dal Maso, Fabien

    2017-01-01

    Ensuring glenohumeral stability during repetitive lifting tasks is a key factor in reducing the risk of shoulder injuries. Nevertheless, the literature reveals a gap concerning the assessment of the muscles that ensure glenohumeral stability during specific lifting tasks. Therefore, the purpose of this study was to assess the stabilization function of shoulder muscles during a lifting task. Kinematics and muscle electromyograms (n = 9) were recorded from 13 healthy adults during a bi-manual lifting task performed from the hip to the shoulder level. A generic upper-limb OpenSim model was implemented to simulate glenohumeral stability and instability by performing static optimizations with and without glenohumeral stability constraints. This procedure made it possible to compute the level of shoulder muscle activity and forces in the two conditions. Without the stability constraint, the simulated movement was unstable during 74%±16% of the time. The force of the supraspinatus was significantly increased by 107% (p<0.002) when the glenohumeral stability constraint was implemented. The increased supraspinatus force led to a greater compressive force (p<0.001) and a smaller shear force (p<0.001), which contributed to improved glenohumeral stability. It was concluded that the supraspinatus may be the main contributor to glenohumeral stability during lifting tasks.

  8. Muscle function in glenohumeral joint stability during lifting task

    PubMed Central

    Begon, Mickaël; Michaud, Benjamin; Desmoulins, Landry; Allard, Paul

    2017-01-01

    Ensuring glenohumeral stability during repetitive lifting tasks is a key factor in reducing the risk of shoulder injuries. Nevertheless, the literature reveals a gap concerning the assessment of the muscles that ensure glenohumeral stability during specific lifting tasks. Therefore, the purpose of this study was to assess the stabilization function of shoulder muscles during a lifting task. Kinematics and muscle electromyograms (n = 9) were recorded from 13 healthy adults during a bi-manual lifting task performed from the hip to the shoulder level. A generic upper-limb OpenSim model was implemented to simulate glenohumeral stability and instability by performing static optimizations with and without glenohumeral stability constraints. This procedure made it possible to compute the level of shoulder muscle activity and forces in the two conditions. Without the stability constraint, the simulated movement was unstable during 74%±16% of the time. The force of the supraspinatus was significantly increased by 107% (p<0.002) when the glenohumeral stability constraint was implemented. The increased supraspinatus force led to a greater compressive force (p<0.001) and a smaller shear force (p<0.001), which contributed to improved glenohumeral stability. It was concluded that the supraspinatus may be the main contributor to glenohumeral stability during lifting tasks. PMID:29244838

  9. Non-negative Matrix Factorization and Co-clustering: A Promising Tool for Multi-tasks Bearing Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Shen, Fei; Chen, Chao; Yan, Ruqiang

    2017-05-01

    Classical bearing fault diagnosis methods, being designed for one specific task, typically focus on the effectiveness of the extracted features and the final diagnostic performance. However, most of these approaches suffer from inefficiency when multiple tasks exist, especially in a real-time diagnostic scenario. A fault diagnosis method based on Non-negative Matrix Factorization (NMF) and a co-clustering strategy is proposed to overcome this limitation. Firstly, high-dimensional matrices are constructed using Short-Time Fourier Transform (STFT) features, where the dimension of each matrix equals the number of target tasks. Then, the NMF algorithm is carried out to obtain different components in each dimension direction through optimized matching criteria, such as Euclidean distance and divergence distance. Finally, a co-clustering technique based on information entropy is utilized to classify each component. To verify the effectiveness of the proposed approach, a series of bearing data sets were analysed in this research. The tests indicated that although the diagnostic performance on a single task is comparable to traditional clustering methods such as the K-means algorithm and the Gaussian Mixture Model, the accuracy and computational efficiency in multi-task fault diagnosis are improved.
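
    A simplified sketch of this kind of pipeline, assuming SciPy and scikit-learn and synthetic stand-in signals (this is not the authors' implementation): STFT magnitude features are factorized with NMF and the resulting activations are co-clustered.

        # Simplified sketch (not the authors' implementation) of the pipeline:
        # STFT magnitude features -> non-negative matrix factorization -> co-clustering.
        import numpy as np
        from scipy.signal import stft
        from sklearn.decomposition import NMF
        from sklearn.cluster import SpectralCoclustering

        rng = np.random.default_rng(1)
        # Stand-in for vibration signals: rows = segments, here just synthetic noise.
        signals = rng.normal(size=(40, 2048))

        # Build a non-negative feature matrix: one row of stacked |STFT| values per segment.
        features = np.vstack([np.abs(stft(x, nperseg=256)[2]).ravel() for x in signals])

        # NMF extracts a small number of non-negative components per segment.
        nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
        activations = nmf.fit_transform(features)      # (segments x components)

        # Co-clustering groups segments and components jointly into candidate classes.
        model = SpectralCoclustering(n_clusters=3, random_state=0)
        model.fit(activations + 1e-9)                  # strictly positive input
        print(model.row_labels_)                       # cluster label for each segment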

  10. TU-EF-204-03: Task-Based KV and MAs Optimization for Radiation Dose Reduction in CT: From FBP to Statistical Model-Based Iterative Reconstruction (MBIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez-Cardona, D; Li, K; Lubner, M G

    Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of detectability index (d') for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d' values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d', the optimal kV-mA can be determined from the crossing between the d' contour and the dose contour that corresponds to the minimum dose. Results: Taking d'=16 as an example: the kV-mAs combinations on the measured iso-d' line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where values in the parentheses are measured dose values. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
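
    The constrained selection described in the Methods reduces to a small grid search once the iso-detectability and iso-dose maps are measured. The sketch below uses synthetic placeholder surfaces (not the study's measurements): among all (kV, mAs) points meeting the prescribed d', it returns the one with the lowest dose.

        # Minimal sketch of the constrained selection described above: among all
        # (kV, mAs) combinations whose measured detectability d' meets the prescribed
        # level, pick the one with the lowest dose.  The numbers are placeholders,
        # not the measured values from the study.
        import numpy as np

        kv_levels  = [80, 100, 120, 140]
        mas_levels = np.linspace(50, 750, 15)

        # d_prime[i, j] and dose[i, j] would come from phantom measurements at
        # kv_levels[i], mas_levels[j]; here they are synthetic monotone surrogates.
        dose    = np.outer([0.6, 1.0, 1.6, 2.4], mas_levels) / 30.0        # mGy
        d_prime = 4.0 * np.sqrt(dose) * np.array([1.3, 1.1, 1.0, 0.9])[:, None]

        def optimal_setting(d_target):
            feasible = d_prime >= d_target                  # iso-d' constraint
            if not feasible.any():
                return None
            masked_dose = np.where(feasible, dose, np.inf)  # ignore infeasible points
            i, j = np.unravel_index(np.argmin(masked_dose), dose.shape)
            return kv_levels[i], mas_levels[j], dose[i, j]

        print(optimal_setting(d_target=16.0))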

  11. Performing a reaching task with one arm while adapting to a visuomotor rotation with the other can lead to complete transfer of motor learning across the arms

    PubMed Central

    Lei, Yuming; Binder, Jeffrey R.

    2015-01-01

    The extent to which motor learning is generalized across the limbs is typically very limited. Here, we investigated how two motor learning hypotheses could be used to enhance the extent of interlimb transfer. According to one hypothesis, we predicted that reinforcement of successful actions by providing binary error feedback regarding task success or failure, in addition to terminal error feedback, during initial training would increase the extent of interlimb transfer following visuomotor adaptation (experiment 1). According to the other hypothesis, we predicted that performing a reaching task repeatedly with one arm without providing performance feedback (which prevented learning the task with this arm), while concurrently adapting to a visuomotor rotation with the other arm, would increase the extent of transfer (experiment 2). Results indicate that providing binary error feedback, compared with continuous visual feedback that provided movement direction and amplitude information, had no influence on the extent of transfer. In contrast, repeatedly performing (but not learning) a specific task with one arm while visuomotor adaptation occurred with the other arm led to nearly complete transfer. This suggests that the absence of motor instances associated with specific effectors and task conditions is the major reason for limited interlimb transfer and that reinforcement of successful actions during initial training is not beneficial for interlimb transfer. These findings indicate crucial contributions of effector- and task-specific motor instances, which are thought to underlie (a type of) model-free learning, to optimal motor learning and interlimb transfer. PMID:25632082

  12. Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas

    2010-01-01

    Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
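
    A compact sketch of HMM-based task prediction in this spirit, assuming the third-party hmmlearn package and synthetic trajectories (the original work's models and data are not reproduced here): one Gaussian HMM is trained per task, and a new trajectory prefix is attributed to the task model with the highest likelihood.

        # Hedged sketch: one Gaussian HMM per tele-operation task, predict the task
        # of an unlabeled trajectory prefix by maximum likelihood.  Assumes the
        # third-party `hmmlearn` package; data here is synthetic.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)

        def synthetic_trajectories(offset, n=20, length=50):
            """Stand-in for recorded 3-D hand positions during one task."""
            return [np.cumsum(rng.normal(offset, 0.1, size=(length, 3)), axis=0) for _ in range(n)]

        tasks = {"reach": synthetic_trajectories(0.05), "retract": synthetic_trajectories(-0.05)}

        models = {}
        for name, trajs in tasks.items():
            X = np.vstack(trajs)
            lengths = [len(t) for t in trajs]
            m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)                 # one HMM per task
            models[name] = m

        prefix = synthetic_trajectories(0.05, n=1, length=15)[0]   # partial observation
        predicted = max(models, key=lambda k: models[k].score(prefix))
        print("predicted task:", predicted)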

  13. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it for efficient path planning of mobile robots. The key challenge in concurrently pursuing these two goals is how to make an optimal, or near-optimal, decision despite the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method that combines mixture-of-experts task learning and attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space is performed for a learning task examined in the Webots simulator. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  14. Iowa Gambling Task with non-clinical participants: effects of using real + virtual cards and additional trials

    PubMed Central

    Overman, William H.; Pierce, Allison

    2013-01-01

    Performance on the Iowa Gambling Task (IGT) in clinical populations can be interpreted only in relation to established baseline performance in normal populations. As in all comparisons of assessment tools, the normal baseline must reflect performance under conditions in which subjects can function at their best levels. In this review, we show that a number of variables enhance IGT performance in non-clinical participants. First, optimal performance is produced by having participants turn over real cards while viewing virtual cards on a computer screen. The use of only virtual cards results in significantly lower performance than the combination of real + virtual cards. Secondly, administration of more than 100 trials also enhances performance. When using the real/virtual card procedure, performance is shown to significantly increase from early adolescence through young adulthood. Under these conditions young (mean age 19 years) and older (mean age 59 years) adults perform equally. Females, as a group, score lower than males because females tend to choose cards from high-frequency-of-gain Deck B. Groups of females with high or low gonadal hormones perform equally. Concurrent tasks, e.g., presentation of aromas, decrease performance in males. Age and gender effects are discussed in terms of a dynamic between testosterone and orbital prefrontal cortex. PMID:24376431

  15. Energy aware swarm optimization with intercluster search for wireless sensor network.

    PubMed

    Thilagavathi, Shanmugasundaram; Geetha, Bhavani Gnanasambandan

    2015-01-01

    Wireless sensor networks (WSNs) are emerging as a low-cost, popular solution for many real-world challenges. The low cost enables the deployment of large sensor arrays to perform military and civilian tasks. Generally, WSNs are power constrained due to their unique deployment method, which makes replacement of the battery source difficult. Challenges in WSNs include a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovering the optimal number of clusters and cluster heads (CHs). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS.
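
    A generic binary-PSO sketch for cluster-head selection (illustrative only; it is not BPSO-T or BPSO-EADS and omits the CDS and residual-energy terms): each particle is a bit vector over nodes, and the fitness trades off the number of cluster heads against member-to-head distance.

        # Illustrative binary PSO for cluster-head selection: bit i = 1 means node i
        # is a cluster head; fitness penalizes both CH count and member-to-CH distance.
        import numpy as np

        rng = np.random.default_rng(0)
        nodes = rng.uniform(0, 100, size=(40, 2))      # sensor positions

        def fitness(bits):
            heads = np.flatnonzero(bits)
            if heads.size == 0:
                return 1e9
            # distance from every node to its nearest cluster head, plus a CH-count penalty
            d = np.linalg.norm(nodes[:, None, :] - nodes[None, heads, :], axis=2).min(axis=1)
            return d.mean() + 5.0 * heads.size

        n_particles, n_bits, iters = 20, len(nodes), 100
        x = rng.integers(0, 2, size=(n_particles, n_bits))
        v = np.zeros((n_particles, n_bits))
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(iters):
            r1, r2 = rng.random(v.shape), rng.random(v.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)  # sigmoid rule
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("cluster heads:", np.flatnonzero(gbest))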

  16. The Relevance of Sex Differences in Performance Fatigability

    PubMed Central

    Hunter, Sandra K.

    2016-01-01

    Performance fatigability differs between men and women for a range of fatiguing tasks. Women are usually less fatigable than men and this is most widely described for isometric fatiguing contractions, and some dynamic tasks. The sex difference in fatigability is specific to the task demands so that one mechanism is not universal, including any sex differences in skeletal muscle physiology, muscle perfusion and voluntary activation. However, there are substantial knowledge gaps about the task dependency of the sex differences in fatigability, the involved mechanisms and the relevance to clinical populations and with advanced age. The knowledge gaps are in part due to the significant deficits in the number of women included in performance fatigability studies despite a gradual increase in the inclusion of women over the last 20 years. Therefore, this review 1) provides a rationale for the limited knowledge about sex differences in performance fatigability, 2) summarizes the current knowledge on sex differences in fatigability and the potential mechanisms across a range of tasks, 3) highlights emerging areas of opportunity in clinical populations, and 4) suggests strategies to close the knowledge gap and understand the relevance of sex differences in performance fatigability. The limited understanding of sex differences in fatigability in healthy and clinical populations presents a field ripe with opportunity for high-impact studies. Such studies will inform on the limitations of men and women during athletic endeavors, ergonomic tasks and daily activities. Because fatigability is required for effective neuromuscular adaptation, sex differences in fatigability studies will also inform on optimal strategies for training and rehabilitation in both men and women. PMID:27015385

  17. Optimal Planning and Problem-Solving

    NASA Technical Reports Server (NTRS)

    Clemet, Bradley; Schaffer, Steven; Rabideau, Gregg

    2008-01-01

    CTAEMS MDP Optimal Planner is a problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It also may be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons around uncertainty according to specified probabilities of outcomes using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, to reduce by an order of magnitude the processing time and memory used. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given, initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.

  18. Dynamic whole-body robotic manipulation

    NASA Astrophysics Data System (ADS)

    Abe, Yeuhi; Stephens, Benjamin; Murphy, Michael P.; Rizzi, Alfred A.

    2013-05-01

    The creation of dynamic manipulation behaviors for high degree of freedom, mobile robots will allow them to accomplish increasingly difficult tasks in the field. We are investigating how the coordinated use of the body, legs, and integrated manipulator, on a mobile robot, can improve the strength, velocity, and workspace when handling heavy objects. We envision that such a capability would aid in a search and rescue scenario when clearing obstacles from a path or searching a rubble pile quickly. Manipulating heavy objects is especially challenging because the dynamic forces are high and a legged system must coordinate all its degrees of freedom to accomplish tasks while maintaining balance. To accomplish these types of manipulation tasks, we use trajectory optimization techniques to generate feasible open-loop behaviors for our 28 dof quadruped robot (BigDog) by planning trajectories in a 13 dimensional space. We apply the Covariance Matrix Adaptation (CMA) algorithm to solve for trajectories that optimize task performance while also obeying important constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop behaviors are then used to generate desired feed-forward body forces and foot step locations, which enable tracking on the robot. Some hardware results for cinderblock throwing are demonstrated on the BigDog quadruped platform augmented with a human-arm-like manipulator. The results are analogous to how a human athlete maximizes distance in the discus event by performing a precise sequence of choreographed steps.
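
    The following sketch shows the general pattern of CMA-based trajectory optimization with penalized constraints, assuming the open-source cma package; the Fourier parameterization, objective, and limits are illustrative stand-ins rather than BigDog's model.

        # Hedged sketch of CMA-based trajectory optimization: a trajectory is
        # parameterized by a few Fourier coefficients, scored by a toy task objective,
        # and penalized for violating an actuator limit.  Uses the open-source `cma`
        # package; the objective and limits are illustrative.
        import numpy as np
        import cma

        t = np.linspace(0.0, 1.0, 200)          # one cycle of the behavior

        def trajectory(params):
            a1, b1, a2, b2 = params
            return (a1 * np.sin(2 * np.pi * t) + b1 * np.cos(2 * np.pi * t)
                    + a2 * np.sin(4 * np.pi * t) + b2 * np.cos(4 * np.pi * t))

        def cost(params):
            q = trajectory(params)
            qd = np.gradient(q, t)
            task_term = -q.max()                          # e.g. maximize peak extension
            limit_violation = np.clip(np.abs(qd) - 8.0, 0.0, None).sum()  # velocity limit
            return task_term + 10.0 * limit_violation     # penalty method

        es = cma.CMAEvolutionStrategy(4 * [0.1], 0.5)
        while not es.stop():
            solutions = es.ask()
            es.tell(solutions, [cost(s) for s in solutions])
        print("best parameters:", es.result.xbest)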

  19. NEEMO 14: Evaluation of Human Performance for Rover, Cargo Lander, Crew Lander, and Exploration Tasks in Simulated Partial Gravity

    NASA Technical Reports Server (NTRS)

    Chappell, Steven P.; Abercromby, Andrew F.; Gernhardt, Michael L.

    2011-01-01

    The ultimate success of future human space exploration missions is dependent on the ability to perform extravehicular activity (EVA) tasks effectively, efficiently, and safely, whether those tasks represent a nominal mode of operation or a contingency capability. To optimize EVA systems for the best human performance, it is critical to study the effects of varying key factors such as suit center of gravity (CG), suit mass, and gravity level. During the 2-week NASA Extreme Environment Mission Operations (NEEMO) 14 mission, four crewmembers performed a series of EVA tasks under different simulated EVA suit configurations and used full-scale mockups of a Space Exploration Vehicle (SEV) rover and lander. NEEMO is an underwater spaceflight analog that allows a true mission-like operational environment and uses buoyancy effects and added weight to simulate different gravity levels. Quantitative and qualitative data collected during NEEMO 14, as well as from spacesuit tests in parabolic flight and with overhead suspension, are being used to directly inform ongoing hardware and operations concept development of the SEV, exploration EVA systems, and future EVA suits. OBJECTIVE: To compare human performance across different weight and CG configurations. METHODS: Four subjects were weighed out to simulate reduced gravity and wore either a specially designed rig to allow adjustment of CG or a PLSS mockup. Subjects completed tasks including level ambulation, incline/decline ambulation, standing from the kneeling and prone position, picking up objects, shoveling, ladder climbing, incapacitated crewmember handling, and small and large payload transfer. Subjective compensation, exertion, task acceptability, and duration data as well as photo and video were collected. RESULTS: There appear to be interactions between CG, weight, and task. CGs nearest the subject's natural CG are the most predictable in terms of acceptable performance across tasks. Future research should focus on understanding the interactions between CG, mass, and subject differences.

  20. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum-likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.

  1. The balanced mind: the variability of task-unrelated thoughts predicts error monitoring

    PubMed Central

    Allen, Micah; Smallwood, Jonathan; Christensen, Joanna; Gramm, Daniel; Rasmussen, Beinta; Jensen, Christian Gaden; Roepstorff, Andreas; Lutz, Antoine

    2013-01-01

    Self-generated thoughts unrelated to ongoing activities, also known as “mind-wandering,” make up a substantial portion of our daily lives. Reports of such task-unrelated thoughts (TUTs) predict both poor performance on demanding cognitive tasks and blood-oxygen-level-dependent (BOLD) activity in the default mode network (DMN). However, recent findings suggest that TUTs and the DMN can also facilitate metacognitive abilities and related behaviors. To further understand these relationships, we examined the influence of subjective intensity, ruminative quality, and variability of mind-wandering on response inhibition and monitoring, using the Error Awareness Task (EAT). We expected to replicate links between TUT and reduced inhibition, and explored whether variance in TUT would predict improved error monitoring, reflecting a capacity to balance between internal and external cognition. By analyzing BOLD responses to subjective probes and the EAT, we dissociated contributions of the DMN, executive, and salience networks to task performance. While both response inhibition and online TUT ratings modulated BOLD activity in the medial prefrontal cortex (mPFC) of the DMN, the former recruited a more dorsal area implying functional segregation. We further found that individual differences in mean TUTs strongly predicted EAT stop accuracy, while TUT variability specifically predicted levels of error awareness. Interestingly, we also observed co-activation of salience and default mode regions during error awareness, supporting a link between monitoring and TUTs. Altogether, our results suggest that although TUT is detrimental to task performance, fluctuations in attention between self-generated and external task-related thought are a characteristic of individuals with greater metacognitive monitoring capacity. Achieving a balance between internally and externally oriented thought may thus aid individuals in optimizing their task performance. PMID:24223545

  2. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  3. Who are the real bird brains? Qualitative differences in behavioral flexibility between dogs (Canis familiaris) and pigeons (Columba livia).

    PubMed

    Laude, Jennifer R; Pattison, Kristina F; Rayburn-Reeves, Rebecca M; Michler, Daniel M; Zentall, Thomas R

    2016-01-01

    Pigeons given a simultaneous spatial discrimination reversal, in which a single reversal occurs at the midpoint of each session, consistently show anticipation prior to the reversal as well as perseveration after the reversal, suggesting that they use a less effective cue (time or trial number into the session) than what would be optimal to maximize reinforcement (local feedback from the most recent trials). In contrast, rats (Rattus norvegicus) and humans show near-optimal reversal learning on this task. To determine whether this is a general characteristic of mammals, in the present research, pigeons (Columba livia) and dogs (Canis familiaris) were tested with a simultaneous spatial discrimination mid-session reversal. Overall, dogs performed the task more poorly than pigeons. Interestingly, both pigeons and dogs employed what resembled a timing strategy. However, dogs showed greater perseverative errors, suggesting that they may have relatively poorer working memory and inhibitory control with this task. The greater efficiency shown by pigeons with this task suggests they are better able to time and use the feedback from their preceding choice as the basis of their future choice, highlighting what may be a qualitative difference between the species.

  4. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
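
    A simplified sketch of the adjustable search-radius and minimum-separation parameters described above, on a synthetic image (all values are placeholders, not the paper's observer): a matched-filter response is evaluated within the radius around the nominal lesion location, and candidate suspicious sites are kept only if they are at least a minimum separation apart.

        # Simplified sketch of a visual-search style observer for a location-known task.
        import numpy as np
        from scipy.signal import fftconvolve

        rng = np.random.default_rng(0)

        def suspicious_sites(image, template, center, radius, min_sep, max_sites=3):
            ys, xs = np.indices(image.shape)
            response = fftconvolve(image, template[::-1, ::-1], mode="same")
            # restrict attention to the search radius around the nominal location
            response[(ys - center[0]) ** 2 + (xs - center[1]) ** 2 > radius ** 2] = -np.inf
            sites = []
            work = response.copy()
            while len(sites) < max_sites and np.isfinite(work).any():
                yy, xx = np.unravel_index(np.argmax(work), work.shape)
                sites.append(((yy, xx), response[yy, xx]))
                # suppress everything closer than min_sep to the chosen site
                work[(ys - yy) ** 2 + (xs - xx) ** 2 <= min_sep ** 2] = -np.inf
            return sites

        template = np.exp(-((np.indices((11, 11)) - 5) ** 2).sum(axis=0) / 8.0)
        image = rng.normal(size=(64, 64))
        image[30:41, 30:41] += 0.8 * template            # signal-present example
        sites = suspicious_sites(image, template, center=(35, 35), radius=6, min_sep=4)
        print(sites[0][1])      # top response: the decision variable for 2AFC scoring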

  5. Genetic learning in rule-based and neural systems

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
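
    For readers unfamiliar with the mechanics, a minimal GA sketch (illustrative toy objective, not the LCS described above): binary chromosomes, fitness-proportional selection, one-point crossover, and bit-flip mutation.

        # Minimal genetic algorithm sketch: selection, crossover, mutation,
        # maximizing a simple illustrative objective.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pop, n_bits, n_gen = 30, 20, 60

        def fitness(bits):
            # toy objective: reward bits set in the first half, penalize in the second
            return bits[:10].sum() - 0.5 * bits[10:].sum()

        pop = rng.integers(0, 2, size=(n_pop, n_bits))
        for _ in range(n_gen):
            f = np.array([fitness(ind) for ind in pop], dtype=float)
            p = f - f.min() + 1e-9
            p /= p.sum()
            parents = pop[rng.choice(n_pop, size=n_pop, p=p)]       # roulette-wheel selection
            children = parents.copy()
            for i in range(0, n_pop - 1, 2):                        # one-point crossover
                cut = rng.integers(1, n_bits)
                children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                            parents[i, cut:].copy())
            mutate = rng.random(children.shape) < 0.01              # bit-flip mutation
            pop = np.where(mutate, 1 - children, children)

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print(best, fitness(best))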

  6. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    PubMed

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. This occurs when the camera faces back at the surgeon in the opposite direction from which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT or MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may be translated into improved performance in the operating room when faced with reverse alignment situations. Minimal lab training can account for drastic adaptation to this environment.

  7. Human problem solving performance in a fault diagnosis task

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1978-01-01

    It is proposed that humans in automated systems will be asked to assume the role of troubleshooter or problem solver and that the problems which they will be asked to solve in such systems will not be amenable to rote solution. The design of visual displays for problem solving in such situations is considered, and the results of two experimental investigations of human problem solving performance in the diagnosis of faults in graphically displayed network problems are discussed. The effects of problem size, forced-pacing, computer aiding, and training are considered. Results indicate that human performance deviates from optimality as problem size increases. Forced-pacing appears to cause the human to adopt fairly brute force strategies, as compared to those adopted in self-paced situations. Computer aiding substantially lessens the number of mistaken diagnoses by performing the bookkeeping portions of the task.

  8. Adaptive Integration and Optimization of Automated and Neural Processing Systems - Establishing Neural and Behavioral Benchmarks of Optimized Performance

    DTIC Science & Technology

    2012-07-01

    detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before...

  9. Solving Optimization Problems with Spreadsheets

    ERIC Educational Resources Information Center

    Beigie, Darin

    2017-01-01

    Spreadsheets provide a rich setting for first-year algebra students to solve problems. Individual spreadsheet cells play the role of variables, and creating algebraic expressions for a spreadsheet to perform a task allows students to achieve a glimpse of how mathematics is used to program a computer and solve problems. Classic optimization…

  10. Corpus-Based Optimization of Language Models Derived from Unification Grammars

    NASA Technical Reports Server (NTRS)

    Rayner, Manny; Hockey, Beth Ann; James, Frankie; Bratt, Harry; Bratt, Elizabeth O.; Gawron, Mark; Goldwater, Sharon; Dowding, John; Bhagat, Amrita

    2000-01-01

    We describe a technique which makes it feasible to improve the performance of a language model derived from a manually constructed unification grammar, using low-quality untranscribed speech data and a minimum of human annotation. The method is demonstrated on a medium-vocabulary spoken-language command-and-control task.

  11. Resting-state connectivity predicts visuo-motor skill learning.

    PubMed

    Manuel, Aurélie L; Guggisberg, Adrian G; Thézé, Raphaël; Turri, Francesco; Schnider, Armin

    2018-08-01

    Spontaneous brain activity at rest is highly organized even when the brain is not explicitly engaged in a task. Functional connectivity (FC) in the alpha frequency band (α, 8-12 Hz) during rest is associated with improved performance on various cognitive and motor tasks. In this study we explored how FC is associated with visuo-motor skill learning and offline consolidation. We tested two hypotheses by which resting-state FC might achieve its impact on behavior: preparing the brain for an upcoming task or consolidating training gains. Twenty-four healthy participants were assigned to one of two groups: The experimental group (n = 12) performed a computerized mirror-drawing task. The control group (n = 12) performed a similar task but with concordant cursor direction. High-density 156-channel resting-state EEG was recorded before and after learning. Subjects were tested for offline consolidation 24h later. The Experimental group improved during training and showed offline consolidation. Increased α-FC between the left superior parietal cortex and the rest of the brain before training and decreased α-FC in the same region after training predicted learning. Resting-state FC following training did not predict offline consolidation and none of these effects were present in controls. These findings indicate that resting-state alpha-band FC is primarily implicated in providing optimal neural resources for upcoming tasks. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem, and the assignment optimizing throughput subject to a response time constraint can be found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
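
    As a concrete illustration of the response-time objective, the sketch below applies the same O(np^2) dynamic-programming idea to the simplest case of a linear pipeline: each task's measured response time for every processor count is tabulated, and the processors are split so that the summed stage response times are minimal. The response-time table and task count are made-up inputs, and the paper's full treatment of general series-parallel graphs and the throughput-constrained variant is not reproduced here.

        # Sketch: assign p processors to a linear pipeline of n tasks to minimize
        # total response time, given measured per-task response times for each
        # processor count. Assumes p >= number of tasks and each resp row lists
        # times for 1..k processors. Illustrative only.

        def optimal_assignment(resp, p):
            """resp[i][k-1] = response time of task i on k processors.
            Returns (minimal total response time, processors per task)."""
            n = len(resp)
            INF = float("inf")
            best = [[INF] * (p + 1) for _ in range(n + 1)]
            choice = [[0] * (p + 1) for _ in range(n + 1)]
            for q in range(p + 1):
                best[n][q] = 0.0
            for i in range(n - 1, -1, -1):
                remaining = n - 1 - i            # tasks after i, each needs >= 1 processor
                for q in range(remaining + 1, p + 1):
                    kmax = min(q - remaining, len(resp[i]))
                    for k in range(1, kmax + 1):
                        cost = resp[i][k - 1] + best[i + 1][q - k]
                        if cost < best[i][q]:
                            best[i][q], choice[i][q] = cost, k
            # recover the processor split from the stored choices
            assign, q = [], p
            for i in range(n):
                k = choice[i][q]
                assign.append(k)
                q -= k
            return best[0][p], assign

        # Hypothetical measured response times for 3 pipeline stages on 1..4 processors.
        resp = [[10.0, 6.0, 4.5, 4.0],
                [8.0, 5.0, 3.8, 3.5],
                [12.0, 7.0, 5.0, 4.2]]
        print(optimal_assignment(resp, 6))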

  13. Design study of wind turbines 50 kW to 3000 kW for electric utility applications: Analysis and design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    In the conceptual design task, several feasible wind generator system (WGS) configurations were evaluated, and the concept offering the lowest energy cost potential and minimum technical risk for utility applications was selected. In the optimization task, the selected concept was optimized utilizing a parametric computer program prepared for this purpose. In the preliminary design task, the optimized selected concept was designed and analyzed in detail. The utility requirements evaluation task examined the economic, operational, and institutional factors affecting the WGS in a utility environment, and provided additional guidance for the preliminary design effort. Results of the conceptual design task indicated that a rotor operating at constant speed, driving an AC generator through a gear transmission, is the most cost-effective WGS configuration. The optimization task results led to the selection of a 500 kW rating for the low power WGS and a 1500 kW rating for the high power WGS.

  14. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed scheduling algorithm variants that aim at optimality and perform well in selecting resources for tasks; however, exploiting the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and obtain its average value. Then, the maximum average is identified, and the task with the maximum average is allocated to the machine that offers the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.

  15. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Many researchers have developed scheduling algorithm variants that aim at optimality and perform well in selecting resources for tasks; however, exploiting the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to sort the list of completion times of each task and obtain its average value. Then, the maximum average is identified, and the task with the maximum average is allocated to the machine that offers the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937
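
    The steps summarized above leave some details open, so the sketch below is only one plausible reading of Sort-Mid: for every unscheduled task, the completion times across machines (machine ready time plus execution time) are sorted and averaged, the task with the largest average is chosen, and it is placed on the machine that finishes it earliest. The expected-time matrix, the use of machine ready times, and the tie-breaking are assumptions, not the authors' published implementation.

        # Rough sketch of one reading of the Sort-Mid heuristic described above.
        # ETC[i][j] = expected execution time of task i on machine j (assumed input).

        def sort_mid(etc):
            n_tasks, n_machines = len(etc), len(etc[0])
            ready = [0.0] * n_machines            # machine availability times
            unscheduled = set(range(n_tasks))
            schedule = {}
            while unscheduled:
                # sorted completion times of each remaining task across machines
                ct = {i: sorted(ready[j] + etc[i][j] for j in range(n_machines))
                      for i in unscheduled}
                # pick the task whose completion-time list has the largest average
                task = max(unscheduled, key=lambda i: sum(ct[i]) / n_machines)
                # place it on the machine that completes it earliest
                machine = min(range(n_machines), key=lambda j: ready[j] + etc[task][j])
                ready[machine] += etc[task][machine]
                schedule[task] = machine
                unscheduled.remove(task)
            return schedule, max(ready)           # assignment and makespan

        etc = [[14, 9, 20], [5, 16, 8], [12, 7, 11], [9, 14, 6]]   # illustrative times
        print(sort_mid(etc))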

  16. Evaluation of CNN as anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2017-03-01

    Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNNs) as MOs. We compare the CNN MO to alternative MOs currently proposed and in use, such as the relevance vector machine based MO and the channelized Hotelling observer (CHO). As the success of CNNs, and other deep learning approaches, is rooted in the availability of large data sets, which is rarely the case in task-performance evaluation of medical imaging systems, we evaluate CNN performance on both large and small training data sets.
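
    A minimal sketch of what a CNN-based model observer can look like for a signal-known-exactly detection task: a small convolutional network is trained to output a test statistic separating signal-present from signal-absent noisy images. The architecture, the synthetic Gaussian-blob signal, and the training settings below are illustrative assumptions, not the network or images evaluated in the paper.

        # Tiny CNN "model observer" trained on synthetic signal-present / signal-absent
        # noisy images; everything here is illustrative.
        import torch
        import torch.nn as nn

        class TinyObserver(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(16, 1),
                )
            def forward(self, x):
                return self.net(x).squeeze(1)     # one test statistic per image

        def make_batch(n=64, size=32, amp=0.8):
            noise = torch.randn(n, 1, size, size)
            labels = torch.randint(0, 2, (n,)).float()
            yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
            signal = amp * torch.exp(-(((xx - size / 2) ** 2 + (yy - size / 2) ** 2) / 20.0))
            return noise + labels.view(-1, 1, 1, 1) * signal, labels

        model = TinyObserver()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()
        for step in range(200):                    # short illustrative training loop
            x, y = make_batch()
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        x, y = make_batch(256)
        acc = ((model(x) > 0).float() == y).float().mean()
        print("holdout accuracy:", round(acc.item(), 3))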

  17. Glass Waste Forms for Oak Ridge Tank Wastes: Fiscal Year 1998 Report for Task Plan SR-16WT-31, Task B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M.K.

    1999-05-10

    Using ORNL information on the characterization of the tank waste sludges, SRTC performed extensive bench-scale vitrification studies using simulants. Several glass systems were tested to determine the optimum glass composition (based on the glass liquidus temperature, viscosity, and durability). This optimum composition will balance waste loading, melt temperature, waste form performance, and disposal requirements. By optimizing the glass composition, a cost savings can be realized during vitrification of the waste. The preferred glass formulation was selected from the bench-scale studies and recommended to ORNL for further testing with samples of actual OR waste tank sludges.

  18. An Acute Bout of Exercise Improves the Cognitive Performance of Older Adults.

    PubMed

    Johnson, Liam; Addamo, Patricia K; Selva Raj, Isaac; Borkoles, Erika; Wyckelsma, Victoria; Cyarto, Elizabeth; Polman, Remco C

    2016-10-01

    There is evidence that an acute bout of exercise confers cognitive benefits, but it is largely unknown what the optimal mode and duration of exercise is and how cognitive performance changes over time after exercise. We compared the cognitive performance of 31 older adults using the Stroop test before, immediately after, and at 30 and 60 min after a 10 and 30 min aerobic or resistance exercise session. Heart rate and feelings of arousal were also measured before, during, and after exercise. We found that, independent of mode or duration of exercise, the participants improved in the Stroop Inhibition task immediately postexercise. We did not find that exercise influenced the performance of the Stroop Color or Stroop Word Interference tasks. Our findings suggest that an acute bout of exercise can improve cognitive performance and, in particular, the more complex executive functioning of older adults.

  19. A comparison of human performance in figural and navigational versions of the traveling salesman problem.

    PubMed

    Blaser, R E; Wilber, Julie

    2013-11-01

    Performance on a typical pen-and-paper (figural) version of the Traveling Salesman Problem was compared to performance on a room-sized navigational version of the same task. Nine configurations were designed to examine the use of the nearest-neighbor (NN), cluster approach, and convex-hull strategies. Performance decreased with an increasing number of nodes internal to the hull, and improved when the NN strategy produced the optimal path. There was no overall difference in performance between figural and navigational task modalities. However, there was an interaction between modality and configuration, with evidence that participants relied more heavily on the NN strategy in the figural condition. Our results suggest that participants employed similar, but not identical, strategies when solving figural and navigational versions of the problem. Surprisingly, there was no evidence that participants favored global strategies in the figural version and local strategies in the navigational version.
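
    For reference, the nearest-neighbor strategy against which participants' paths were compared can be stated in a few lines: from the current node, always move to the closest unvisited node. The coordinates below are illustrative and do not reproduce the study's configurations.

        # Minimal nearest-neighbor (NN) tour construction for a small point set.
        import math

        def nearest_neighbor_tour(points, start=0):
            unvisited = set(range(len(points)))
            unvisited.remove(start)
            tour, current = [start], start
            while unvisited:
                nxt = min(unvisited,
                          key=lambda j: math.dist(points[current], points[j]))
                tour.append(nxt)
                unvisited.remove(nxt)
                current = nxt
            return tour

        def tour_length(points, tour):
            # closed tour: return to the starting node
            return sum(math.dist(points[tour[k]], points[tour[(k + 1) % len(tour)]])
                       for k in range(len(tour)))

        pts = [(0, 0), (2, 1), (5, 0), (6, 3), (3, 4), (1, 3)]   # illustrative nodes
        t = nearest_neighbor_tour(pts)
        print(t, round(tour_length(pts, t), 2))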

  20. Predicting explorative motor learning using decision-making and motor noise.

    PubMed

    Chen, Xiuli; Mohr, Kieran; Galea, Joseph M

    2017-04-01

    A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant's level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning.

  1. Predicting explorative motor learning using decision-making and motor noise

    PubMed Central

    Galea, Joseph M.

    2017-01-01

    A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant’s level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning. PMID:28437451
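
    The following toy computation is not the authors' POMDP model; it only illustrates the ingredient the abstract emphasizes, namely that a decision (here, where to aim) should be adjusted for motor execution noise. With a reward region partly overlapped by a penalty region, the aim point that maximizes expected reward shifts away from the penalty as the assumed Gaussian motor noise grows. The reward geometry and noise levels are made up.

        # Toy illustration of adjusting a decision for motor (execution) noise.
        import numpy as np

        def expected_reward(aim, sigma, n=50_000, rng=np.random.default_rng(0)):
            hits = aim + rng.normal(0.0, sigma, n)                   # endpoints under motor noise
            reward = np.where(np.abs(hits) < 1.0, 1.0, 0.0)          # target region: |x| < 1
            penalty = np.where((hits > 0.5) & (hits < 1.5), -2.0, 0.0)  # overlapping penalty strip
            return float(np.mean(reward + penalty))

        for sigma in (0.1, 0.3, 0.6):
            aims = np.linspace(-1.0, 1.0, 81)
            best = max(aims, key=lambda a: expected_reward(a, sigma))
            print(f"motor noise sigma={sigma}: best aim ~ {best:.2f}")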

  2. Orbital prefrontal cortex is required for object-in-place scene memory but not performance of a strategy implementation task.

    PubMed

    Baxter, Mark G; Gaffan, David; Kyriazis, Diana A; Mitchell, Anna S

    2007-10-17

    The orbital prefrontal cortex is thought to be involved in behavioral flexibility in primates, and human neuroimaging studies have identified orbital prefrontal activation during episodic memory encoding. The goal of the present study was to ascertain whether deficits in strategy implementation and episodic memory that occur after ablation of the entire prefrontal cortex can be ascribed to damage to the orbital prefrontal cortex. Rhesus monkeys were preoperatively trained on two behavioral tasks, the performance of both of which is severely impaired by the disconnection of frontal cortex from inferotemporal cortex. In the strategy implementation task, monkeys were required to learn about two categories of objects, each associated with a different strategy that had to be performed to obtain food reward. The different strategies had to be applied flexibly to optimize the rate of reward delivery. In the scene memory task, monkeys learned 20 new object-in-place discrimination problems in each session. Monkeys were tested on both tasks before and after bilateral ablation of orbital prefrontal cortex. These lesions impaired new scene learning but had no effect on strategy implementation. This finding supports a role for the orbital prefrontal cortex in memory but places limits on the involvement of orbital prefrontal cortex in the representation and implementation of behavioral goals and strategies.

  3. Perception of Self-Motion and Regulation of Walking Speed in Young-Old Adults.

    PubMed

    Lalonde-Parsi, Marie-Jasmine; Lamontagne, Anouk

    2015-07-01

    Whether a reduced perception of self-motion contributes to poor walking speed adaptations in older adults is unknown. In this study, speed discrimination thresholds (perceptual task) and walking speed adaptations (walking task) were compared between young (19-27 years) and young-old individuals (63-74 years), and the relationship between the performance on the two tasks was examined. Participants were evaluated while viewing a virtual corridor in a helmet-mounted display. Speed discrimination thresholds were determined using a staircase procedure. Walking speed modulation was assessed on a self-paced treadmill while exposed to different self-motion speeds ranging from 0.25 to 2 times the participants' comfortable speed. For each speed, participants were instructed to match the self-motion speed described by the moving corridor. On the walking task, participants displayed smaller walking speed errors at comfortable walking speeds compared with slower or faster speeds. The young-old adults presented larger speed discrimination thresholds (perceptual experiment) and larger walking speed errors (walking experiment) compared with young adults. Larger walking speed errors were associated with higher discrimination thresholds. The enhanced performance on the walking task at comfortable speed suggests that intersensory calibration processes are influenced by experience, hence optimized for frequently encountered conditions. The altered performance of the young-old adults on the perceptual and walking tasks, as well as the relationship observed between the two tasks, suggests that a poor perception of visual motion information may contribute to the poor walking speed adaptations that arise with aging.
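
    The abstract names a staircase procedure without giving its rules, so the sketch below shows a generic 2-down/1-up staircase run against a simulated observer, with the threshold estimated from the last reversals. The step size, stopping rule, and simulated-observer parameters are assumptions for illustration only, not the study's procedure.

        # Generic 2-down/1-up staircase sketch with a simulated observer.
        import random

        def simulate_staircase(true_threshold=0.15, start=0.50, step=0.05,
                               n_reversals=10, seed=1):
            random.seed(seed)
            level, direction, correct_streak = start, -1, 0
            reversals = []
            while len(reversals) < n_reversals:
                # simulated observer: correct when the speed difference exceeds
                # the threshold plus a little internal noise
                correct = level + random.gauss(0, 0.03) > true_threshold
                if correct:
                    correct_streak += 1
                    if correct_streak == 2:       # two correct in a row -> harder
                        correct_streak = 0
                        if direction == +1:
                            reversals.append(level)
                        direction = -1
                        level = max(0.01, level - step)
                else:
                    correct_streak = 0
                    if direction == -1:
                        reversals.append(level)
                    direction = +1
                    level += step
            return sum(reversals[-6:]) / 6        # threshold: mean of last reversals

        print(round(simulate_staircase(), 3))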

  4. Diagnosis of multiple sclerosis from EEG signals using nonlinear methods.

    PubMed

    Torabi, Ali; Daliri, Mohammad Reza; Sabzposhan, Seyyed Hojjat

    2017-12-01

    EEG signals carry essential and important information about the brain and neural diseases. The main purpose of this study is to classify two groups, healthy volunteers and Multiple Sclerosis (MS) patients, using nonlinear features of EEG signals recorded while performing cognitive tasks. EEG signals were recorded while users were doing two different attentional tasks: one task was based on detecting a desired change in color luminance and the other on detecting a desired change in direction of motion. The EEG signals were analyzed in two ways: analysis of the signals without rhythm decomposition and analysis of the EEG sub-bands. After recording and preprocessing, the time delay embedding method was used for state space reconstruction; embedding parameters were determined for the original signals and their sub-bands. Nonlinear methods were then used in the feature extraction phase. To reduce the feature dimension, scalar feature selection was performed using the T-test and Bhattacharyya criteria. The data were then classified using linear support vector machines (SVM) and the k-nearest neighbor (KNN) method. The best combination of criterion and classifier was determined for each task by comparing performances. For both tasks, the best results were achieved using the T-test criterion and the SVM classifier. For the direction-based and the color-luminance-based tasks, maximum classification performances were 93.08% and 79.79%, respectively, reached using the optimal set of features. Our results show that the nonlinear dynamic features of EEG signals seem to be useful and effective in MS diagnosis.
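
    A compact sketch of the final selection-and-classification stage described above, using the real SciPy and scikit-learn interfaces but synthetic numbers in place of the nonlinear EEG features: features are ranked by a two-sample t-test, the top few are kept, and a linear SVM is cross-validated. In a real analysis the selection step should be nested inside the cross-validation to avoid leakage.

        # Sketch of t-test feature ranking followed by a linear SVM, on synthetic data.
        import numpy as np
        from scipy.stats import ttest_ind
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 40))                  # 60 subjects x 40 candidate features
        y = np.repeat([0, 1], 30)                      # 0 = healthy, 1 = MS (illustrative labels)
        X[y == 1, :5] += 0.8                           # make the first 5 features informative

        t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0) # per-feature t statistics
        top_k = np.argsort(-np.abs(t))[:5]             # keep the most discriminative features
        clf = SVC(kernel="linear")
        scores = cross_val_score(clf, X[:, top_k], y, cv=5)
        print("selected features:", top_k, "accuracy:", scores.mean().round(3))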

  5. A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping

    2013-01-01

    In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
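
    For orientation, the sketch below shows the nonconvex alternating scheme that ASO is built on, with the per-task weights written as u_t and the shared h-dimensional subspace as Theta: with Theta fixed, each task solves a ridge-like problem that penalizes the component of u_t outside the subspace, and with the weights fixed, Theta is re-estimated from the top left singular vectors of the weight matrix. It is not the convex rASO relaxation or the BCD/APG solvers described in the abstract, and the synthetic data are illustrative.

        # Compact sketch of ASO-style alternating optimization on synthetic tasks.
        import numpy as np

        def aso_sketch(tasks, h=2, alpha=1.0, n_iter=20):
            """tasks: list of (X_t, y_t); returns per-task weights U and shared Theta."""
            d = tasks[0][0].shape[1]
            theta = np.linalg.qr(np.random.default_rng(0).normal(size=(d, h)))[0].T  # h x d, orthonormal rows
            U = np.zeros((d, len(tasks)))
            for _ in range(n_iter):
                P = theta.T @ theta                    # projector onto the shared subspace
                for t, (X, y) in enumerate(tasks):
                    # penalize the part of u_t lying outside the shared subspace
                    A = X.T @ X + alpha * (np.eye(d) - P)
                    U[:, t] = np.linalg.solve(A, X.T @ y)
                # re-estimate Theta from the top-h left singular vectors of U
                left, _, _ = np.linalg.svd(U, full_matrices=False)
                theta = left[:, :h].T
            return U, theta

        rng = np.random.default_rng(1)
        shared = rng.normal(size=(6, 2))               # tasks share a 2-dimensional structure
        tasks = []
        for _ in range(4):
            w = shared @ rng.normal(size=2)
            X = rng.normal(size=(50, 6))
            tasks.append((X, X @ w + 0.1 * rng.normal(size=50)))
        U, theta = aso_sketch(tasks)
        print(theta.round(2))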

  6. The effects of arousal reappraisal on stress responses, performance and attention.

    PubMed

    Sammy, Nadine; Anstiss, Paul A; Moore, Lee J; Freeman, Paul; Wilson, Mark R; Vine, Samuel J

    2017-11-01

    This study examined the effects of arousal reappraisal on cardiovascular responses, demand and resource evaluations, self-confidence, performance and attention under pressurized conditions. A recent study by Moore et al. [2015. Reappraising threat: How to optimize performance under pressure. Journal of Sport and Exercise Psychology, 37(3), 339-343. doi: 10.1123/jsep.2014-0186 ] suggested that arousal reappraisal is beneficial to the promotion of challenge states and leads to improvements in single-trial performance. This study aimed to further the work of Moore and colleagues (2015) by examining the effects of arousal reappraisal on cardiovascular responses, demand and resource evaluations, self-confidence, performance and attention in a multi-trial pressurized performance situation. Participants were randomly assigned to either an arousal reappraisal intervention or control condition, and completed a pressurized dart throwing task. The intervention encouraged participants to view their physiological arousal as facilitative rather than debilitative to performance. Measures of cardiovascular reactivity, demand and resource evaluations, self-confidence, task performance and attention were recorded. The reappraisal group displayed more favorable cardiovascular reactivity and reported higher resource evaluations and higher self-confidence than the control group but no task performance or attention effects were detected. These findings demonstrate the strength of arousal reappraisal in promoting adaptive stress responses, perceptions of resources and self-confidence.

  7. Biogeography-based combinatorial strategy for efficient autonomous underwater vehicle motion planning and task-time management

    NASA Astrophysics Data System (ADS)

    Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.

    2016-12-01

    Autonomous Underwater Vehicles (AUVs) are capable of spending long periods of time carrying out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance an underwater vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time across a large-scale, waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners and guarantees maximization of mission productivity for a single-vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and a statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the contributed model and its feasible application in real experiments.
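
    As background for readers unfamiliar with BBO, the sketch below runs a generic biogeography-based optimization loop on a toy continuous objective: candidate solutions (habitats) are ranked by cost, good habitats emigrate variables to poor ones according to migration rates, and occasional mutation keeps diversity. It is not the paper's route-path planner, cost function, or constraint handling.

        # Generic biogeography-based optimization (BBO) loop on a toy objective.
        import numpy as np

        def bbo_minimize(f, dim, bounds, pop=20, gens=100, p_mut=0.05, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            habitats = rng.uniform(lo, hi, size=(pop, dim))
            for _ in range(gens):
                cost = np.array([f(h) for h in habitats])
                habitats = habitats[np.argsort(cost)]        # best (lowest cost) first
                mu = np.linspace(1.0, 0.0, pop)              # emigration: high for good habitats
                lam = 1.0 - mu                               # immigration: high for poor habitats
                new = habitats.copy()
                for i in range(pop):
                    for d in range(dim):
                        if rng.random() < lam[i]:            # immigrate this variable
                            src = rng.choice(pop, p=mu / mu.sum())   # roulette wheel by emigration rate
                            new[i, d] = habitats[src, d]
                        if rng.random() < p_mut:             # random mutation
                            new[i, d] = rng.uniform(lo, hi)
                new[0] = habitats[0]                         # elitism: keep the best habitat
                habitats = new
            cost = np.array([f(h) for h in habitats])
            return habitats[np.argmin(cost)], cost.min()

        best, val = bbo_minimize(lambda x: float(np.sum(x ** 2)), dim=4, bounds=(-5, 5))
        print(best.round(3), round(val, 4))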

  8. Advanced automation for in-space vehicle processing

    NASA Technical Reports Server (NTRS)

    Sklar, Michael; Wegerif, D.

    1990-01-01

    The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that the extravehicular activity (EVA) and, to some extent, intravehicular activity (IVA) manpower requirements for the required processing tasks far exceed the available manpower. Furthermore, many processing tasks are either hazardous operations or exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human-performed tasks beyond the planned baseline automated tasks. Both physical tasks, such as manipulation, assembly, and actuation, and cognitive tasks, such as visual inspection, monitoring and diagnosis, and task planning, are considered. During this first year of activity, both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'primitive' task descriptions. Primitive or standard tasks have been developed both for manual or crew processing and for automated machine processing.

  9. Costs of task allocation with local feedback: Effects of colony size and extra workers in social insects and other multi-agent systems.

    PubMed

    Radeva, Tsvetomira; Dornhaus, Anna; Lynch, Nancy; Nagpal, Radhika; Su, Hsin-Hao

    2017-12-01

    Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a 'surplus' set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically.

  10. Costs of task allocation with local feedback: Effects of colony size and extra workers in social insects and other multi-agent systems

    PubMed Central

    Dornhaus, Anna; Su, Hsin-Hao

    2017-01-01

    Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a ‘surplus’ set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically. PMID:29240763

  11. Simulation-based training in flexible fibreoptic intubation: A randomised study.

    PubMed

    Nilsson, Philip M; Russell, Lene; Ringsted, Charlotte; Hertz, Peter; Konge, Lars

    2015-09-01

    Flexible fibreoptic intubation (FOI) is a key element in difficult airway management. Training of FOI skills is an important part of the anaesthesiology curriculum. Simulation-based training has been shown to be effective when learning FOI, but the optimal structure of the training is debated. The aspect of dividing the training into segments (part-task training) or assembling into one piece (whole-task training) has not been studied. The aims of this study were to compare the effect of training the motor skills of FOI as part-task training or as whole-task training and to relate the performance levels achieved by the novices to the standard of performance of experienced FOI practitioners. A randomised controlled study. Centre for Clinical Education, University of Copenhagen and the Capital Region of Denmark, between January and April 2013. Twenty-three anaesthesia residents in their first year of training in anaesthesiology with no experience in FOI, and 10 anaesthesia consultants experienced in FOI. The novices to FOI were allocated randomly to receive either part-task or whole-task training of FOI on virtual reality simulators. Procedures were subsequently trained on a manikin and assessed by an experienced anaesthesiologist. The experienced group was assessed in the same manner with no prior simulation-based training. The primary outcome measure was the score of performance on testing FOI skills on a manikin. A positive learning effect was observed in both the part-task training group and the whole-task training group. There was no statistically significant difference in final performance scores of the two novice groups (P = 0.61). Furthermore, both groups of novices were able to improve their skill level significantly by the end of manikin training to levels comparable to the experienced anaesthesiologists. Part-task training did not prove more effective than whole-task training when training novices in FOI skills. FOI is very suitable for simulation-based training and segmentation of the procedure during training is not necessary.

  12. Measuring Motivation and Reward-Related Decision Making in the Rodent Operant Touchscreen System.

    PubMed

    Heath, Christopher J; Phillips, Benjamin U; Bussey, Timothy J; Saksida, Lisa M

    2016-01-04

    This unit is designed to facilitate implementation of the fixed and progressive ratio paradigms and the effort-related choice task in the rodent touchscreen apparatus to permit direct measurement of motivation and reward-related decision making in this equipment. These protocols have been optimized for use in the mouse and reliably yield stable performance levels that can be enhanced or suppressed by systemic pharmacological manipulation. Instructions are also provided for the adjustment of task parameters to permit use in mouse models of neurodegenerative disease. These tasks expand the utility of the rodent touchscreen apparatus beyond the currently available battery of cognitive assessment paradigms. Copyright © 2016 John Wiley & Sons, Inc.

  13. Individual differences in strategic flight management and scheduling

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Raby, Mireille

    1991-01-01

    A group of 30 instrument-rated pilots flew simulator approaches to three airports under low, medium, and high workload conditions. An analysis is presented of the differences in discrete task scheduling between the 10 highest- and 10 lowest-performing pilots in the sample; this categorization was based on the mean of various flight-profile measures. The two groups were found to differ from each other only in terms of the time when specific events were conducted and the optimality of scheduling for certain high-priority tasks. These results are assessed in view of the relative independence of task-management skills from aircraft-control skills.

  14. Pulse Detonation Rocket Engine Research at NASA Marshall

    NASA Technical Reports Server (NTRS)

    Morris, Christopher I.

    2003-01-01

    Pulse detonation rocket engines (PDREs) offer potential performance improvements over conventional designs, but represent a challenging modeling task. A quasi 1-D, finite-rate chemistry CFD model for a PDRE is described and implemented. A parametric study of the effect of blowdown pressure ratio on the performance of an optimized, fixed PDRE nozzle configuration is reported. The results are compared to a steady-state rocket system using similar modeling assumptions.

  15. Optical Quality, Threshold Target Identification, and Military Target Task Performance After Advanced Keratorefractive Surgery

    DTIC Science & Technology

    2011-05-01

    In a randomized treatment trial, 224 nearsighted soldiers will be enrolled to receive wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront-optimized (WFO) PRK, or WFO LASIK (56 in each group). The design will enable comparison to preoperative performance as well as comparisons between treatment groups. Keywords: Military, Refractive Surgery, PRK, LASIK.

  16. Geometric subspace methods and time-delay embedding for EEG artifact removal and classification.

    PubMed

    Anderson, Charles W; Knight, James N; O'Connor, Tim; Kirby, Michael J; Sokolov, Artem

    2006-06-01

    Generalized singular-value decomposition is used to separate multichannel electroencephalogram (EEG) into components found by optimizing a signal-to-noise quotient. These components are used to filter out artifacts. Short-time principal components analysis of time-delay embedded EEG is used to represent windowed EEG data to classify EEG according to which mental task is being performed. Examples are presented of the filtering of various artifacts and results are shown of classification of EEG from five mental tasks using committees of decision trees.
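
    The representation steps named above (time-delay embedding followed by short-time principal components) can be sketched in a few lines on a synthetic channel. The embedding dimension, delay, and window length below are illustrative choices rather than the values used in the study, and the generalized singular-value decomposition filtering stage is not included.

        # Sketch: time-delay embed a synthetic EEG-like channel, then compute
        # short-time principal components of the embedded windows.
        import numpy as np

        def time_delay_embed(x, dim, delay):
            n = len(x) - (dim - 1) * delay
            return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

        def short_time_pca(embedded, win, n_comp):
            feats = []
            for start in range(0, len(embedded) - win + 1, win):
                seg = embedded[start:start + win]
                seg = seg - seg.mean(axis=0)
                # principal directions from the SVD of the windowed, embedded signal
                _, s, vt = np.linalg.svd(seg, full_matrices=False)
                feats.append((seg @ vt[:n_comp].T).std(axis=0))   # spread along top components
            return np.array(feats)

        rng = np.random.default_rng(0)
        t = np.arange(2000) / 250.0                               # 8 s at an assumed 250 Hz
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # noisy alpha-like signal
        emb = time_delay_embed(eeg, dim=8, delay=2)
        print(short_time_pca(emb, win=250, n_comp=3).shape)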

  17. Integrated source and channel encoded digital communications system design study

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1974-01-01

    Studies were performed on the digital communication system for the direct communication links from ground to the space shuttle and the links involving the Tracking and Data Relay Satellite (TDRS). Three main tasks were performed: (1) channel encoding/decoding parameter optimization for forward and reverse TDRS links; (2) integration of command encoding/decoding and channel encoding/decoding; and (3) a modulation/coding interface study. The general communication environment is presented to provide the necessary background for the tasks and an understanding of the implications of the study results.

  18. High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models

    NASA Technical Reports Server (NTRS)

    Manning, Valerie Michelle

    1999-01-01

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.

  19. Aerospace engineering design by systematic decomposition and multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method is described for the systematic analysis and optimization of large engineering systems by decomposition of a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  20. Low-Complexity Discriminative Feature Selection From EEG Before and After Short-Term Memory Task.

    PubMed

    Behzadfar, Neda; Firoozabadi, S Mohammad P; Badie, Kambiz

    2016-10-01

    A reliable and unobtrusive quantification of changes in cortical activity during a short-term memory task can be used to evaluate the efficacy of interfaces and to provide real-time user-state information. In this article, we investigate changes in electroencephalogram signals during short-term memory with respect to baseline activity. The electroencephalogram signals were analyzed using 9 linear and nonlinear/dynamic measures. We applied the statistical Wilcoxon test and the Davies-Bouldin criterion to select optimal discriminative features. The results show that, among the features, permutation entropy significantly increased in the frontal lobe and occipital second lower alpha band activity decreased during the memory task. These 2 features reflect the same mental task; however, their correlation with the memory task varies across intervals. In conclusion, it is suggested that the combination of the 2 features would improve the performance of memory-based neurofeedback systems. © EEG and Clinical Neuroscience Society (ECNS) 2016.
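
    Since permutation entropy is the feature the results single out, a minimal implementation is sketched below: ordinal patterns of embedded samples are counted and their Shannon entropy is normalized by its maximum. The embedding order and delay are illustrative; a regular signal should score well below an irregular one.

        # Minimal permutation entropy computation on synthetic signals.
        import math
        from collections import Counter
        import numpy as np

        def permutation_entropy(x, order=3, delay=1, normalize=True):
            patterns = Counter()
            for i in range(len(x) - (order - 1) * delay):
                window = x[i: i + order * delay: delay]
                patterns[tuple(np.argsort(window))] += 1      # ordinal pattern of the window
            total = sum(patterns.values())
            pe = -sum((c / total) * math.log2(c / total) for c in patterns.values())
            return pe / math.log2(math.factorial(order)) if normalize else pe

        rng = np.random.default_rng(0)
        regular = np.sin(np.arange(1000) * 0.1)               # predictable signal -> low PE
        noisy = rng.normal(size=1000)                         # irregular signal -> PE near 1
        print(round(permutation_entropy(regular), 3), round(permutation_entropy(noisy), 3))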

  1. Different slopes for different folks: alpha and delta EEG power predict subsequent video game learning rate and improvements in cognitive control tasks.

    PubMed

    Mathewson, Kyle E; Basak, Chandramallika; Maclin, Edward L; Low, Kathy A; Boot, Walter R; Kramer, Arthur F; Fabiani, Monica; Gratton, Gabriele

    2012-12-01

    We hypothesized that control processes, as measured using electrophysiological (EEG) variables, influence the rate of learning of complex tasks. Specifically, we measured alpha power, event-related spectral perturbations (ERSPs), and event-related brain potentials during early training of the Space Fortress task, and correlated these measures with subsequent learning rate and performance in transfer tasks. Once initial score was partialled out, the best predictors were frontal alpha power and alpha and delta ERSPs, but not P300. By combining these predictors, we could explain about 50% of the learning rate variance and 10%-20% of the variance in transfer to other tasks using only pretraining EEG measures. Thus, control processes, as indexed by alpha and delta EEG oscillations, can predict learning and skill improvements. The results are of potential use to optimize training regimes. Copyright © 2012 Society for Psychophysiological Research.

  2. Vibrotactile grasping force and hand aperture feedback for myoelectric forearm prosthesis users.

    PubMed

    Witteveen, Heidi J B; Rietman, Hans S; Veltink, Peter H

    2015-06-01

    User feedback about grasping force and hand aperture is very important in object handling with myoelectric forearm prostheses but is lacking in current prostheses. Vibrotactile feedback increases the performance of healthy subjects in virtual grasping tasks, but no extensive validation on potential users has been performed. The objective was to investigate the performance of subjects with upper-limb loss in grasping tasks with vibrotactile stimulation providing hand aperture and grasping force feedback. The design was a cross-over trial. A total of 10 subjects with upper-limb loss performed virtual grasping tasks while perceiving vibrotactile feedback. Hand aperture feedback was provided through an array of coin motors and grasping force feedback through a single miniature stimulator or an array of coin motors. Objects with varying sizes and weights had to be grasped by a virtual hand. Percentages of correctly applied hand apertures and correct grasping force levels were all higher for the vibrotactile feedback condition compared to the no-feedback condition. With visual feedback, the results were always better compared to the vibrotactile feedback condition. Task durations were comparable for all feedback conditions. Vibrotactile grasping force and hand aperture feedback improves the grasping performance of subjects with upper-limb loss. However, it should be investigated whether this is of additional value in daily-life tasks. This study is a first step toward the implementation of sensory vibrotactile feedback for users of myoelectric forearm prostheses. Grasping force feedback is crucial for optimal object handling, and hand aperture feedback is essential for reducing the required visual attention. Grasping performance with feedback was evaluated for the potential users. © The International Society for Prosthetics and Orthotics 2014.

  3. The effects of a mid-task break on the brain connectome in healthy participants: A resting-state functional MRI study.

    PubMed

    Sun, Yu; Lim, Julian; Dai, Zhongxiang; Wong, KianFoong; Taya, Fumihiko; Chen, Yu; Li, Junhua; Thakor, Nitish; Bezerianos, Anastasios

    2017-05-15

    Although rest breaks are commonly administered as a countermeasure to reduce mental fatigue and boost cognitive performance, the effects of taking a break on behavior are not consistent. Moreover, our understanding of the underlying neural mechanisms of rest breaks and how they modulate mental fatigue is still rudimentary. In this study, we investigated the effects of receiving a rest break on the topological properties of brain connectivity networks via a two-session experimental paradigm, in which one session comprised four successive blocks of a mentally demanding visual selective attention task (No-rest session), whereas the other contained a rest break between the second and third task blocks (Rest session). Functional brain networks were constructed using resting-state functional MRI data recorded from 20 healthy adults before and after the performance of the task blocks. Behaviorally, subjects displayed robust time-on-task (TOT) declines, as reflected by increasingly slower reaction times as the test progressed and lower post-task self-reported ratings of engagement. However, we did not find a significant effect of administering a mid-task break on task performance. Compared to pre-task measurements, post-task functional brain networks demonstrated an overall decrease of optimal small-world properties together with lower global efficiency. Specifically, we found TOT-related reductions of nodal efficiency in brain regions that mainly resided in subcortical areas. More interestingly, a significant block-by-session interaction was revealed in local efficiency, attributable to a significant post-task decline in the No-rest session and preserved local efficiency when a mid-task break was introduced in the Rest session. Taken together, these findings augment our understanding of how the resting brain reorganizes following prolonged task performance, suggest dissociable processes between the neural mechanisms of fatigue and recovery, and provide some of the first quantitative insights into the cognitive neuroscience of work and rest. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A Rational Analysis of the Selection Task as Optimal Data Selection.

    ERIC Educational Resources Information Center

    Oaksford, Mike; Chater, Nick

    1994-01-01

    Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)

  5. Posture-Motor and Posture-Ideomotor Dual-Tasking: A Putative Marker of Psychomotor Retardation and Depressive Rumination in Patients With Major Depressive Disorder.

    PubMed

    Aftanas, Lyubomir I; Bazanova, Olga M; Novozhilova, Nataliya V

    2018-01-01

    Background: Recent studies have demonstrated that the assessment of postural performance may be a potentially reliable and objective marker of psychomotor retardation (PMR) in major depressive disorder (MDD). One of the important facets of MDD-related PMR is reflected in disrupted central mechanisms of psychomotor control, heavily influenced by compelling maladaptive depressive rumination. In view of this, we designed a research paradigm that included sequential execution of a simple single-posture task followed by more challenging divided-attention posture tasks involving concurrent motor and ideomotor workloads. A further difficulty dimension involved executing all tasks under eyes-open (EO; easy) and eyes-closed (EC; difficult) conditions. We aimed at investigating the interplay between the severity of MDD, depressive rumination, and the efficiency of postural performance. Methods: Compared with 24 age- and body mass index-matched healthy controls (HCs), 26 patients with MDD sequentially executed three experimental tasks: (1) a single-posture task of maintaining a quiet stance (ST), (2) an actual posture-motor dual task (AMT), and (3) a mental/imaginary posture-motor dual task (MMT). All tasks were performed in the EO and EC conditions. The primary dependent variable was the amount of kinetic energy (E) expended for the center of pressure deviations (CoPDs), whereas the absolute divided-attention cost index expressed the energy cost of dual-tasking versus the single-posture task according to the formula ΔE = E_dual-task - E_single-task. Results: The signs of PMR in the MDD group were objectively indexed by deficient posture control in the EC condition along with overall slowness of fine motor and ideomotor activity. Another important and probably more challenging feature of the findings was that the posture deficit manifested in the ST condition was substantially and significantly attenuated during dual-tasking in the AMT and MMT conditions. A multiple linear regression analysis further showed that the dual-tasking energy cost (i.e., ΔE) significantly predicted clinical scores of MDD severity and depressive rumination. Conclusion: The findings suggest that execution of a concurrent actual or imaginary fine motor task with the eyes closed deallocates attentional resources from compelling maladaptive depressive rumination, thereby attenuating the absolute dual-tasking energy costs for balance maintenance in patients with MDD. Significance: Quantitative assessment of PMR through measures of postural performance in dual-tasking may be useful to capture the negative impact of past depressive episodes, optimize personalized treatment selection, and improve the understanding of the pathophysiological mechanisms underlying MDD.
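
    The cost index defined above can be computed once a kinetic-energy value E is available for each condition; the toy snippet below uses a crude proxy (squared center-of-pressure velocity summed over the trial) purely to make the arithmetic concrete. The proxy, sampling rate, and simulated sway traces are assumptions and not the study's actual computation of E.

        # Toy computation of the dual-task cost index Delta_E = E_dual - E_single.
        import numpy as np

        def cop_energy(cop_xy, fs=100.0, mass=70.0):
            v = np.diff(cop_xy, axis=0) * fs                  # CoP velocity (m/s)
            return float(np.sum(0.5 * mass * np.sum(v ** 2, axis=1)) / fs)  # crude energy proxy

        rng = np.random.default_rng(0)
        single = np.cumsum(rng.normal(0, 1e-4, size=(3000, 2)), axis=0)   # quieter stance
        dual = np.cumsum(rng.normal(0, 2e-4, size=(3000, 2)), axis=0)     # noisier stance
        delta_e = cop_energy(dual) - cop_energy(single)
        print(round(delta_e, 4))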

  6. Recreation Embedded State Tuning for Optimal Readiness and Effectiveness (RESTORE)

    NASA Technical Reports Server (NTRS)

    Pope, Alan T.; Prinzel, Lawrence J., III

    2005-01-01

    Physiological self-regulation training is a behavioral medicine intervention that has demonstrated capability to improve psychophysiological coping responses to stressful experiences and to foster optimal behavioral and cognitive performance. Once developed, these psychophysiological skills require regular practice for maintenance. A concomitant benefit of these physiologically monitored practice sessions is the opportunity to track crew psychophysiological responses to the challenges of the practice task in order to detect shifts in adaptability that may foretell performance degradation. Long-duration missions will include crew recreation periods that will afford physiological self-regulation training opportunities. However, to promote adherence to the regimen, the practice experience that occupies their recreation time must be perceived by the crew as engaging and entertaining throughout repeated reinforcement sessions on long-duration missions. NASA biocybernetic technologies and publications have developed a closed-loop concept that involves adjusting or modulating (cybernetic, for governing) a person's task environment based upon a comparison of that person's physiological responses (bio-) with a training or performance criterion. This approach affords the opportunity to deliver physiological self-regulation training in an entertaining and motivating fashion and can also be employed to create a conditioned association between effective performance state and task execution behaviors, while enabling tracking of individuals psychophysiological status over time in the context of an interactive task challenge. This paper describes the aerospace spin-off technologies in this training application area as well as the current spin-back application of the technologies to long-duration missions - the Recreation Embedded State Tuning for Optimal Readiness and Effectiveness (RESTORE) concept. The RESTORE technology is designed to provide a physiological self-regulation training countermeasure for maintaining and reinforcing cognitive readiness, resilience under psychological stress, and effective mood states in long-duration crews. The technology consists of a system for delivering physiological self-regulation training and for tracking crew central and autonomic nervous system function; the system interface is designed to be experienced as engaging and entertaining throughout repeated training sessions on long-duration missions. Consequently, this self-management technology has threefold capability for recreation, behavioral health problem prophylaxis and remediation, and psychophysiological assay. The RESTORE concept aims to reduce the risk of future manned exploration missions by enhancing the capability of individual crewmembers to self-regulate cognitive states through recreation-embedded training protocols to effectively deal with the psychological toll of long-duration space flight.

  7. Advanced, Low/Zero Emission Boiler Design and Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babcock /Wilcox; Illinois State Geological; Worley Parsons

    2007-06-30

    In partnership with the U.S. Department of Energy's National Energy Technology Laboratory, B&W and Air Liquide are developing and optimizing the oxy-combustion process for retrofitting existing boilers as well as new plants. The main objectives of the project are to: (1) demonstrate the feasibility of the oxy-combustion technology with flue gas recycle in a 5-million Btu/hr coal-fired pilot boiler, (2) measure its performance in terms of emissions and boiler efficiency while selecting the right oxygen injection and flue gas recycle strategies, and (3) perform technical and economic feasibility studies for application of the technology in demonstration and commercial scale boilers. This document summarizes the work performed during the period of performance of the project (October 2002 to June 2007). Detailed technical results are reported in the corresponding topical reports attached as an appendix to this report. Task 1 (Site Preparation) was completed in 2003. The experimental pilot-scale O2/CO2 combustion tests of Task 2 (Experimental Test Performance) were completed in Q2 2004. The process simulation and cost assessment of Task 3 (Techno-Economic Study) were completed in Q1 2005, and the topical report on Task 3 was finalized and submitted to DOE in Q3 2005. The calculations of Task 4 (Retrofit Recommendation and Preliminary Design of a New Generation Boiler) were completed in 2004. In Task 6 (engineering study on retrofit applications), the engineering study on a 25 MWe unit was completed in Q2 2008, along with the corresponding cost assessment. In Task 7 (evaluation of new oxy-fuel power plant concepts), based on the design basis document prepared in 2005, the design and cost estimates of the air separation units, the boiler islands, and the CO2 compression trains were completed for both the supercritical and ultra-supercritical case studies. The final report of Task 7 was published by DOE in October 2007.

  8. Haptic-Based Perception-Empathy Biofeedback Enhances Postural Motor Learning During High-Cognitive Load Task in Healthy Older Adults

    PubMed Central

    Yasuda, Kazuhiro; Saichi, Kenta; Iwata, Hiroyasu

    2018-01-01

    Falls and fall-induced injuries are major global public health problems, and sensory input impairment in older adults results in significant limitations in feedback-type postural control. A haptic-based biofeedback (BF) system can be used for augmenting somatosensory input in older adults, and the application of this BF system can increase the objectivity of the feedback and encourage comparison with that provided by a trainer. Nevertheless, an optimal BF system that focuses on interpersonal feedback for balance training in older adults has not been proposed. Thus, we proposed a haptic-based perception-empathy BF system that provides information regarding the older adult's center-of-foot pressure pattern to the trainee and trainer for refining the motor learning effect. The first objective of this study was to examine the effect of this balance training regimen in healthy older adults performing a postural learning task. Second, this study aimed to determine whether BF training required high cognitive load to clarify its practicability in real-life settings. Twenty older adults were assigned to two groups: BF and control groups. Participants in both groups tried balance training in the single-leg stance while performing a cognitive task (i.e., serial subtraction task). Retention was tested 24 h later. Testing comprised balance performance measures (i.e., 95% confidence ellipse area and mean velocity of sway) and dual-task performance (number of responses and correct answers). Measurements of postural control using a force plate revealed that the stability of the single-leg stance was significantly lower in the BF group than in the control group during the balance task. The BF group retained the improvement in the 95% confidence ellipse area 24 h after the retention test. Results of dual-task performance during the balance task were not different between the two groups. These results confirmed the potential benefit of the proposed balance training regimen in designing successful motor learning programs for preventing falls in older adults. PMID:29868597

  10. Increases in Emotional Intelligence After an Online Training Program Are Associated With Better Decision-Making on the Iowa Gambling Task.

    PubMed

    Alkozei, Anna; Smith, Ryan; Demers, Lauren A; Weber, Mareen; Berryhill, Sarah M; Killgore, William D S

    2018-01-01

    Higher levels of emotional intelligence have been associated with better inter- and intrapersonal functioning. In the present study, 59 healthy men and women were randomized into either a three-week online training program targeted to improve emotional intelligence (n = 29), or a placebo control training program targeted to improve awareness of nonemotional aspects of the environment (n = 30). Compared to placebo, participants in the emotional intelligence training group showed increased performance on the total emotional intelligence score of the Mayer-Salovey-Caruso Emotional Intelligence Test, a performance measure of emotional intelligence, as well as subscales of perceiving emotions and facilitating thought. Moreover, after emotional intelligence training, but not after placebo training, individuals displayed the ability to arrive at optimal performance faster (i.e., they showed a faster learning rate) during an emotion-guided decision-making task (i.e., the Iowa Gambling Task). More specifically, although both groups showed similar performance at the start of the Iowa Gambling Task both pre- and posttraining, participants in the emotional intelligence training group learned to choose advantageous over disadvantageous decks more often than those in the placebo training group by the time they reached the "hunch" period of the task (i.e., the point in the task when implicit task learning is thought to have occurred). Greater total improvements in performance on the Iowa Gambling Task from pre- to posttraining in the emotional intelligence training group were also positively correlated with pre- to posttraining changes in Mayer-Salovey-Caruso Emotional Intelligence Test scores, in particular with changes in the ability to perceive emotions. The present study provides preliminary evidence that emotional intelligence can be trained with the help of an online training program targeted at adults; it also suggests that changes in emotional intelligence, as a result of such a program, can lead to improved emotion-guided decision-making.

  11. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    PubMed

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    A flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, with the mean AGV operation efficiency found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model were performed by simulation via Flexsim software.
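
    As a rough illustration of the kind of evolutionary search described above, the sketch below runs a plain genetic algorithm over AGV-to-task assignments with a weighted makespan/fleet-size objective. The task durations, weights, and operators are illustrative assumptions; the paper's hybrid GA-PSO and battery-charge constraints are not reproduced here.

    ```python
    import random

    TASKS = [4, 7, 3, 9, 5, 6, 2, 8]       # assumed task travel/handling times
    MAX_AGVS = 4
    W_MAKESPAN, W_FLEET = 1.0, 3.0         # assumed objective weights

    def cost(assign):
        """Weighted sum of makespan and the number of AGVs actually used."""
        loads = [0] * MAX_AGVS
        for task, agv in zip(TASKS, assign):
            loads[agv] += task
        used = sum(1 for load in loads if load > 0)
        return W_MAKESPAN * max(loads) + W_FLEET * used

    def evolve(pop_size=40, generations=200, p_mut=0.2):
        pop = [[random.randrange(MAX_AGVS) for _ in TASKS] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)
            parents = pop[: pop_size // 2]                 # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TASKS))      # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < p_mut:                # reassignment mutation
                    child[random.randrange(len(TASKS))] = random.randrange(MAX_AGVS)
                children.append(child)
            pop = parents + children
        return min(pop, key=cost)

    best = evolve()
    print("assignment:", best, "cost:", cost(best))
    ```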

  13. Modeling and design of a cone-beam CT head scanner using task-based imaging performance optimization

    NASA Astrophysics Data System (ADS)

    Xu, J.; Sisniega, A.; Zbijewski, W.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-04-01

    Detection of acute intracranial hemorrhage (ICH) is important for diagnosis and treatment of traumatic brain injury, stroke, postoperative bleeding, and other head and neck injuries. This paper details the design and development of a cone-beam CT (CBCT) system developed specifically for the detection of low-contrast ICH in a form suitable for application at the point of care. Recognizing such a low-contrast imaging task to be a major challenge in CBCT, the system design began with a rigorous analysis of task-based detectability including critical aspects of system geometry, hardware configuration, and artifact correction. The imaging performance model described the three-dimensional (3D) noise-equivalent quanta using a cascaded systems model that included the effects of scatter, scatter correction, hardware considerations of complementary metal-oxide semiconductor (CMOS) and flat-panel detectors (FPDs), and digitization bit depth. The performance was analyzed with respect to a low-contrast (40-80 HU), medium-frequency task representing acute ICH detection. The task-based detectability index was computed using a non-prewhitening observer model. The optimization was performed with respect to four major design considerations: (1) system geometry (including source-to-detector distance (SDD) and source-to-axis distance (SAD)); (2) factors related to the x-ray source (including focal spot size, kVp, dose, and tube power); (3) scatter correction and selection of an antiscatter grid; and (4) x-ray detector configuration (including pixel size, additive electronics noise, field of view (FOV), and frame rate, including both CMOS and a-Si:H FPDs). Optimal design choices were also considered with respect to practical constraints and available hardware components. The model was verified in comparison to measurements on a CBCT imaging bench as a function of the numerous design parameters mentioned above. An extended geometry (SAD  =  750 mm, SDD  =  1100 mm) was found to be advantageous in terms of patient dose (20 mGy) and scatter reduction, while a more isocentric configuration (SAD  =  550 mm, SDD  =  1000 mm) was found to give a more compact and mechanically favorable configuration with minor tradeoff in detectability. An x-ray source with a 0.6 mm focal spot size provided the best compromise between spatial resolution requirements and x-ray tube power. Use of a modest anti-scatter grid (8:1 GR) at a 20 mGy dose provided slight improvement (~5-10%) in the detectability index, but the benefit was lost at reduced dose. The potential advantages of CMOS detectors over FPDs were quantified, showing that both detectors provided sufficient spatial resolution for ICH detection, while the former provided a potentially superior low-dose performance, and the latter provided the requisite FOV for volumetric imaging in a centered-detector geometry. Task-based imaging performance modeling provides an important starting point for CBCT system design, especially for the challenging task of ICH detection, which is somewhat beyond the capabilities of existing CBCT platforms. The model identifies important tradeoffs in system geometry and hardware configuration, and it supports the development of a dedicated CBCT system for point-of-care application. A prototype suitable for clinical studies is in development based on this analysis.
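
    The figure of merit referenced above, the non-prewhitening detectability index, can be evaluated numerically once the MTF, NPS, and task function are sampled on a frequency grid. The sketch below is a minimal version with placeholder analytic forms for all three (an assumed 0.3 mm pixel-aperture MTF, a toy noise-power spectrum, and a 2 mm / 40 HU disc task); it is not the cascaded systems model of the paper.

    ```python
    import numpy as np
    from scipy.special import j1

    # Frequency grid for an assumed 256x256 ROI with 0.3 mm pixels.
    n, pix = 256, 0.3
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pix))        # cycles/mm
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    df = f[1] - f[0]                                      # frequency bin width

    mtf = np.abs(np.sinc(pix * fx) * np.sinc(pix * fy))   # placeholder aperture MTF
    nps = 1e-6 * (0.5 + fr / fr.max())                    # placeholder noise-power spectrum

    # Task function: detection of a uniform disc (assumed 2 mm radius, 40 HU contrast).
    contrast, radius = 40.0, 2.0
    arg = 2.0 * np.pi * radius * fr
    w_task = contrast * np.pi * radius**2 * np.where(
        arg > 0, 2.0 * j1(np.maximum(arg, 1e-12)) / np.maximum(arg, 1e-12), 1.0)

    # Non-prewhitening detectability index:
    #   d'^2 = [sum(W^2 MTF^2) df^2]^2 / [sum(W^2 MTF^2 NPS) df^2]
    num = (np.sum(w_task**2 * mtf**2) * df**2) ** 2
    den = np.sum(w_task**2 * mtf**2 * nps) * df**2
    print("NPW detectability index d' =", np.sqrt(num / den))
    ```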

  14. The Earth Phenomena Observing System: Intelligent Autonomy for Satellite Operations

    NASA Technical Reports Server (NTRS)

    Ricard, Michael; Abramson, Mark; Carter, David; Kolitz, Stephan

    2003-01-01

    Earth monitoring systems of the future may include large numbers of inexpensive small satellites, tasked in a coordinated fashion to observe both long-term and transient targets. For best performance, a tool which helps operators optimally assign targets to satellites will be required. We present the design of algorithms developed for real-time optimized autonomous planning of large numbers of small single-sensor Earth observation satellites. The algorithms will reduce requirements on the human operators of such a system of satellites, ensure good utilization of system resources, and provide the capability to dynamically respond to temporal terrestrial phenomena. Our initial real-time system model consists of approximately 100 satellites and a large number of points of interest on Earth (e.g., hurricanes, volcanoes, and forest fires) with the objective of maximizing the total science value of observations over time. Several options for calculating the science value of observations include the following: 1) total observation time, 2) number of observations, and 3) the quality (a function of, e.g., sensor type, range, and slant angle) of the observations. An integrated approach using integer programming, optimization and astrodynamics is used to calculate optimized observation and sensor tasking plans.
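
    A minimal sketch of the assignment step at the core of such a planner is shown below: a linear assignment solver matches satellites to observation targets so that total science value is maximized within one planning interval. The value matrix (target priority times a visibility/quality factor) is an illustrative assumption, and the paper's full integer-programming and astrodynamics formulation is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(1)
    n_sats, n_targets = 8, 12

    # Assumed science value of satellite i observing target j in this interval,
    # e.g. a product of target priority and a visibility/slant-angle quality factor.
    priority = rng.uniform(1.0, 10.0, size=n_targets)
    quality = rng.uniform(0.0, 1.0, size=(n_sats, n_targets))   # 0 = not visible
    value = quality * priority

    # Maximize total value by minimizing its negative; each satellite gets one target.
    rows, cols = linear_sum_assignment(-value)
    for sat, tgt in zip(rows, cols):
        print(f"satellite {sat} -> target {tgt} (value {value[sat, tgt]:.2f})")
    print("total science value:", value[rows, cols].sum())
    ```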

  15. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
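
    For reference, the Hotelling observer used above as the comparison baseline can be estimated directly from sample images of the two hypotheses. The sketch below does this for synthetic data; the image dimensions, synthetic signal, and covariance regularization are illustrative assumptions, and the SDIO / variational-Bayes machinery of the paper is not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_train = 64, 500                      # assumed tiny 8x8 images, 500 per class

    # Synthetic training data: signal-absent and signal-present image samples.
    signal = np.zeros(n_pix); signal[27:29] = 1.5
    g0 = rng.normal(size=(n_train, n_pix))                      # background only
    g1 = rng.normal(size=(n_train, n_pix)) + signal             # background + signal

    # Hotelling template: w = K^{-1} (mean_1 - mean_0), with K the average covariance.
    dmean = g1.mean(axis=0) - g0.mean(axis=0)
    K = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    K += 1e-3 * np.eye(n_pix)                                   # regularize the estimate
    w = np.linalg.solve(K, dmean)

    # Hotelling SNR (detectability) and test-statistic separation on fresh samples.
    snr2 = dmean @ w
    t0 = rng.normal(size=(200, n_pix)) @ w
    t1 = (rng.normal(size=(200, n_pix)) + signal) @ w
    print("Hotelling SNR:", np.sqrt(snr2))
    print("empirical separation:", (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t0.var() + t1.var())))
    ```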

  16. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l 0 or l 1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.

  17. Quantum-state comparison and discrimination

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.
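
    The minimum-error benchmark underlying this kind of comparison is the Helstrom bound. Below is a small numerical sketch that evaluates it for two pure qubit states with given priors and checks it against the known closed form for equal priors; the states and priors are arbitrary examples.

    ```python
    import numpy as np

    def helstrom_success(psi0, psi1, p0=0.5, p1=0.5):
        """Optimal success probability for minimum-error discrimination of two
        pure states: 1/2 (1 + || p1 |psi1><psi1| - p0 |psi0><psi0| ||_1)."""
        rho0 = np.outer(psi0, psi0.conj())
        rho1 = np.outer(psi1, psi1.conj())
        gamma = p1 * rho1 - p0 * rho0
        trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
        return 0.5 * (1.0 + trace_norm)

    # Example: two non-orthogonal qubit states with overlap cos(theta).
    theta = np.pi / 6
    psi0 = np.array([1.0, 0.0])
    psi1 = np.array([np.cos(theta), np.sin(theta)])
    print("Helstrom success probability:", helstrom_success(psi0, psi1))
    # Known closed form for equal priors: 1/2 (1 + sqrt(1 - |<psi0|psi1>|^2))
    print("closed form:", 0.5 * (1 + np.sqrt(1 - np.cos(theta) ** 2)))
    ```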

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo

    We analyze quantum algorithms for cloning of a quantum measurement. Our aim is to mimic two uses of a device performing an unknown von Neumann measurement with a single use of the device. When the unknown device has to be used before the bipartite state to be measured is available, we talk about 1 → 2 learning of the measurement; otherwise the task is called 1 → 2 cloning of a measurement. We perform the optimization for both learning and cloning for arbitrary dimension d of the Hilbert space. For 1 → 2 cloning we also propose a simple quantum network that achieves the optimal fidelity. The optimal fidelity for 1 → 2 learning just slightly outperforms the estimate-and-prepare strategy, in which one first estimates the unknown measurement and, depending on the result, suitably prepares the duplicate.

  19. Optimal control model predictions of system performance and attention allocation and their experimental validation in a display design study

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Govindaraj, T.

    1980-01-01

    The influence of different types of predictor displays in a longitudinal vertical takeoff and landing (VTOL) hover task is analyzed in a theoretical study. Several cases with differing amounts of predictive and rate information are compared. The optimal control model of the human operator is used to estimate human and system performance in terms of root-mean-square (rms) values and to compute optimized attention allocation. The only part of the model which is varied to predict these data is the observation matrix. Typical cases are selected for a subsequent experimental validation. The rms values as well as eye-movement data are recorded. The results agree favorably with those of the theoretical study in terms of relative differences. Better matching is achieved by revised model input data.

  20. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
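
    A minimal sketch of the caching idea behind a CCS-style optimization is shown below: signature checks for path segments that have already been seen and verified are skipped by keying a cache on a digest of the segment bytes. The verify_signature stub, the hashing choice, and the unbounded cache are illustrative assumptions, not the BGPSEC specification or the cache management schemes studied in the paper.

    ```python
    import hashlib

    def verify_signature(segment: bytes) -> bool:
        """Stub for an expensive cryptographic verification (assumed to succeed)."""
        return True

    class SegmentCache:
        """Cache of already-verified path segments, keyed by a digest of the bytes."""
        def __init__(self):
            self._verified = set()
            self.hits = self.misses = 0

        def verify(self, segment: bytes) -> bool:
            key = hashlib.sha256(segment).digest()
            if key in self._verified:
                self.hits += 1
                return True                      # skip the expensive re-verification
            self.misses += 1
            ok = verify_signature(segment)
            if ok:
                self._verified.add(key)
            return ok

    # Toy update stream: many updates share common leading path segments.
    cache = SegmentCache()
    updates = [[b"AS65001-sig", b"AS65002-sig"], [b"AS65001-sig", b"AS65003-sig"]]
    for path in updates:
        assert all(cache.verify(seg) for seg in path)
    print("cache hits:", cache.hits, "misses:", cache.misses)
    ```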

  1. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    PubMed

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

    We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE), when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate the localization's performance in terms of metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and hence should not be considered for large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate how little effect the selection of a particular metaheuristic and the variations in its operational parameters have on this optimization problem.

  2. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From the results of this comparison, we found that PSOGO-Senti is more suitable for improving a difficult multi-polarity sentiment analysis problem.
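
    In the same spirit as the PSOGO-Senti search over feature dimensions and SVM parameters, the sketch below uses a small particle swarm to tune two SVM hyperparameters (log10 C, log10 gamma) by cross-validated accuracy on synthetic data; the data, search space, and swarm settings are illustrative assumptions, and the information-gain feature-selection stage is omitted.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=20, random_state=0)

    def fitness(pos):
        """Cross-validated accuracy for parameters encoded as (log10 C, log10 gamma)."""
        clf = SVC(C=10.0 ** pos[0], gamma=10.0 ** pos[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    rng = np.random.default_rng(0)
    n_particles, n_iter = 10, 20
    bounds = np.array([[-2.0, 3.0], [-4.0, 1.0]])            # assumed search ranges
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]

    print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())
    ```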

  3. Enforcement of entailment constraints in distributed service-based business processes.

    PubMed

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
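
    A minimal sketch of how mutual-exclusion and binding entailment constraints like those described above can be checked against a log of task executions is given below; the task names, constraint encoding, and in-memory log are illustrative assumptions standing in for the paper's model-driven WS-BPEL enforcement.

    ```python
    # Assumed constraint sets over task names.
    MUTUAL_EXCLUSION = [("approve_payment", "request_payment")]   # different subjects required
    BINDING = [("sign_contract", "archive_contract")]             # same subject required

    def check_entailment(executions):
        """executions: list of (task, subject) pairs recorded at runtime."""
        performer = {}
        for task, subject in executions:
            performer.setdefault(task, set()).add(subject)
        violations = []
        for a, b in MUTUAL_EXCLUSION:
            if performer.get(a, set()) & performer.get(b, set()):
                violations.append(f"mutual exclusion violated: {a} / {b}")
        for a, b in BINDING:
            if a in performer and b in performer and not (performer[a] & performer[b]):
                violations.append(f"binding violated: {a} must bind to {b}")
        return violations

    log = [("request_payment", "alice"), ("approve_payment", "alice"),
           ("sign_contract", "bob"), ("archive_contract", "carol")]
    print(check_entailment(log))   # both constraints are violated in this toy log
    ```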

  4. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications

    PubMed Central

    2012-01-01

    Background Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field Programmable Gate-Array (FPGA) and Graphics Processing Units (GPU). To the best of our knowledge, no implementation combines both CPU and extra accelerators, such as GPUs, to accelerate the Zuker algorithm applications. Results In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperate execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for CPU and GPU architecture. Conclusions Speedup of 15.93× over optimized multi-core SIMD CPU implementation and performance advantage of 16% over optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications. PMID:22369626

  5. Human Guidance Behavior Decomposition and Modeling

    NASA Astrophysics Data System (ADS)

    Feit, Andrew James

    Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.

  6. Understanding neuromotor strategy during functional upper extremity tasks using symbolic dynamics.

    PubMed

    Nathan, Dominic E; Guastello, Stephen J; Prost, Robert W; Jeutter, Dean C

    2012-01-01

    The ability to model and quantify brain activation patterns that pertain to natural neuromotor strategy of the upper extremities during functional task performance is critical to the development of therapeutic interventions such as neuroprosthetic devices. The mechanisms of information flow, activation sequence and patterns, and the interaction between anatomical regions of the brain that are specific to movement planning, intention and execution of voluntary upper extremity motor tasks were investigated here. This paper presents a novel method using symbolic dynamics (orbital decomposition) and nonlinear dynamic tools of entropy, self-organization and chaos to describe the underlying structure of activation shifts in regions of the brain that are involved with the cognitive aspects of functional upper extremity task performance. Several questions were addressed: (a) How is it possible to distinguish deterministic or causal patterns of activity in brain fMRI from those that are really random or non-contributory to the neuromotor control process? (b) Can the complexity of activation patterns over time be quantified? (c) What are the optimal ways of organizing fMRI data to preserve patterns of activation, activation levels, and extract meaningful temporal patterns as they evolve over time? Analysis was performed using data from a custom developed time resolved fMRI paradigm involving human subjects (N=18) who performed functional upper extremity motor tasks with varying time delays between the onset of intention and onset of actual movements. The results indicate that there is structure in the data that can be quantified through entropy and dimensional complexity metrics and statistical inference, and furthermore, orbital decomposition is sensitive in capturing the transition of states that correlate with the cognitive aspects of functional task performance.
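
    A minimal sketch of the symbolic-dynamics idea is shown below: a series of brain-state symbols is decomposed into overlapping subsequences ("orbits") of a given length, recurring patterns are collected, and their Shannon entropy is computed. The symbol sequence and string length C are illustrative assumptions; the full orbital-decomposition statistics (e.g., chi-square and dimensionality estimates) are not reproduced.

    ```python
    from collections import Counter
    from math import log2

    def orbital_entropy(symbols, C=2):
        """Shannon entropy of overlapping length-C subsequences of a symbol series."""
        orbits = [tuple(symbols[i:i + C]) for i in range(len(symbols) - C + 1)]
        counts = Counter(orbits)
        total = sum(counts.values())
        H = -sum((n / total) * log2(n / total) for n in counts.values())
        recurrent = {o: n for o, n in counts.items() if n > 1}   # repeating patterns only
        return H, recurrent

    # Assumed sequence of activation-state labels (e.g. dominant region per time bin).
    seq = list("ABABCABABCCABAB")
    H, recurrent = orbital_entropy(seq, C=2)
    print("entropy (bits):", round(H, 3))
    print("recurrent orbits:", recurrent)
    ```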

  7. Case study: Optimizing fault model input parameters using bio-inspired algorithms

    NASA Astrophysics Data System (ADS)

    Plucar, Jan; Grunt, Onřej; Zelinka, Ivan

    2017-07-01

    We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. This model is constructed using a Petri net approach and represents a dynamic model of the GSM network environment in the suburban areas of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters, we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self-Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.
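
    For illustration, Differential Evolution can be applied to such a parameter-fitting objective with an off-the-shelf implementation, as sketched below; the quadratic toy misfit and the bounds are placeholders for the Petri-net fault model, which is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    TARGET = np.array([0.15, 2.5, 40.0])      # assumed "true" fault-model parameters

    def objective(params):
        """Placeholder misfit between simulated and observed fault behaviour."""
        return float(np.sum((params - TARGET) ** 2))

    bounds = [(0.0, 1.0), (0.0, 10.0), (0.0, 100.0)]
    result = differential_evolution(objective, bounds, seed=1, maxiter=200, tol=1e-8)
    print("best parameters:", result.x, "misfit:", result.fun)
    ```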

  8. Long-Term Stability of Motor Cortical Activity: Implications for Brain Machine Interfaces and Optimal Feedback Control.

    PubMed

    Flint, Robert D; Scheid, Michael R; Wright, Zachary A; Solla, Sara A; Slutzky, Marc W

    2016-03-23

    The human motor system is capable of remarkably precise control of movements--consider the skill of professional baseball pitchers or surgeons. This precise control relies upon stable representations of movements in the brain. Here, we investigated the stability of cortical activity at multiple spatial and temporal scales by recording local field potentials (LFPs) and action potentials (multiunit spikes, MSPs) while two monkeys controlled a cursor either with their hand or directly from the brain using a brain-machine interface. LFPs and some MSPs were remarkably stable over time periods ranging from 3 d to over 3 years; overall, LFPs were significantly more stable than spikes. We then assessed whether the stability of all neural activity, or just a subset of activity, was necessary to achieve stable behavior. We showed that projections of neural activity into the subspace relevant to the task (the "task-relevant space") were significantly more stable than were projections into the task-irrelevant (or "task-null") space. This provides cortical evidence in support of the minimum intervention principle, which proposes that optimal feedback control (OFC) allows the brain to tightly control only activity in the task-relevant space while allowing activity in the task-irrelevant space to vary substantially from trial to trial. We found that the brain appears capable of maintaining stable movement representations for extremely long periods of time, particularly so for neural activity in the task-relevant space, which agrees with OFC predictions. It is unknown whether cortical signals are stable for more than a few weeks. Here, we demonstrate that motor cortical signals can exhibit high stability over several years. This result is particularly important to brain-machine interfaces because it could enable stable performance with infrequent recalibration. Although we can maintain movement accuracy over time, movement components that are unrelated to the goals of a task (such as elbow position during reaching) often vary from trial to trial. This is consistent with the minimum intervention principle of optimal feedback control. We provide evidence that the motor cortex acts according to this principle: cortical activity is more stable in the task-relevant space and more variable in the task-irrelevant space. Copyright © 2016 the authors 0270-6474/16/363623-10$15.00/0.
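
    A minimal sketch of splitting activity into task-relevant and task-null components in the sense used above: the task-relevant space is taken to be the subspace of activity that a linear decoder maps onto cursor kinematics, and variance is compared inside and outside that subspace. The synthetic data and the least-squares decoder are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_t, n_units = 2000, 30

    # Synthetic activity and a 2-D cursor signal that depends on a few units.
    X = rng.normal(size=(n_t, n_units))
    W_true = np.zeros((n_units, 2)); W_true[:5] = rng.normal(size=(5, 2))
    Y = X @ W_true + 0.1 * rng.normal(size=(n_t, 2))

    # Fit a linear decoder Y ~= X W, then project onto the decoder's subspace.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)             # (n_units, 2)
    Q, _ = np.linalg.qr(W)                                # orthonormal basis of task space
    P_task = Q @ Q.T                                      # projector onto task-relevant space
    P_null = np.eye(n_units) - P_task

    var_task = np.var(X @ P_task, axis=0).sum()
    var_null = np.var(X @ P_null, axis=0).sum()
    print("variance in task-relevant space:", round(var_task, 2))
    print("variance in task-null space:", round(var_null, 2))
    ```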

  9. Effects of Color Stimulation and Information on the Copying Performance of Attention-Problem Adolescents.

    ERIC Educational Resources Information Center

    Zentall, Sydney S.; And Others

    The optimal stimulation theory (which proposes that hyperactive children are more readily underaroused than nonhyperactive children and should thus derive greater gains from stimulation added to repetitive copying tasks than comparison children do) was tested with 16 adolescents rated high on attention and behavior problems and 16 controls. Matched pairs…

  10. Pilot/vehicle model analysis of visual and motion cue requirements in flight simulation. [helicopter hovering

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Zacharias, G.

    1980-01-01

    The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.

  11. What REALLY Works: Optimizing Classroom Discussions to Promote Comprehension and Critical-Analytic Thinking

    ERIC Educational Resources Information Center

    Murphy, P. Karen; Firetto, Carla M.; Wei, Liwei; Li, Mengyi; Croninger, Rachel M. V.

    2016-01-01

    Many American students struggle to perform even basic comprehension of text, such as locating information, determining the main idea, or supporting details of a story. Even more students are inadequately prepared to complete more complex tasks, such as critically or analytically interpreting information in text or making reasoned decisions from…

  12. Active learning: learning a motor skill without a coach.

    PubMed

    Huang, Vincent S; Shadmehr, Reza; Diedrichsen, Jörn

    2008-08-01

    When we learn a new skill (e.g., golf) without a coach, we are "active learners": we have to choose the specific components of the task on which to train (e.g., iron, driver, putter, etc.). What guides our selection of the training sequence? How do choices that people make compare with choices made by machine learning algorithms that attempt to optimize performance? We asked subjects to learn the novel dynamics of a robotic tool while moving it in four directions. They were instructed to choose their practice directions to maximize their performance in subsequent tests. We found that their choices were strongly influenced by motor errors: subjects tended to immediately repeat an action if that action had produced a large error. This strategy was correlated with better performance on test trials. However, even when participants performed perfectly on a movement, they did not avoid repeating that movement. The probability of repeating an action did not drop below chance even when no errors were observed. This behavior led to suboptimal performance. It also violated a strong prediction of current machine learning algorithms, which solve the active learning problem by choosing a training sequence that will maximally reduce the learner's uncertainty about the task. While we show that these algorithms do not provide an adequate description of human behavior, our results suggest ways to improve human motor learning by helping people choose an optimal training sequence.

  13. Spatio-temporal Hotelling observer for signal detection from image sequences

    PubMed Central

    Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.

    2010-01-01

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494

  15. Dynamic Causal Modeling of Preclinical Autosomal-Dominant Alzheimer's Disease.

    PubMed

    Penny, Will; Iglesias-Fuster, Jorge; Quiroz, Yakeel T; Lopera, Francisco Javier; Bobes, Maria A

    2018-03-16

    Dynamic causal modeling (DCM) is a framework for making inferences about changes in brain connectivity using neuroimaging data. We fitted DCMs to high-density EEG data from subjects performing a semantic picture matching task. The subjects are carriers of the PSEN1 mutation, which leads to early onset Alzheimer's disease, but at the time of EEG acquisition in 1999, these subjects were cognitively unimpaired. We asked 1) what is the optimal model architecture for explaining the event-related potentials in this population, 2) which connections are different between this Presymptomatic Carrier (PreC) group and a Non-Carrier (NonC) group performing the same task, and 3) which network connections are predictive of subsequent Mini-Mental State Exam (MMSE) trajectories. We found 1) a model with hierarchical rather than lateral connections between hemispheres to be optimal, 2) that a pathway from right inferotemporal cortex (IT) to left medial temporal lobe (MTL) was preferentially activated by incongruent items for subjects in the PreC group but not the NonC group, and 3) that increased effective connectivity among left MTL, right IT, and right MTL was predictive of subsequent MMSE scores.

  16. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
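
    For orientation, the wake-sleep delta rule mentioned above can be written compactly for a tiny binary Helmholtz machine in rate/sample form (rather than the spiking-neuron implementation developed in the paper); the network size, learning rate, and toy data below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 6, 3, 0.05

    # Generative parameters: hidden prior bias, hidden->visible weights and bias.
    b_h = np.zeros(n_hid)
    G = np.zeros((n_hid, n_vis)); b_v = np.zeros(n_vis)
    # Recognition parameters: visible->hidden weights and bias.
    R = np.zeros((n_vis, n_hid)); c_h = np.zeros(n_hid)

    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    sample = lambda p: (rng.random(p.shape) < p).astype(float)

    # Toy binary data: two prototype patterns with 5% bit-flip noise.
    protos = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
    data = np.abs(protos[rng.integers(0, 2, size=2000)] - (rng.random((2000, n_vis)) < 0.05))

    for v in data:
        # Wake phase: recognize a hidden state, then improve the generative model.
        h = sample(sig(v @ R + c_h))
        p_v = sig(h @ G + b_v)
        G += lr * np.outer(h, v - p_v); b_v += lr * (v - p_v)
        b_h += lr * (h - sig(b_h))
        # Sleep phase: dream from the generative model, then improve recognition.
        h_d = sample(sig(b_h))
        v_d = sample(sig(h_d @ G + b_v))
        q_h = sig(v_d @ R + c_h)
        R += lr * np.outer(v_d, h_d - q_h); c_h += lr * (h_d - q_h)

    print("reconstruction of the two prototypes:")
    for v in protos:
        print(np.round(sig(sample(sig(v @ R + c_h)) @ G + b_v), 2))
    ```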

  18. Optimal Control-Based Adaptive NN Design for a Class of Nonlinear Discrete-Time Block-Triangular Systems.

    PubMed

    Liu, Yan-Jun; Tong, Shaocheng

    2016-11-01

    In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., there are both state and input couplings and nonaffine functions to be included in every equation of each subsystem. The design objective is to provide a control scheme that not only guarantees the stability of the systems, but also achieves optimal control performance. The main contribution of this paper is that it achieves, for the first time, optimal performance for such a class of systems. Owing to the interactions among subsystems, constructing an optimal control signal is a difficult task. The design ideas are as follows: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function can be approximated by using an action network and a critic network, respectively; and 3) an optimal control signal is constructed with weight update rules designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.

  19. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization is presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software was then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained by such users, conclusions were developed and are provided.

  1. Optimal Measurement Tasks and Their Physical Realizations

    NASA Astrophysics Data System (ADS)

    Yerokhin, Vadim

    This thesis reflects works previously published by the author and materials hitherto unpublished on the subject of quantum information theory. Particularly, results in optimal discrimination, cloning, and separation of quantum states, and their relationships, are discussed. Our interest lies in the scenario where we are given one of two quantum states prepared with a known a-priori probability. We are given full information about the states and are assigned the task of performing an optimal measurement on the incoming state. Given that none of these tasks is in general possible to perform perfectly we must choose a figure of merit to optimize, and as we shall see there is always a trade-off between competing figures of merit, such as the likelihood of getting the desired result versus the quality of the result. For state discrimination the competing figures of merit are the success rate of the measurement, the errors involved, and the inconclusiveness. Similarly increasing the separation between states comes at a cost of less frequent successful applications of the separation protocol. For cloning, aside from successfully producing clones we are also interested in the fidelity of the clones compared to the original state, which is a measure of the quality of the clones. Because all quantum operations obey the same set of conditions for evolution one may expect similar restrictions on disparate measurement strategies, and our work shows a deep connection between all three branches, with cloning and separation asymptotically converging to state discrimination. Via Neumark's theorem, our description of these unitary processes can be implemented using single-photon interferometry with linear optical devices. Amazingly any quantum mechanical evolution may be decomposed as an experiment involving only lasers, beamsplitters, phase-shifters and mirrors. Such readily available tools allow for verification of the aforementioned protocols and we build upon existing results to derive explicit setups that the experimentalist may build.

  2. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.
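    The core quantity in this family of models is the expected information gain from turning each card, i.e., the expected reduction in uncertainty about whether the conditional rule (dependence model) or an independence model is true. The sketch below is a simplified illustration of that idea with hypothetical rarity parameters P(p) and P(q); it is not Hattori's revised model and does not include the selection tendency function.

```python
import numpy as np

def entropy(ps):
    ps = np.array([p for p in ps if p > 0])
    return -np.sum(ps * np.log2(ps))

def expected_information_gain(outcome_probs_per_model, priors=(0.5, 0.5)):
    """Expected reduction in uncertainty about which model is true after
    turning a card.  outcome_probs_per_model[m][o] = P(outcome o | model m,
    visible face of the card)."""
    priors = np.array(priors)
    probs = np.array(outcome_probs_per_model)     # shape (models, outcomes)
    p_outcome = priors @ probs                    # marginal P(outcome)
    h_prior = entropy(priors)
    h_post = 0.0
    for o, po in enumerate(p_outcome):
        if po == 0:
            continue
        posterior = priors * probs[:, o] / po     # Bayes' rule
        h_post += po * entropy(posterior)
    return h_prior - h_post

# Dependence model MD: P(q|p) = 1; independence model MI: P(q|p) = P(q) = b.
a, b = 0.2, 0.3   # hypothetical rarity parameters P(p), P(q), with b >= a
cards = {
    "p":     [[1.0, 0.0],                          [b, 1 - b]],   # hidden: q / not-q
    "not-p": [[(b - a) / (1 - a), (1 - b) / (1 - a)], [b, 1 - b]],
    "q":     [[a / b, (b - a) / b],                [a, 1 - a]],   # hidden: p / not-p
    "not-q": [[0.0, 1.0],                          [a, 1 - a]],
}
for card, (p_md, p_mi) in cards.items():
    print(card, round(expected_information_gain([p_md, p_mi]), 4))
```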

  3. Optimization: Old Dogs and New Tasks

    ERIC Educational Resources Information Center

    Kaplan, Jennifer J.; Otten, Samuel

    2012-01-01

    This article introduces an optimization task with a ready-made motivating question that may be paraphrased as follows: "Are you smarter than a Welsh corgi?" The authors present the task along with descriptions of the ways in which two groups of students approached it. These group vignettes reveal as much about the nature of calculus students'…

  4. Self-Efficacy and Interest: Experimental Studies of Optimal Incompetence.

    ERIC Educational Resources Information Center

    Silvia, Paul J.

    2003-01-01

    To test the optimal incompetence hypothesis (high self-efficacy lowers task interest), 30 subjects rated interest, perceived difficulty, and confidence of success in different tasks. In study 2, 33 subjects completed a dart-game task in easy, moderate, and difficult conditions. In both, interest was a quadratic function of self-efficacy,…

  5. High-speed prediction of crystal structures for organic molecules

    NASA Astrophysics Data System (ADS)

    Obata, Shigeaki; Goto, Hitoshi

    2015-02-01

    We developed a master-worker type parallel algorithm for allocating crystal structure optimization tasks to distributed compute nodes, in order to improve the performance of simulations for crystal structure prediction. Performance experiments were carried out on the TUT-ADSIM supercomputer system (HITACHI HA8000-tc/HT210). The experimental results show that our parallel algorithm achieved speed-ups of 214 and 179 times using 256 processor cores on the crystal structure optimizations in predictions of crystal structures for 3-aza-bicyclo(3.3.1)nonane-2,4-dione and 2-diazo-3,5-cyclohexadiene-1-one, respectively. We expect that this parallel algorithm can reduce the computational cost of any crystal structure prediction.
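    The abstract does not give the implementation details of the distributed master-worker scheme, so the following is only a shared-memory sketch of the pattern using Python's multiprocessing pool. The optimize_structure function is a placeholder standing in for one real geometry optimization of a candidate packing.

```python
from multiprocessing import Pool
import random

def optimize_structure(seed):
    """Placeholder for one crystal-structure optimization task: perturb a
    trial structure and return (seed, final 'energy').  The real task would
    run a local geometry optimization for one candidate packing."""
    rng = random.Random(seed)
    energy = min(rng.gauss(0.0, 1.0) for _ in range(1000))
    return seed, energy

if __name__ == "__main__":
    trial_structures = range(64)          # one task per candidate structure
    with Pool(processes=8) as pool:       # workers pull tasks dynamically
        results = pool.imap_unordered(optimize_structure, trial_structures)
        best = min(results, key=lambda r: r[1])
    print("best trial structure:", best)
```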

  6. Caffeine Enhances Memory Performance in Young Adults during Their Non-optimal Time of Day

    PubMed Central

    Sherman, Stephanie M.; Buckley, Timothy P.; Baena, Elsa; Ryan, Lee

    2016-01-01

    Many college students struggle to perform well on exams in the early morning. Although students drink caffeinated beverages to feel more awake, it is unclear whether these actually improve performance. After consuming coffee (caffeinated or decaffeinated), college-age adults completed implicit and explicit memory tasks in the early morning and late afternoon (Experiment 1). During the morning, participants ingesting caffeine demonstrated a striking improvement in explicit memory, but not implicit memory. Caffeine did not alter memory performance in the afternoon. In Experiment 2, participants engaged in cardiovascular exercise in order to examine whether increases in physiological arousal similarly improved memory. Despite clear increases in physiological arousal, exercise did not improve memory performance compared to a stretching control condition. These results suggest that caffeine has a specific benefit for memory during students’ non-optimal time of day – early morning. These findings have real-world implications for students taking morning exams. PMID:27895607

  7. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    PubMed Central

    Robinson, Y. Harold; Rajaram, M.

    2015-01-01

    A mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade network performance, and multipath selection is a challenging task for improving network lifetime. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds optimal loop-free, link-disjoint paths in a MANET and is used as the path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths are found using PSO to seek nodes with better link quality in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality. The proposed scheme discovers multiple loop-free paths using the PSO technique. PMID:26819966
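    The routing and CTRNN details are beyond what the abstract gives, but the PSO update rule itself is standard. Below is a minimal, generic PSO on a toy objective; the inertia and acceleration constants and the quadratic cost are illustrative assumptions, not the EMPSO scheme's parameters.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle keeps a velocity and
    its personal best, and is attracted toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for a path-quality cost (lower is better).
best_x, best_val = pso(lambda z: np.sum(z ** 2), dim=3)
print(best_x, best_val)
```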

  8. Varied Practice in Laparoscopy Training: Beneficial Learning Stimulation or Cognitive Overload?

    PubMed

    Spruit, Edward N; Kleijweg, Luca; Band, Guido P H; Hamming, Jaap F

    2016-01-01

    Determining the optimal design for surgical skills training is an ongoing research endeavor. In the education literature, varied practice is listed as a positive intervention to improve the acquisition of knowledge and motor skills. In the current study we tested the effectiveness of a varied practice intervention during laparoscopy training. Twenty-four trainees (control group) without prior experience received a 3-week laparoscopic skills training utilizing four basic training tasks and one advanced training task. Twenty-eight trainees (experimental group) received the same training with a random training task schedule, more frequent task switching, and inverted viewing conditions on the four basic training tasks, but not the advanced task. Results showed inferior performance of the experimental group on the four basic laparoscopy tasks during training, at the end of training, and at a 2-month retention session. We assume the inverted viewing conditions led to the deterioration of learning in the experimental group, because no significant differences were found between groups on the only task that had not been practiced under inverted viewing conditions: the advanced laparoscopic task. Potential moderating effects of inter-task similarity, task complexity, and trainee characteristics are discussed.

  9. Varied Practice in Laparoscopy Training: Beneficial Learning Stimulation or Cognitive Overload?

    PubMed Central

    Spruit, Edward N.; Kleijweg, Luca; Band, Guido P. H.; Hamming, Jaap F.

    2016-01-01

    Determining the optimal design for surgical skills training is an ongoing research endeavor. In the education literature, varied practice is listed as a positive intervention to improve the acquisition of knowledge and motor skills. In the current study we tested the effectiveness of a varied practice intervention during laparoscopy training. Twenty-four trainees (control group) without prior experience received a 3-week laparoscopic skills training utilizing four basic training tasks and one advanced training task. Twenty-eight trainees (experimental group) received the same training with a random training task schedule, more frequent task switching, and inverted viewing conditions on the four basic training tasks, but not the advanced task. Results showed inferior performance of the experimental group on the four basic laparoscopy tasks during training, at the end of training, and at a 2-month retention session. We assume the inverted viewing conditions led to the deterioration of learning in the experimental group, because no significant differences were found between groups on the only task that had not been practiced under inverted viewing conditions: the advanced laparoscopic task. Potential moderating effects of inter-task similarity, task complexity, and trainee characteristics are discussed. PMID:27242599

  10. Group interaction and flight crew performance

    NASA Technical Reports Server (NTRS)

    Foushee, H. Clayton; Helmreich, Robert L.

    1988-01-01

    The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.

  11. Optimal multisensory decision-making in a reaction-time task.

    PubMed

    Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre

    2014-06-14

    Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (a reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
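    A standard reference point for "statistically near-optimal" multisensory integration is reliability-weighted (inverse-variance) cue combination; the accumulation model in the paper extends this to evidence gathered over time. The snippet below shows only the static reliability-weighted combination of a visual and a vestibular heading estimate, with illustrative numbers rather than the paper's data.

```python
import numpy as np

def combine_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Reliability-weighted (inverse-variance) combination of two cues.
    The optimal combined estimate weights each cue by its reliability
    (1/variance), and its variance is smaller than either cue alone."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)
    return mu, var

# Example: visual heading 4 deg (sigma = 2 deg), vestibular 0 deg (sigma = 4 deg).
print(combine_cues(4.0, 2.0 ** 2, 0.0, 4.0 ** 2))  # -> (3.2, 3.2)
```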

  12. Individual differences in voluntary alcohol intake in rats: relationship with impulsivity, decision making and Pavlovian conditioned approach.

    PubMed

    Spoelder, Marcia; Flores Dourojeanni, Jacques P; de Git, Kathy C G; Baars, Annemarie M; Lesscher, Heidi M B; Vanderschuren, Louk J M J

    2017-07-01

    Alcohol use disorder (AUD) has been associated with suboptimal decision making, exaggerated impulsivity, and aberrant responses to reward-paired cues, but the relationship between AUD and these behaviors is incompletely understood. This study aims to assess decision making, impulsivity, and Pavlovian-conditioned approach in rats that voluntarily consume low (LD) or high (HD) amounts of alcohol. LD and HD were tested in the rat gambling task (rGT) or the delayed reward task (DRT). Next, the effect of alcohol (0-1.0 g/kg) was tested in these tasks. Pavlovian-conditioned approach (PCA) was assessed both prior to and after intermittent alcohol access (IAA). Principal component analyses were performed to identify relationships between the most important behavioral parameters. HD showed more optimal decision making in the rGT. In the DRT, HD transiently showed reduced impulsive choice. In both LD and HD, alcohol treatment increased optimal decision making in the rGT and increased impulsive choice in the DRT. PCA prior to and after IAA was comparable for LD and HD. When PCA was tested after IAA only, HD showed more sign-tracking behavior. The principal component analyses indicated dimensional relationships between alcohol intake, impulsivity, and sign-tracking behavior in the PCA task after IAA. HD showed more efficient performance in the rGT and DRT. Moreover, alcohol consumption enhanced approach behavior to reward-predictive cues, but sign-tracking did not predict the level of alcohol consumption. Taken together, these findings suggest that high levels of voluntary alcohol intake are associated with enhanced cue- and reward-driven behavior.

  13. Optimal beamforming in ultrasound using the ideal observer.

    PubMed

    Abbey, Craig K; Nguyen, Nghia Q; Insana, Michael F

    2010-08-01

    Beamforming of received pulse-echo data generally involves the compression of signals from multiple channels within an aperture. This compression is irreversible, and therefore allows the possibility that information relevant for performing a diagnostic task is irretrievably lost. The purpose of this study was to evaluate information transfer in beamforming using a previously developed ideal observer model to quantify the diagnostic information relevant to performing a task. We describe an elaborated statistical model of image formation for fixed-focus transmission and single-channel reception within a moving aperture, and we use this model on a panel of tasks related to breast sonography to evaluate receive-beamforming approaches that optimize the transfer of information. Under the assumption that acquisition noise is well described as an additive wide-band Gaussian white-noise process, we show that signal compression across receive-aperture channels after a 2-D matched-filtering operation results in no loss of diagnostic information. Across tasks, the matched-filter beamformer yields roughly twice as much information in the subsequent radio-frequency signal as standard delay-and-sum beamforming. We also show that for this matched filter, 68% of the information gain can be attributed to the phase of the matched filter and 21% to its amplitude. A 1-D matched filtering along axial lines shows no advantage over delay-and-sum, suggesting an important role for incorporating correlations across different aperture windows in beamforming. We also show that post-compression processing before the computation of an envelope is necessary to pass the diagnostic information in the beamformed radio-frequency signal through to the final envelope image.
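    As a baseline for the matched-filter comparison in the abstract, delay-and-sum receive beamforming simply time-aligns the per-channel echoes for a chosen focal point and averages them. The sketch below is a bare-bones single-focus version with made-up geometry and noise data; it is not the authors' ideal-observer pipeline.

```python
import numpy as np

def delay_and_sum(channel_data, element_x, focus, c=1540.0, fs=40e6):
    """Single-point delay-and-sum receive beamforming.
    channel_data: (n_elements, n_samples) RF traces
    element_x:    lateral element positions (m)
    focus:        (x, z) focal point (m); c: sound speed (m/s); fs: sample rate (Hz)."""
    fx, fz = focus
    out = 0.0
    for trace, ex in zip(channel_data, element_x):
        # two-way path: transmit depth to focus plus focus back to this element
        rx_dist = np.hypot(fx - ex, fz)
        delay_samples = int(round((fz + rx_dist) / c * fs))
        if delay_samples < len(trace):
            out += trace[delay_samples]
    return out / len(channel_data)

# Tiny synthetic example: 8 elements, white-noise RF traces.
rng = np.random.default_rng(1)
data = rng.standard_normal((8, 4096))
elements = np.linspace(-3e-3, 3e-3, 8)
print(delay_and_sum(data, elements, focus=(0.0, 20e-3)))
```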

  14. Multidisciplinary Optimization for Aerospace Using Genetic Optimization

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Hahn, Edward E.; Herrera, Claudia Y.

    2007-01-01

    In support of the ARMD guidelines, NASA's Dryden Flight Research Center is developing a multidisciplinary design and optimization tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Optimization has made its way into many mainstream applications. For example, NASTRAN(TradeMark) has its solution sequence 200 for design optimization, and MATLAB(TradeMark) has an Optimization Toolbox. Other packages, such as the ZAERO(TradeMark) aeroelastic panel code and the CFL3D(TradeMark) Navier-Stokes solver, have no built-in optimizer. The goal of the tool development is to generate a central executive capable of using disparate software packages in a cross-platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. A provided figure (Figure 1) shows a typical set of tools and their relation to the central executive. Optimization can take place within each individual tool, in a loop between the executive and the tool, or both.

  15. Application of a COTS Resource Optimization Framework to the SSN Sensor Tasking Domain - Part I: Problem Definition

    NASA Astrophysics Data System (ADS)

    Tran, T.

    With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
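    MOROUGA itself is proprietary COTS software, so only a generic sketch is possible. The snippet below shows the bare genetic-algorithm loop applied to a toy sensor-tasking assignment: each gene assigns one tracking task to one sensor, and the fitness rewards served tasks while penalizing overloaded sensors. The problem sizes, capacities, and fitness function are illustrative assumptions, not MOROUGA's design.

```python
import random

N_TASKS, N_SENSORS, CAPACITY = 40, 5, 10   # toy problem: 40 tracks, 5 sensors

def fitness(assignment):
    """Higher is better: reward tasks within sensor capacity, penalize overload."""
    load = [0] * N_SENSORS
    for sensor in assignment:
        load[sensor] += 1
    served = sum(min(l, CAPACITY) for l in load)
    overload = sum(max(l - CAPACITY, 0) for l in load)
    return served - 2 * overload

def evolve(pop_size=60, generations=200, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_SENSORS) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_TASKS)            # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_TASKS):                   # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.randrange(N_SENSORS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))
```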

  16. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal of this task is to develop an Extensible File System (ELFS). ELFS attacks the following problems: (1) providing high-bandwidth performance architectures; (2) reducing the cognitive burden faced by application programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.

  17. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
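    The abstract's two key ideas, dynamic assignment of independent MOC tracks to threads and a wide vectorizable inner loop over energy groups, can be illustrated in miniature. The sketch below uses a thread pool for dynamic load balancing and NumPy for the vectorized attenuation-plus-source update; it is a schematic stand-in with made-up cross sections, not the proxy applications described in the paper.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N_GROUPS = 64           # energy groups: the wide, vectorizable inner dimension

def sweep_track(track):
    """Transport sweep along one characteristic track.  The loop over
    segments is sequential, but each update is vectorized over all energy
    groups at once (illustrative attenuation-plus-source update)."""
    rng = np.random.default_rng(track)
    psi = np.zeros(N_GROUPS)
    sigma_t = rng.uniform(0.1, 1.0, N_GROUPS)          # per-group total cross section
    q = rng.uniform(0.0, 1.0, N_GROUPS)                # per-group source
    for _ in range(200):                               # segments on this track
        s = rng.uniform(0.01, 0.1)                     # segment length
        att = np.exp(-sigma_t * s)
        psi = psi * att + (q / sigma_t) * (1.0 - att)
    return psi.sum()

# Tracks are independent, so they can be handed to threads dynamically.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(sweep_track, range(256)))
print("aggregate tally:", total)
```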

  18. Optimizing the Distribution of Leg Muscles for Vertical Jumping

    PubMed Central

    Wong, Jeremy D.; Bobbert, Maarten F.; van Soest, Arthur J.; Gribble, Paul L.; Kistemaker, Dinant A.

    2016-01-01

    A goal of biomechanics and motor control is to understand the design of the human musculoskeletal system. Here we investigated human functional morphology by making predictions about the muscle volume distribution that is optimal for a specific motor task. We examined a well-studied and relatively simple human movement, vertical jumping. We investigated how high a human could jump if muscle volume were optimized for jumping, and determined how the optimal parameters improve performance. We used a four-link inverted pendulum model of human vertical jumping actuated by Hill-type muscles that approximates skilled human performance well. We optimized muscle volume by allowing the cross-sectional area and muscle fiber optimum length to be changed for each muscle, while maintaining constant total muscle volume. We observed, perhaps surprisingly, that the reference model, based on human anthropometric data, is relatively good for vertical jumping; it achieves 90% of the jump height predicted by a model with muscles designed specifically for jumping. Alteration of cross-sectional areas, which determine the maximum force deliverable by the muscles, constitutes the majority of the improvement in jump height. The optimal distribution results in large vastus, gastrocnemius, and hamstrings muscles that deliver more work, while producing a kinematic pattern essentially identical to the reference model. Work output is increased by removing muscle from rectus femoris, which cannot do work on the skeleton given its moment arm at the hip and the joint excursions during push-off. The gluteus composes a disproportionate amount of muscle volume, and jump height is improved by redistributing it to other muscles. This approach represents a way to test hypotheses about optimal human functional morphology. Future studies may extend this approach to address other morphological questions in ethological tasks such as locomotion, and feature other sets of parameters such as properties of the skeletal segments. PMID:26919645
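    The study's optimization redistributes a fixed total muscle volume by adjusting per-muscle parameters. The forward-dynamics jump simulation is far too large to reproduce from an abstract, so the snippet below only shows the shape of such a problem: maximize a placeholder "jump height" surrogate subject to a constant-total-volume equality constraint, using scipy. The objective, muscle count, and weights are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n_muscles = 6
v0 = np.full(n_muscles, 1.0)                  # reference volume of each muscle
total_volume = v0.sum()                       # held constant, as in the study
work_per_volume = np.array([0.8, 1.2, 1.0, 0.6, 1.4, 0.9])  # hypothetical weights

def neg_jump_height(v):
    """Placeholder surrogate with diminishing returns per muscle (sqrt).
    The real study evaluates a forward-dynamic jump for each candidate."""
    return -np.sum(work_per_volume * np.sqrt(v))

res = minimize(
    neg_jump_height,
    x0=v0,
    bounds=[(0.05, None)] * n_muscles,        # no muscle vanishes entirely
    constraints=[{"type": "eq", "fun": lambda v: v.sum() - total_volume}],
)
print("optimal volume distribution:", np.round(res.x, 3))
```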

  19. Investigation of statistical iterative reconstruction for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Glick, Stephen J.

    2013-01-01

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high-quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low-noise images with high-contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance for various radiation doses. In fact, with certain parameter values, PML images acquired at a 2 mGy dose are better than FBP-reconstructed images acquired at a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose. PMID:23927318
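    The roughness penalties compared here act on differences between neighboring voxels. A hyperbolic potential behaves quadratically for small differences and linearly for large ones, which is what lets it preserve microcalcification edges. The sketch below evaluates one common parameterization of such a penalty over an image; the paper's exact functional form and the names beta and delta are assumptions for illustration.

```python
import numpy as np

def hyperbolic_potential(t, delta):
    """Edge-preserving roughness potential: approximately quadratic for
    |t| << delta and approximately linear (TV-like) for |t| >> delta.
    (One common parameterization; the paper's exact form may differ.)"""
    return delta ** 2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)

def roughness_penalty(image, beta, delta):
    """Penalty summed over horizontal and vertical neighbor differences,
    weighted by beta (the strength parameter that requires tuning)."""
    dx = np.diff(image, axis=0)
    dy = np.diff(image, axis=1)
    return beta * (hyperbolic_potential(dx, delta).sum()
                   + hyperbolic_potential(dy, delta).sum())

img = np.random.default_rng(0).random((64, 64))
print(roughness_penalty(img, beta=1e-3, delta=1e-2))
```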

  20. A Comparison of Three Determinants of an Engagement Index for Use in a Simulated Flight Environment

    NASA Technical Reports Server (NTRS)

    Hitt, James M., II

    1995-01-01

    The following report details a project design that is to be completed by the end of the year. Determining how engaged a person is in a task is rather difficult. There are many different ways to assess engagement; one such method is to use psychophysiological measures. The current study focuses on three determinants of an engagement index proposed by researchers at NASA-Langley (Pope, A. T., Bogart, E. H., and Bartolome, D. S., 1995). The index (20 Beta/(Alpha+Theta)) uses EEG power bands to determine a person's level of engagement while performing a compensatory tracking task. The tracking task switches between manual and automatic modes. Participants each experience both positive and negative feedback within each of the three trials. The tracking task is altered in terms of difficulty depending on the participant's current engagement index. The rationale of this study is to determine the optimal level of engagement for peak performance. The three determinants are based on an absolute index, which differs from past research that used a slope index.
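    The engagement index quoted in the abstract is a simple ratio of EEG band powers, scaled by 20 as written here. Assuming the conventional band edges (theta 4-8 Hz, alpha 8-13 Hz, beta 13-22 Hz), which the report does not spell out, it can be computed from a power spectral density as in this sketch.

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Approximate band power: sum the PSD bins that fall in [lo, hi)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

def engagement_index(eeg, fs, scale=20.0):
    """Engagement index from one EEG channel: scale * beta / (alpha + theta),
    using conventional band edges (theta 4-8 Hz, alpha 8-13 Hz, beta 13-22 Hz);
    the report quoted above does not specify its exact band definitions."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 22)
    return scale * beta / (alpha + theta)

# Synthetic one-channel example: 30 s of noise sampled at 256 Hz.
fs = 256
eeg = np.random.default_rng(0).standard_normal(30 * fs)
print(engagement_index(eeg, fs))
```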
