DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalyst developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 2 through 5. 232 figs., 19 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) preliminary techno-economic assessment of the UCC catalyst/process system; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Accomplishments are reported for Tasks 2 through 5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The objective of the contract is to consolidate the advances made during the previous contract in the conversion of syngas to motor fuels using Molecular Sieve-containing catalysts and to demonstrate the practical utility and economic value of the new catalyst/process systems with appropriate laboratory runs. Work on the program is divided into the following six tasks: (1) preparation of a detailed work plan covering the entire performance of the contract; (2) techno-economic studies that will supplement those that are presently being carried out by MITRE; (3) optimization of the most promising catalysts developed under prior contract; (4) optimization of the UCC catalyst system in a manner that will give it the longest possible service life; (5) optimization of a UCC process/catalyst system based upon a tubular reactor with a recycle loop containing the most promising catalyst developed under Tasks 3 and 4 studies; and (6) economic evaluation of the optimal performance found under Task 5 for the UCC process/catalyst system. Progress reports are presented for Tasks 1, 3, 4, and 5.
Multi-tasking arbitration and behaviour design for human-interactive robots
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei
2013-05-01
Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, collision avoidance and conversation with humans. This article presents a design framework for the control and recognition processes to meet these requirements, taking into account stochastic human behaviour. The proposed design method first introduces a Petri net for synchronisation of multiple tasks. The Petri net formulation is converted to Markov decision processes and processed in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can be designed in a systematic manner in a state estimation and optimisation framework from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.
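The pipeline described above (Petri net, converted to a Markov decision process, solved as shortest-time optimal control) can be illustrated on a toy arbitration problem. This is a minimal sketch, not RI-MAN's controller: the two tasks, the safety-hazard probability, and the unit step cost are invented assumptions, and plain value iteration stands in for the paper's optimization.

```python
import itertools

# Toy Petri-net-like arbitration: two tasks plus a safety condition that can
# be lost with probability P_HAZARD after each step (assumed). States are
# markings (task1_done, task2_done, safety_ok); actions are the transitions.
# The MDP is solved by value iteration for minimum expected completion time.
ACTIONS = ("confirm_safety", "do_task1", "do_task2")
P_HAZARD = 0.2

def step(state, action):
    """Return a list of (probability, next_state, cost) triples."""
    t1, t2, safe = state
    cost = 1.0                        # each control step takes one time unit
    if action == "confirm_safety":
        return [(1.0, (t1, t2, True), cost)]
    if not safe:                      # tasks are blocked until safety is re-confirmed
        return [(1.0, state, cost)]
    done = (True, t2, True) if action == "do_task1" else (t1, True, True)
    return [(1 - P_HAZARD, done, cost),
            (P_HAZARD, (done[0], done[1], False), cost)]

states = list(itertools.product([False, True], repeat=3))
V = {s: 0.0 for s in states}
for _ in range(200):                  # value iteration; both tasks done = terminal
    for s in states:
        if s[0] and s[1]:
            V[s] = 0.0
            continue
        V[s] = min(sum(p * (c + V[ns]) for p, ns, c in step(s, a)) for a in ACTIONS)

policy = {s: min(ACTIONS, key=lambda a: sum(p * (c + V[ns]) for p, ns, c in step(s, a)))
          for s in states if not (s[0] and s[1])}
print(policy[(False, False, False)])  # -> 'confirm_safety': re-confirm before acting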
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
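A minimal sketch of the branched-architecture idea described above, in PyTorch: early layers shared across tasks, followed by separate speech and music pathways. The layer sizes, output dimensions, and cochleagram input shape are illustrative assumptions, not the network of Kell et al. (2018).

```python
import torch
import torch.nn as nn

# Toy analogue of a task-optimized branched network: a shared trunk feeding
# two task-specific heads (word recognition and genre recognition).
class BranchedAudioNet(nn.Module):
    def __init__(self, n_words=100, n_genres=20):
        super().__init__()
        self.shared = nn.Sequential(            # early shared processing
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        def head(n_out):                        # one task-specific pathway
            return nn.Sequential(nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                 nn.Linear(32 * 16, n_out))
        self.speech_head, self.music_head = head(n_words), head(n_genres)

    def forward(self, cochleagram):             # batch x 1 x freq x time
        z = self.shared(cochleagram)
        return self.speech_head(z), self.music_head(z)

net = BranchedAudioNet()
word_logits, genre_logits = net(torch.randn(2, 1, 64, 64))
print(word_logits.shape, genre_logits.shape)    # [2, 100] and [2, 20]
```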
Video game practice optimizes executive control skills in dual-task and task switching situations.
Strobach, Tilo; Frensch, Peter A; Schubert, Torsten
2012-05-01
We examined the relation of action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). For two types of experimental paradigms, dual-task and task switching, we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice, when compared to non-gamers with practice on a puzzle game, clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.
Energy Supply- Production of Fuel from Agricultural and Animal Waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabriel Miller
2009-03-25
The Society for Energy and Environmental Research (SEER) was funded in March 2004 by the Department of Energy, under grant DE-FG-36-04GO14268, to produce a study, and oversee construction and implementation, for the thermo-chemical production of fuel from agricultural and animal waste. The grant focuses on the thermal conversion process (TCP) of Changing World Technologies (CWT) of West Hempstead, NY, which converts animal residues and industrial food processing byproducts into fuels and, as an additional product, fertilizers. A commercial plant was designed and built by CWT, partially using grant funds, in Carthage, Missouri, to process animal residues from a nearby turkey processing plant. The DOE sponsored program consisted of four tasks. These were: Task 1 Optimization of the CWT Plant in Carthage - This task focused on advancing and optimizing the process plant operated by CWT that converts organic waste to fuel and energy. Task 2 Characterize and Validate Fuels Produced by CWT - This task focused on testing of bio-derived hydrocarbon fuels from the Carthage plant in power generating equipment to determine the regulatory compliance of emissions and overall performance of the fuel. Task 3 Characterize Mixed Waste Streams - This task focused on studies performed at Princeton University to better characterize mixed waste incoming streams from animal and vegetable residues. Task 4 Fundamental Research in Waste Processing Technologies - This task focused on studies performed at the Massachusetts Institute of Technology (MIT) on the chemical reformation reaction of agricultural biomass compounds in a hydrothermal medium. Many of the challenges to optimize, improve and perfect the technology, equipment and processes in order to provide an economically viable means of creating sustainable energy were identified in the DOE Stage Gate Review, whose summary report was issued on July 30, 2004. This summary report appears herein as Appendix 1, and the findings of the report formed the basis for much of the subsequent work under the grant. An explanation of the process is presented, as well as the completed work on the four tasks.
Barriers to success: physical separation optimizes event-file retrieval in shared workspaces.
Klempova, Bibiana; Liepelt, Roman
2017-07-08
Sharing tasks with other persons can simplify our work and life, but seeing and hearing other people's actions may also be very distracting. The joint Simon effect (JSE) is a standard measure of referential response coding when two persons share a Simon task. Sequential modulations of the joint Simon effect (smJSE) are interpreted as a measure of event-file processing containing stimulus information, response information and information about the just relevant control-state active in a given social situation. This study tested effects of physical (Experiment 1) and virtual (Experiment 2) separation of shared workspaces on referential coding and event-file processing using a joint Simon task. In Experiment 1, participants performed this task in individual (go-nogo), joint and standard Simon task conditions with and without a transparent curtain (physical separation) placed along the imagined vertical midline of the monitor. In Experiment 2, participants performed the same tasks with and without receiving background music (virtual separation). For response times, physical separation enhanced event-file retrieval, indicated by an enlarged smJSE in the joint Simon task with the curtain than without it (Experiment 1), but did not change referential response coding. In line with this, we also found evidence for enhanced event-file processing through physical separation in the joint Simon task for error rates. Virtual separation affected neither event-file processing nor referential coding, but generally slowed response times in the joint Simon task. For errors, virtual separation hampered event-file processing in the joint Simon task. For the cognitively more demanding standard two-choice Simon task, we found music to have a degrading effect on event-file retrieval for response times. Our findings suggest that adding a physical separation optimizes event-file processing in shared workspaces, while music seems to lead to a more relaxed task processing mode under shared task conditions. In addition, music had an interfering impact on joint error processing and, more generally, when dealing with a more complex task in isolation.
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1995-12-31
The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
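The DCF criteria named above (NPV, IRR, payback period) are standard calculations; a minimal sketch follows. The cash flows and the 10% discount rate are invented for illustration and are not taken from the project's spreadsheet model.

```python
# Discounted-cash-flow evaluation of a hypothetical onshore flood project.
cash_flows = [-2_000_000, 600_000, 700_000, 800_000, 800_000, 500_000]  # year 0..5
rate = 0.10

# Net present value: discount each year's cash flow back to year 0.
npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cfs, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return by bisection on NPV(r) = 0 (assumes one sign change)."""
    f = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cfs))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2.0

# Payback period: first year in which cumulative cash flow turns non-negative.
cum, payback = 0.0, None
for t, cf in enumerate(cash_flows):
    cum += cf
    if payback is None and cum >= 0:
        payback = t

print(f"NPV = ${npv:,.0f}, IRR = {irr(cash_flows):.1%}, payback = year {payback}")
```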
A dual-task investigation of automaticity in visual word processing
NASA Technical Reports Server (NTRS)
McCann, R. S.; Remington, R. W.; Van Selst, M.
2000-01-01
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.
Efficient multitasking: parallel versus serial processing of multiple tasks
Fischer, Rico; Plessow, Franziska
2015-01-01
In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 is the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
A Framework to Design and Optimize Chemical Flooding Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 is the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2004-11-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables and to develop the response surface maps to identify the significant variables from each module. The objective of Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and to interface the economic model with UTCHEM production output. Task 4 is the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
Interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies. The optimum strategy is also compared to the strategies used by subjects in a pilot experiment.
Advanced automation for in-space vehicle processing
NASA Technical Reports Server (NTRS)
Sklar, Michael; Wegerif, D.
1990-01-01
The primary objective of this 3-year planned study is to assure that the fully evolved Space Station Freedom (SSF) can support automated processing of exploratory mission vehicles. Current study assessments show that the extravehicular activity (EVA) and, to some extent, intravehicular activity (IVA) manpower requirements for the required processing tasks far exceed the available manpower. Furthermore, many processing tasks are either hazardous operations or they exceed EVA capability. Thus, automation is essential for SSF transportation node functionality. Here, advanced automation represents the replacement of human-performed tasks beyond the planned baseline automated tasks. Both physical tasks such as manipulation, assembly and actuation, and cognitive tasks such as visual inspection, monitoring and diagnosis, and task planning are considered. During this first year of activity, both the Phobos/Gateway Mars Expedition and Lunar Evolution missions proposed by the Office of Exploration have been evaluated. A methodology for choosing optimal tasks to be automated has been developed. Processing tasks for both missions have been ranked on the basis of automation potential. The underlying concept in evaluating and describing processing tasks has been the use of a common set of 'primitive' task descriptions. Primitive or standard tasks have been developed both for manual or crew processing and for automated machine processing.
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and a control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) after the subjects were randomly assigned to two groups, with group 1 using an optimal filter for two weeks and group 2 using a grey filter for two weeks; (iii) after the groups were crossed over, with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Amemiya, S; Noji, T; Kubota, N; Nishijima, T; Kita, I
2014-04-18
Deliberation between possible options before making a decision is crucial to responding with an optimal choice. However, the neural mechanisms regulating this deliberative decision-making process are still unclear. Recent studies have proposed that the locus coeruleus-noradrenaline (LC-NA) system plays a role in attention, behavioral flexibility, and exploration, which contribute to the search for an optimal choice under uncertain situations. In the present study, we examined whether the LC-NA system relates to the deliberative process in a T-maze spatial decision-making task in rats. To quantify deliberation in rats, we recorded vicarious trial-and-error behavior (VTE), which is considered to reflect a deliberative process exploring optimal choices. In experiment 1, we manipulated the difficulty of choice by varying the amount of reward pellets between the two maze arms (0 vs. 4, 1 vs. 3, 2 vs. 2). A difficulty-dependent increase in VTE was accompanied by a reduction of choice bias toward the high reward arm and an increase in time required to select one of the two arms in the more difficult manipulation. In addition, the increase of c-Fos-positive NA neurons in the LC depended on the task difficulty and the amount of c-Fos expression in LC-NA neurons positively correlated with the occurrence of VTE. In experiment 2, we inhibited LC-NA activity by injection of clonidine, an agonist of the alpha2 autoreceptor, during a decision-making task (1 vs. 3). The clonidine injection suppressed occurrence of VTE in the early phase of the task and subsequently impaired a valuable choice later in the task. These results suggest that the LC-NA system regulates the deliberative process during decision-making. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Computer control improves ethylene plant operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, B.D.; Parnis, M.
ICI Australia ordered a turnkey 250,000-tpy ethylene plant to be built at the Botany site, Sydney, Australia. Following a feasibility study, an additional order was placed for a process computer system for advanced process control and optimization. This article gives a broad outline of the process computer tasks, how the tasks were implemented, what problems were met, what lessons were learned and what results were achieved.
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
NASA Astrophysics Data System (ADS)
Xing, Xi; Rey-de-Castro, Roberto; Rabitz, Herschel
2014-12-01
Optimally shaped femtosecond laser pulses can often be effectively identified in adaptive feedback quantum control experiments, but elucidating the underlying control mechanism can be a difficult task requiring significant additional analysis. We introduce landscape Hessian analysis (LHA) as a practical experimental tool to aid in elucidating control mechanism insights. This technique is applied to the dissociative ionization of CH2BrI using shaped fs laser pulses for optimization of the absolute yields of ionic fragments as well as their ratios for the competing processes of breaking the C-Br and C-I bonds. The experimental results suggest that these nominally complex problems can be reduced to a low-dimensional control space with insights into the control mechanisms. While the optimal yield for some fragments is dominated by a non-resonant intensity-driven process, the optimal generation of other fragments may be explained by a non-resonant process coupled to few-level resonant dynamics. Theoretical analysis and modeling are consistent with the experimental observations.
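In outline, landscape Hessian analysis estimates the Hessian of the measured yield at the optimal pulse parameters and examines its spectrum: a few dominant eigenvalues indicate an effectively low-dimensional control space. A minimal sketch follows, using a synthetic quadratic landscape as a stand-in for real yield measurements; the dimensionality and curvatures are invented.

```python
import numpy as np

# Landscape Hessian analysis on a toy yield landscape J(p) near its optimum
# p* = 0: finite-difference Hessian, then eigendecomposition.
rng = np.random.default_rng(0)
n = 8                                                  # pulse-shaper parameters
U = np.linalg.qr(rng.standard_normal((n, n)))[0]       # random eigenbasis
curv = np.array([-5.0, -2.0] + [-0.01] * (n - 2))      # two stiff directions only
J = lambda p: float(p @ U @ np.diag(curv) @ U.T @ p)   # stand-in for measured yield

def hessian(f, p0, h=1e-3):
    """Central-difference Hessian estimate at p0."""
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(p0+ei+ej) - f(p0+ei-ej) - f(p0-ei+ej) + f(p0-ei-ej)) / (4*h*h)
    return H

eigvals = np.linalg.eigvalsh(hessian(J, np.zeros(n)))
print(np.round(eigvals, 2))   # two dominant curvatures, the rest near zero
```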
Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.
ERIC Educational Resources Information Center
Atkinson, Richard C.
This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…
Steele, Catherine C.; Peterson, Jennifer R.; Marshall, Andrew T.; Stuebing, Sarah L.; Kirkpatrick, Kimberly
2017-01-01
The nucleus accumbens core (NAc) has long been recognized as an important contributor to the computation of reward value that is critical for impulsive choice behavior. Impulsive choice refers to choosing a smaller-sooner (SS) over a larger-later (LL) reward when the LL is more optimal in terms of the rate of reward delivery. Two experiments examined the role of the NAc in impulsive choice and its component processes of delay and magnitude processing. Experiment 1 delivered an impulsive choice task with manipulations of LL reward magnitude, followed by a reward magnitude discrimination task. Experiment 2 tested impulsive choice under manipulations of LL delay, followed by temporal bisection and progressive interval tasks. NAc lesions, in comparison to sham control lesions, produced suboptimal preferences that resulted in lower reward earning rates, and led to reduced sensitivity to magnitude and delay within the impulsive choice task. The secondary tasks revealed intact reward magnitude and delay discrimination abilities, but the lesion rats persisted in responding more as the progressive interval increased during the session. The results suggest that the NAc is most critical for demonstrating good sensitivity to magnitude and delay, and adjusting behavior accordingly. Ultimately, the NAc lesions induced suboptimal choice behavior rather than simply promoting impulsive choice, suggesting that an intact NAc is necessary for optimal decision making. PMID:29146281
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_{ij}(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis: a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
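For orientation, the NPW detectability index has the standard discrete form d'^2 = [Σ_f |W(f)|² MTF²(f)]² / Σ_f |W(f)|² MTF²(f) NPS(f), where W is the task function. The sketch below grid-searches a two-parameter space for one directional task; the anisotropic MTF and NPS models and the parameter ranges are invented stand-ins, not measured properties of the ATM or AD filters.

```python
import numpy as np

# Task-based parameter selection via the NPW detectability index on a
# discretized 2-D frequency grid. P = (p1, p2) are a hypothetical LSC
# method's two parameters at one spatial location.
f = np.linspace(-1, 1, 64)
fx, fy = np.meshgrid(f, f)

def dprime(p1, p2, task_angle):
    mtf = np.exp(-(p1 * fx**2 + p2 * fy**2))        # directional smoothing (assumed)
    nps = 1e-3 + 0.1 * p2 * np.exp(-4 * fx**2)      # streak-like noise (assumed)
    # Task function: narrow band at frequency 0.5 along the task's direction.
    w = np.exp(-((fx*np.cos(task_angle) + fy*np.sin(task_angle) - 0.5)**2) / 0.01)
    num = (w**2 * mtf**2).sum() ** 2
    den = (w**2 * mtf**2 * nps).sum()
    return np.sqrt(num / den)

grid = [(p1, p2) for p1 in np.linspace(0.1, 3, 30) for p2 in np.linspace(0.1, 3, 30)]
best = max(grid, key=lambda p: dprime(*p, task_angle=0.0))  # e.g., vertical bars
print("optimal (p1, p2) for this task:", np.round(best, 2))
```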
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
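For intuition, here is a minimal sketch of the trade-off that dynamic voltage scaling exploits, under the common convex model in which energy per cycle grows roughly as v² while execution time shrinks as 1/v. It illustrates the principle only; the workloads and deadline are invented, and this is not the paper's intra-task controller.

```python
# Pick the lowest common speed that still meets a hard deadline.
cycles = [4e6, 2e6, 6e6]     # pending non-preemptive tasks (CPU cycles, assumed)
deadline = 0.05              # seconds until the last hard deadline (assumed)

# Minimizing sum(c_i * v_i^2) subject to sum(c_i / v_i) = D gives equal
# speeds (setting the Lagrangian derivative to zero yields v_i^3 = const):
v = sum(cycles) / deadline                # cycles per second
energy = sum(c * v**2 for c in cycles)    # arbitrary energy units (k = 1)

print(f"common speed = {v/1e9:.3f} GHz-equivalent, energy ~ {energy:.3e}")
```

Running slower than v misses the deadline; running faster wastes energy quadratically, which is why the deadline constraint is met with equality.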
Database Management in Design Optimization.
1983-10-30
processing program(s): engaged in the task of preparing input data for the (finite-element) analysis and optimization phases; primary storage: the main ... and extraction of data from the database for further processing. It can be divided into two phases: a) The process of selection and identification of ... user wishes to stop the reading or the writing process. The meaning of END depends on the method specified for retrieving data: a) Row-wise - then
Honing process optimization algorithms
NASA Astrophysics Data System (ADS)
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed, and such important concepts as the optimization of honing operations, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide region and can be used to operate the CC743 CNC machine.
Conditioning of Model Identification Task in Immune Inspired Optimizer SILO
NASA Astrophysics Data System (ADS)
Wojdan, K.; Swirski, K.; Warchol, M.; Maciorowski, M.
2009-10-01
Methods that provide good conditioning of the model identification task in the immune-inspired steady-state controller SILO (Stochastic Immune Layer Optimizer) are presented in this paper. These methods are implemented in a model-based optimization algorithm. The first method uses a safe model to assure that the gains of the process's model can be estimated. The second method is responsible for the elimination of potential linear dependences between columns of the observation matrix. Moreover, new results from one SILO implementation in a Polish power plant are presented. They confirm the high efficiency of the presented solution in solving technical problems.
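A minimal sketch of the second conditioning method described above: detecting and dropping (near-)linearly dependent columns of the observation matrix before gain estimation, here via rank-revealing QR. SILO's actual implementation is not given in the abstract, so the data, threshold, and use of QR are assumptions.

```python
import numpy as np
from scipy.linalg import qr

# Synthetic observation matrix whose 5th column is a linear combination of
# columns 0 and 2, i.e., exactly the kind of dependence that ruins
# conditioning of least-squares gain estimation.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
X = np.column_stack([X, X[:, 0] + 2 * X[:, 2]])

Q, R, piv = qr(X, pivoting=True)          # columns permuted by decreasing pivot
diag = np.abs(np.diag(R))
keep = piv[diag > 1e-8 * diag[0]]         # drop columns with negligible pivots
print("keep columns:", sorted(keep))      # one of the dependent trio is dropped
```

Estimating the model gains on `X[:, keep]` then gives a well-conditioned least-squares problem.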
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
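A minimal, single-objective sketch of GA-based resource allocation in this spirit: chromosomes assign tracking tasks to sensors (or defer them), and fitness is the satisfied demand under per-sensor capacity constraints. All sizes, capacities, and GA settings are invented; MOROUGA itself is multi-objective and considerably richer.

```python
import random

random.seed(3)
N_TASKS, N_SENSORS, CAPACITY = 30, 4, 9                   # tasks per sensor (assumed)
demand = [random.randint(1, 5) for _ in range(N_TASKS)]   # tracking priority (assumed)

def fitness(chrom):
    """chrom[i] = sensor assigned to task i, or -1 if the task is deferred."""
    load, score = [0] * N_SENSORS, 0
    for task, s in enumerate(chrom):
        if s >= 0 and load[s] < CAPACITY:
            load[s] += 1
            score += demand[task]
    return score

def mutate(chrom, rate=0.05):
    return [random.randrange(-1, N_SENSORS) if random.random() < rate else g
            for g in chrom]

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(-1, N_SENSORS) for _ in range(N_TASKS)] for _ in range(40)]
for _ in range(200):                       # elitist generational GA
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(30)]

print("satisfied demand:", fitness(pop[0]), "of", sum(demand))
```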
Lewis, Richard L; Shvartsman, Michael; Singh, Satinder
2013-07-01
We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture. Copyright © 2013 Cognitive Science Society, Inc.
Reasoning and Memory: People Make Varied Use of the Information Available in Working Memory
ERIC Educational Resources Information Center
Hardman, Kyle O.; Cowan, Nelson
2016-01-01
Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had…
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
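The optimal strategy tested here is the standard reliability-weighted combination of Gaussian cues: each cue is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. A minimal sketch with invented numbers:

```python
# Reliability-weighted (statistically optimal) fusion of two Gaussian cues.
form_mean, form_sd = 0.7, 0.2      # identity evidence from facial form (assumed)
motion_mean, motion_sd = 0.4, 0.4  # noisier evidence from facial motion (assumed)

# Weight each cue by its inverse variance (its reliability).
w_form = form_sd**-2 / (form_sd**-2 + motion_sd**-2)
fused_mean = w_form * form_mean + (1 - w_form) * motion_mean
fused_sd = (form_sd**-2 + motion_sd**-2) ** -0.5

print(f"fused estimate = {fused_mean:.2f}, sd = {fused_sd:.2f}")  # 0.64, sd 0.18
```

Note that the fused standard deviation (0.18) is below that of the better cue (0.2), the hallmark of optimal integration.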
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing that facilitates the automation of workflow applications. Among the features distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent, and the optimization of task-level scheduling in cloud workflow systems is a hot topic. As scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they are prone to premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated cost; it lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is consistently lower than that of the two representative counterparts. PMID:26357510
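A minimal sketch of a CPSO of the kind described: a logistic-map chaotic sequence drives initialization, and the inertia weight adapts to each particle's estimated cost. The objective function, constants, and exact adaptation rule are illustrative assumptions, not the paper's workflow-scheduling model.

```python
import random

def logistic(x):                  # chaotic sequence generator, x in (0, 1)
    return 4.0 * x * (1.0 - x)

def cost(x):                      # stand-in for the schedule-cost estimate
    return sum(v * v for v in x)

DIM, SWARM, ITERS = 5, 20, 100
chaos = random.random()
parts, vels = [], []
for _ in range(SWARM):            # chaotic initialization in [-5, 5]^DIM
    row = []
    for _ in range(DIM):
        chaos = logistic(chaos)
        row.append(10 * chaos - 5)
    parts.append(row)
    vels.append([0.0] * DIM)

pbest = [p[:] for p in parts]
gbest = min(pbest, key=cost)
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
for _ in range(ITERS):
    costs = [cost(p) for p in parts]
    avg = sum(costs) / SWARM
    for i, p in enumerate(parts):
        # Adaptive inertia: worse-than-average particles keep exploring.
        w = w_max if costs[i] > avg else w_min + (w_max - w_min) * costs[i] / (avg + 1e-12)
        for d in range(DIM):
            vels[i][d] = (w * vels[i][d]
                          + c1 * random.random() * (pbest[i][d] - p[d])
                          + c2 * random.random() * (gbest[d] - p[d]))
            p[d] += vels[i][d]
        if cost(p) < cost(pbest[i]):
            pbest[i] = p[:]
    gbest = min(pbest, key=cost)

print(f"best cost: {cost(gbest):.4f}")
```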
Altered behavioral and neural responsiveness to counterfactual gains in the elderly.
Tobia, Michael J; Guo, Rong; Gläscher, Jan; Schwarze, Ulrike; Brassen, Stefanie; Büchel, Christian; Obermayer, Klaus; Sommer, Tobias
2016-06-01
Counterfactual information processing refers to the consideration of events that did not occur, in comparison to those actually experienced, in order to determine optimal actions, and can be formulated as computational learning signals referred to as fictive prediction errors. Decision making and the neural circuitry for counterfactual processing are altered in healthy elderly adults. This experiment investigated age differences in neural systems for decision making with knowledge of counterfactual outcomes. Two groups of healthy adult participants, young (N = 30; ages 19-30 years) and elderly (N = 19; ages 65-80 years), were scanned with fMRI during 240 trials of a strategic sequential investment task in which a particular strategy of differentially weighting counterfactual gains and losses during valuation is associated with more optimal performance. Elderly participants earned significantly less than young adults, weighted counterfactual consequences and exploited task knowledge differently, and exhibited altered activity in a fronto-striatal circuit while making choices, compared to young adults. The degree to which task knowledge was exploited was positively correlated with modulation of neural activity by expected value in the vmPFC for young adults, but not in the elderly. These findings demonstrate that elderly participants' poor task performance may be related to different counterfactual processing.
Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration
Cole, Michael W.
2016-01-01
The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904
Optimal resources for children's surgical care in the United States.
2014-03-01
In summary, the Task Force does understand that change is difficult and, in the circumstance of the US health care environment, quite complex. Having acknowledged this, the Task Force firmly believes that if optimal resource standards are clear, providers will act in the best interests of their patients, infants, and children undergoing surgery in this circumstance. We intend to provide evidence to this point, to define optimal resources, and to facilitate this process. The hope and the underlying intent of these recommendations is to insure that every infant and child undergoing a surgical procedure in the United States will receive his or her care in an environment that offers all of the facilities, equipment, and, most especially, access to the professional providers who have the appropriate background and training to provide optimal care. This must be done while balancing the issues of access, staff, and the need to improve the value proposition. The Task Force is unanimous in its intent to advocate for this agenda.
Decision Making in Concurrent Multitasking: Do People Adapt to Task Interference?
Nijboer, Menno; Taatgen, Niels A.; Brands, Annelies; Borst, Jelmer P.; van Rijn, Hedderik
2013-01-01
While multitasking has received a great deal of attention from researchers, we still know little about how well people adapt their behavior to multitasking demands. In three experiments, participants were presented with a multicolumn subtraction task, which required working memory in half of the trials. This primary task had to be combined with a secondary task requiring either working memory or visual attention, resulting in different types of interference. Before each trial, participants were asked to choose which secondary task they wanted to perform concurrently with the primary task. We predicted that if people seek to maximize performance or minimize effort required to perform the dual task, they choose task combinations that minimize interference. While performance data showed that the predicted optimal task combinations indeed resulted in minimal interference between tasks, the preferential choice data showed that a third of participants did not show any adaptation, and for the remainder it took a considerable number of trials before the optimal task combinations were chosen consistently. On the basis of these results we argue that, while in principle people are able to adapt their behavior according to multitasking demands, selection of the most efficient combination of strategies is not an automatic process. PMID:24244527
GPU computing in medical physics: a review.
Pratx, Guillem; Xing, Lei
2011-05-01
The graphics processing unit (GPU) has emerged as a competitive platform for computing massively parallel problems. Many computing applications in medical physics can be formulated as data-parallel tasks that exploit the capabilities of the GPU for reducing processing times. The authors review the basic principles of GPU computing as well as the main performance optimization techniques, and survey existing applications in three areas of medical physics, namely image reconstruction, dose calculation and treatment plan optimization, and image processing.
Power plant maintenance scheduling using ant colony optimization: an improved formulation
NASA Astrophysics Data System (ADS)
Foong, Wai Kuan; Maier, Holger; Simpson, Angus
2008-04-01
It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
A derived heuristics based multi-objective optimization procedure for micro-grid scheduling
NASA Astrophysics Data System (ADS)
Li, Xin; Deb, Kalyanmoy; Fang, Yanjun
2017-06-01
With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generator's rated power, several other practicalities, such as limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics in such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling problem is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs and then the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers.
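The core computation the Bayesian reader posits can be sketched in a few lines of Python: a frequency-based prior over a lexicon is multiplied by the likelihood of each noisy perceptual sample until some word's posterior crosses a decision threshold. The three-word lexicon, Gaussian letter-code likelihood, and all parameter values below are illustrative assumptions, not the published model:

    import numpy as np

    # Minimal Bayesian-reader-style sketch: frequent words start closer to the
    # decision bound, qualitatively reproducing the word-frequency effect.
    lexicon = ["cat", "cot", "cut"]
    freq_prior = np.array([0.6, 0.1, 0.3])          # hypothetical frequencies
    NOISE = 10.0

    def likelihood(sample, word):
        # Gaussian likelihood of noisy letter codes given the word's letters.
        codes = np.array([ord(c) for c in word], dtype=float)
        return np.exp(-np.sum((sample - codes) ** 2) / (2 * NOISE ** 2))

    rng = np.random.default_rng(0)
    true_codes = np.array([ord(c) for c in "cat"], dtype=float)
    posterior = freq_prior.copy()
    for step in range(1, 100):
        sample = true_codes + rng.normal(0, NOISE, size=3)  # one noisy glimpse
        posterior *= np.array([likelihood(sample, w) for w in lexicon])
        posterior /= posterior.sum()
        if posterior.max() > 0.95:                           # decision threshold
            print(f"identified '{lexicon[posterior.argmax()]}' after {step} samples")
            break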
Task-dependent vestibular feedback responses in reaching.
Keyser, Johannes; Medendorp, W Pieter; Selen, Luc P J
2017-07-01
When reaching for an earth-fixed object during self-rotation, the motor system should appropriately integrate vestibular signals and sensory predictions to compensate for the intervening motion and its induced inertial forces. While it is well established that this integration occurs rapidly, it is unknown whether vestibular feedback is processed in a manner dependent on the behavioral goal. Here, we studied whether vestibular signals evoke fixed responses with the aim of preserving the hand trajectory in space or are processed more flexibly, correcting trajectories only in task-relevant spatial dimensions. We used galvanic vestibular stimulation to perturb reaching movements toward a narrow or a wide target. Results show that the same vestibular stimulation led to smaller trajectory corrections to the wide than the narrow target. We interpret this reduced compensation as a task-dependent modulation of vestibular feedback responses, tuned to minimally intervene with the task-irrelevant dimension of the reach. These task-dependent vestibular feedback corrections are in accordance with a central prediction of optimal feedback control theory and mirror the sophistication seen in feedback responses to mechanical and visual perturbations of the upper limb. NEW & NOTEWORTHY Correcting limb movements for external perturbations is a hallmark of flexible sensorimotor behavior. While visual and mechanical perturbations are corrected in a task-dependent manner, it is unclear whether a vestibular perturbation, naturally arising when the body moves, is selectively processed in reach control. We show, using galvanic vestibular stimulation, that reach corrections to vestibular perturbations are task dependent, consistent with a prediction of optimal feedback control theory.
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
Young children do not succeed in choice tasks that imply evaluating chances.
Girotto, Vittorio; Fontanari, Laura; Gonzalez, Michel; Vallortigara, Giorgio; Blaye, Agnès
2016-07-01
Preverbal infants manifest probabilistic intuitions in their reactions to the outcomes of simple physical processes and in their choices. Their ability conflicts with the evidence that, before the age of about 5 years, children's verbal judgments do not reveal probability understanding. To assess these conflicting results, three studies tested 3- to 5-year-olds on choice tasks on which infants perform successfully. The results showed that children of all age groups made optimal choices in tasks that did not require forming probabilistic expectations. In probabilistic tasks, however, only 5-year-olds made optimal choices. Younger children performed at random and/or were guided by superficial heuristics. These results suggest caution in interpreting infants' ability to evaluate chance, and indicate that the development of this ability may not follow a linear trajectory.
Optimization of a Sample Processing Protocol for Recovery of ...
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps.
Performance Management and Optimization of Semiconductor Design Projects
NASA Astrophysics Data System (ADS)
Hinrichs, Neele; Olbrich, Markus; Barke, Erich
2010-06-01
The semiconductor industry is characterized by fast technological changes and small time-to-market windows. Improving productivity is the key factor in standing up to competitors and thus successfully persisting in the market. In this paper a Performance Management System for analyzing, optimizing and evaluating chip design projects is presented. A task graph representation is used to optimize the design process regarding time, cost and workload of resources. Key Performance Indicators are defined in the main areas of cost, profit, resources, process and technical output to appraise the project.
Optimism as a Prior Belief about the Probability of Future Reward
Kalra, Aditi; Seriès, Peggy
2014-01-01
Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences choices between visual targets as learning about their association with reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e., its influence reduces with increasing experience. Contrary to findings in the literature related to unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098
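The abstract's central claim, that an optimistic prior behaves probabilistically and loses influence with experience, can be illustrated with a standard Beta-Bernoulli update (all numbers hypothetical, not from the study):

    # Sketch of "optimism as a prior": a Beta(a, b) prior over reward
    # probability. An optimist starts with a > b. As trials accumulate,
    # the data dominate and the prior's influence shrinks.
    def posterior_mean(a_prior, b_prior, rewards, trials):
        return (a_prior + rewards) / (a_prior + b_prior + trials)

    true_p = 0.3
    for n in (0, 10, 100, 1000):
        rewards = round(true_p * n)                   # idealized outcome counts
        optimist = posterior_mean(8, 2, rewards, n)   # optimistic prior, mean 0.8
        neutral = posterior_mean(1, 1, rewards, n)    # flat prior
        print(f"n={n:4d} optimist={optimist:.2f} neutral={neutral:.2f}")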
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
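The second (parameter-tuning) loop described above can be sketched as a Monte Carlo sweep over a process-noise parameter for a toy one-dimensional constant-velocity Kalman filter. The model matrices and candidate values are illustrative, not ENFAD's:

    import numpy as np

    # For a fixed state selection, vary a filter parameter (scalar process
    # noise q) and keep the value with the best Monte Carlo tracking error.
    rng = np.random.default_rng(1)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # position-only measurement
    R = np.array([[4.0]])                    # measurement noise variance

    def run_filter(q, steps=100):
        Q = q * np.array([[0.25, 0.5], [0.5, 1.0]])
        x_true = np.zeros(2)
        x, P = np.zeros(2), np.eye(2) * 10
        err = 0.0
        for _ in range(steps):
            x_true = F @ x_true + rng.multivariate_normal([0, 0], 0.1 * np.eye(2))
            z = H @ x_true + rng.normal(0, 2.0, size=1)
            x, P = F @ x, F @ P @ F.T + Q                    # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (z - H @ x)).ravel()                # update
            P = (np.eye(2) - K @ H) @ P
            err += (x[0] - x_true[0]) ** 2
        return np.sqrt(err / steps)

    scores = {q: np.mean([run_filter(q) for _ in range(20)])
              for q in (0.01, 0.1, 1.0)}
    print(min(scores, key=scores.get), scores)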
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1994-09-01
The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration and well constraints on surfactant floods. In this report, we show three dimensional field scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator called UTCHEM.
NASA Astrophysics Data System (ADS)
Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.
2010-03-01
We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
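A minimal Python sketch of the DD process and its speed-accuracy tradeoff follows: Euler-Maruyama integration of dx = A dt + c dW until a bound at ±z is crossed, with accuracy and mean RT reported as z varies. The parameter values are illustrative:

    import numpy as np

    # Drift-diffusion sketch: raising the threshold z trades speed for
    # accuracy, the balance the article analyzes via reward rate.
    def simulate(A=0.2, c=1.0, z=1.0, dt=0.001, trials=1000, seed=0):
        rng = np.random.default_rng(seed)
        correct, rts = 0, []
        for _ in range(trials):
            x, t = 0.0, 0.0
            while abs(x) < z:
                x += A * dt + c * np.sqrt(dt) * rng.normal()
                t += dt
            correct += x >= z              # upper bound = correct choice
            rts.append(t)
        return correct / trials, float(np.mean(rts))

    for z in (0.5, 1.0, 2.0):
        acc, rt = simulate(z=z)
        print(f"z={z}: accuracy={acc:.3f}, mean RT={rt:.2f}s")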
NASA Astrophysics Data System (ADS)
Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.
1991-03-01
To meet the ever-increasing demand of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level and qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (real-time expert system) and ULTRAMAX (a tool for sequential process optimization).
New approaches to optimization in aerospace conceptual design
NASA Technical Reports Server (NTRS)
Gage, Peter J.
1995-01-01
Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed-complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
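Of the three strategies, elimination by aspects is the easiest to sketch: alternatives are screened one attribute at a time in order of importance, dropping any that fail the current cutoff. The divert-airport data and thresholds below are hypothetical, used only to show the mechanics:

    # Elimination-by-aspects sketch: screen alternatives attribute by
    # attribute, in order of importance, until one option remains.
    options = {
        "airport_A": {"weather": 0.9, "runway": 0.8, "fuel_margin": 0.6},
        "airport_B": {"weather": 0.5, "runway": 0.9, "fuel_margin": 0.9},
        "airport_C": {"weather": 0.8, "runway": 0.4, "fuel_margin": 0.8},
    }
    aspects = [("weather", 0.7), ("runway", 0.7), ("fuel_margin", 0.5)]

    remaining = dict(options)
    for aspect, cutoff in aspects:
        survivors = {k: v for k, v in remaining.items() if v[aspect] >= cutoff}
        if survivors:                 # never eliminate every option
            remaining = survivors
        if len(remaining) == 1:
            break
    print("choice:", next(iter(remaining)))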
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-05-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory.
Specialty Task Force: A Strategic Component to Electronic Health Record (EHR) Optimization.
Romero, Mary Rachel; Staub, Allison
2016-01-01
The post-implementation stage comes after an electronic health record (EHR) deployment. Analysts and end users deal with the reality that some of the concepts and designs initially planned and created may not be complementary to the workflow, creating anxiety, dissatisfaction, and failure with early adoption of the system. Problems encountered during deployment are numerous and can vary from simple to complex. Redundant ticket submission creates a backlog for Information Technology personnel, resulting in delays in resolving concerns with the EHR system. The process of optimization allows for evaluation of the system and reassessment of users' needs. A solid and well-executed optimization infrastructure can help minimize unexpected end-user disruptions and help tailor the system to meet regulatory agency goals and practice standards. A well-devised plan to resolve problems during post-implementation is necessary for cost containment and to streamline communication efforts. Creating a specialty-specific collaborative task force is efficacious and expedites resolution of users' concerns through a more structured process.
NASA Astrophysics Data System (ADS)
Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.
2018-05-01
The article deals with decision making for transport tasks using search iterations in the management of motor transport processes. An approach to the optimal selection of the best option for specific situations in the management of complex multi-criteria transport processes is suggested.
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
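The linear structure the abstract refers to can be shown in a toy example: with desirability z = exp(-v), the Bellman equation becomes z = exp(-q) Pz, solvable by simple iteration, and the optimal policy reweights the passive dynamics by z. The five-state chain and its costs below are illustrative, not the paper's robot task:

    import numpy as np

    # LMDP sketch (after Todorov): solve for desirability z, then read off
    # the value function v = -log z and the optimal controlled transitions.
    n = 5
    q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])       # state costs; goal is free
    P = np.zeros((n, n))                           # passive random-walk dynamics
    for s in range(n - 1):
        P[s, max(s - 1, 0)] += 0.5
        P[s, s + 1] += 0.5
    P[n - 1, n - 1] = 1.0                          # goal state is absorbing

    z = np.ones(n)
    for _ in range(1000):
        z = np.exp(-q) * (P @ z)
        z[n - 1] = 1.0                             # boundary: exp(-0) at the goal
    v = -np.log(z)                                 # optimal value function
    u = P * z / (P @ z)[:, None]                   # optimal transition probabilities
    print(np.round(v, 2))
    print(np.round(u, 2))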
Template optimization and transfer in perceptual learning.
Kurki, Ilmari; Hyvärinen, Aapo; Saarinen, Jussi
2016-08-01
We studied how learning changes the processing of a low-level Gabor stimulus, using a classification-image method (psychophysical reverse correlation) and a task where observers discriminated between slight differences in the phase (relative alignment) of a target Gabor in visual noise. The method estimates the internal “template” that describes how the visual system weights the input information for decisions. One popular idea has been that learning makes the template more like an ideal Bayesian weighting; however, the evidence has been indirect. We used a new regression technique to directly estimate the template weight change and to test whether the direction of reweighting is significantly different from an optimal learning strategy. The subjects trained on the task for six daily sessions, and we tested the transfer of training to a target in an orthogonal orientation. Strong learning and partial transfer were observed. We tested whether task precision (difficulty) had an effect on template change and transfer: observers trained on either a high-precision (small, 60° phase difference) or a low-precision (180°) task. Task precision did not have an effect on the amount of template change or transfer, suggesting that task precision per se does not determine whether learning generalizes. Classification images show that training made observers use more task-relevant features and unlearn some irrelevant features. The transfer templates resembled partially optimized versions of the templates in the training sessions. The direction of template change resembles ideal learning significantly but not completely. The amount of template change was highly correlated with the amount of learning.
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and the other tasks affect the optimal strategies.
The economic production of alcohol fuels from coal-derived synthesis gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kugler, E.L.; Dadyburjor, D.B.; Yang, R.Y.K.
1995-12-31
The objectives of this project are to: (1) discover, study and evaluate novel heterogeneous catalytic systems for the production of oxygenated fuel enhancers from synthesis gas; specifically, alternative methods of preparing catalysts are to be investigated, and novel catalysts, including sulfur-tolerant ones, are to be pursued (Task 1); (2) explore, analytically and on the bench scale, novel reactor and process concepts for use in converting syngas to liquid fuel products (Task 1); (3) simulate by computer the most energy-efficient and economically efficient process for converting coal to energy, with primary focus on converting syngas to fuel alcohols (Task 2); (4) develop on the bench scale the best holistic combination of chemistry, catalyst, reactor and total process configuration, integrated with the overall coal conversion process, to achieve economic optimization for the conversion of syngas to liquid products within the framework of achieving the maximum cost-effective transformation of coal to energy equivalents (Tasks 1 and 2); and (5) evaluate the combustion, emission and performance characteristics of fuel alcohols and blends of alcohols with petroleum-based fuels (Task 2).
Identification of Biokinetic Models Using the Concept of Extents.
Mašić, Alma; Srinivasan, Sriniketh; Billeter, Julien; Bonvin, Dominique; Villez, Kris
2017-07-05
The development of a wide array of process technologies to enable the shift from conventional biological wastewater treatment processes to resource recovery systems is matched by an increasing demand for predictive capabilities. Mathematical models are excellent tools to meet this demand. However, obtaining reliable and fit-for-purpose models remains a cumbersome task due to the inherent complexity of biological wastewater treatment processes. In this work, we present a first study in the context of environmental biotechnology that adopts and explores the use of extents as a way to simplify and streamline the dynamic process modeling task. In addition, the extent-based modeling strategy is enhanced by optimal accounting for nonlinear algebraic equilibria and nonlinear measurement equations. Finally, a thorough discussion of our results explains the benefits of extent-based modeling and its potential to turn environmental process modeling into a highly automated task.
A system approach to aircraft optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Mutual couplings among the mathematical models of physical phenomena and parts of a system such as an aircraft complicate the design process because each contemplated design change may have a far reaching consequence throughout the system. Techniques are outlined for computing these influences as system design derivatives useful for both judgemental and formal optimization purposes. The techniques facilitate decomposition of the design process into smaller, more manageable tasks and they form a methodology that can easily fit into existing engineering organizations and incorporate their design tools.
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing using distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented which can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Designer criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted and a class of combinatorial optimization methods is examined.
Karayanidis, Frini; Nicholson, Rebecca; Schall, Ulrich; Meem, Lydia; Fulham, Ross; Michie, Patricia T
2006-10-01
The present study used behavioral and event-related potential (ERP) indices of task-switching to examine whether schizophrenia patients have a specific deficit in anticipatory task-set reconfiguration. Participants switched between univalent tasks in an alternating-runs paradigm with a blocked response-stimulus interval (RSI) manipulation (150, 300, 600, and 1200 ms). Nineteen high-functioning people with schizophrenia were compared to controls that were matched for age, gender, education and premorbid IQ estimate. Schizophrenia patients had overall increased RT, but no increase in corrected RT switch cost. In the schizophrenia group, ERPs showed reduced activation of the differential positivity in anticipation of a switch trial at the optimal 600 ms RSI, and reduced activation of the frontal post-stimulus switch negativity at both the 600 and 1200 ms RSIs, compared to the control group. Despite no behavioral differences in task-switching performance, anticipatory and stimulus-triggered ERP indices of task-switching suggest group differences in the processing of switch and repeat trials, especially at longer RSI conditions that, for control participants, provide the opportunity for anticipatory activation of task-set reconfiguration processes. These results are compatible with impaired implementation of endogenously driven processes in schizophrenia and greater reliance on external task cues, especially at long preparation intervals.
Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task
López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa
2013-01-01
In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalogram (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining resting-state post-task activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants that attained higher task accuracy. These findings support a dynamical view of cognitive processes where patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436
Optimized mobile retroreflectivity unit data processing algorithms.
DOT National Transportation Integrated Search
2017-04-01
The University of North Florida, in collaboration with the FDOT, was tasked to establish precise line-stripe evaluation methods using the Mobile Retroreflectivity Unit (MRU). Initial implementation of the manufacturer's software resulted in measure...
Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.
Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K
2016-08-01
The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
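A generic particle swarm loop of the kind used in the paper can be sketched as follows. The four-dimensional placeholder objective stands in for the paper's cable-tension and workspace criteria and is purely an assumption:

    import numpy as np

    # Particle swarm sketch: each particle encodes candidate design parameters
    # (here, 4 hypothetical attachment coordinates), scored by an objective
    # standing in for peak cable tension over the task trajectory.
    rng = np.random.default_rng(2)

    def objective(x):
        # Placeholder cost, not the paper's tension model.
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2)

    n_particles, dim = 30, 4
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()

    for it in range(200):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    print(gbest, objective(gbest))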
Mathewson, Kyle E; Basak, Chandramallika; Maclin, Edward L; Low, Kathy A; Boot, Walter R; Kramer, Arthur F; Fabiani, Monica; Gratton, Gabriele
2012-12-01
We hypothesized that control processes, as measured using electrophysiological (EEG) variables, influence the rate of learning of complex tasks. Specifically, we measured alpha power, event-related spectral perturbations (ERSPs), and event-related brain potentials during early training of the Space Fortress task, and correlated these measures with subsequent learning rate and performance in transfer tasks. Once initial score was partialled out, the best predictors were frontal alpha power and alpha and delta ERSPs, but not P300. By combining these predictors, we could explain about 50% of the learning rate variance and 10%-20% of the variance in transfer to other tasks using only pretraining EEG measures. Thus, control processes, as indexed by alpha and delta EEG oscillations, can predict learning and skill improvements. The results are of potential use to optimize training regimes.
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
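The nested structure, and why it is computationally expensive, can be seen in a tiny quadratic instance where every upper-level evaluation triggers a full lower-level solve. The toy objectives are illustrative; the paper's method replaces exactly this kind of nesting with a hybrid evolutionary-cum-local-search approach:

    from scipy.optimize import minimize_scalar

    # Nested-solve sketch of a bilevel problem: the leader's score depends on
    # the follower's optimal response y*(x) to each candidate x.
    def follower_best_response(x):
        # Lower level: min_y (y - x)^2 + y^2; analytic answer is y = x / 2.
        res = minimize_scalar(lambda y: (y - x) ** 2 + y ** 2)
        return res.x

    def leader_objective(x):
        y = follower_best_response(x)        # one full inner solve per call
        return (x - 3) ** 2 + (y - 1) ** 2

    outer = minimize_scalar(leader_objective, bounds=(-10, 10), method="bounded")
    print(outer.x, follower_best_response(outer.x))   # approx. x=2.8, y=1.4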
Optimizing Utilization of Detectors
2016-03-01
This work provides a quantifiable process to determine how much time should be allocated to each task sharing the same asset. The optimized expected time allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation via an integral, while Monte Carlo simulation determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables.
Process Approach for Modeling of Machine and Tractor Fleet Structure
NASA Astrophysics Data System (ADS)
Dokin, B. D.; Aletdinova, A. A.; Kravchenko, M. S.; Tsybina, Y. S.
2018-05-01
Existing software complexes for modelling machine and tractor fleet structure are mostly aimed at solving the task of optimization. However, their creators, choosing only one optimization criterion and incorporating it in their software, provide grounds for why it is the best without giving the decision maker the opportunity to choose a criterion for their enterprise. To analyze “bottlenecks” of machine and tractor fleet modelling, the authors of this article created a process model in which they included adjustment of the plan for using machinery based on searching through alternative technologies. As a result, the following recommendations for software complex development have been worked out: the introduction of a database of alternative technologies; the possibility for a user to change the timing of operations even beyond the allowable limits, with calculation of the incurred loss in that case; the possibility to rule out the solution of an optimization task and, where optimization is needed, the possibility to choose an optimization criterion; and the introduction of a graphical display of an annual complex of works, which could be sufficient for the development and adjustment of a business strategy.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract the advantageous features of static scheduling algorithms to the maximum extent.
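The flavor of static scheduling of dependent tasks can be conveyed by a simple list scheduler that respects data dependencies and assigns each ready task to the processor giving the earliest finish time. The toy task graph is illustrative; this is not OSCAR's actual scheduling algorithm:

    # Static list-scheduling sketch for a small task graph on 2 processors.
    tasks = {          # task: (cost, dependencies)
        "a": (2, []), "b": (3, ["a"]), "c": (2, ["a"]),
        "d": (4, ["b", "c"]), "e": (1, ["c"]),
    }
    n_procs = 2
    proc_free = [0.0] * n_procs
    finish = {}

    # Simple topological ordering (the graph must be acyclic).
    order, done = [], set()
    while len(order) < len(tasks):
        for t, (_, deps) in tasks.items():
            if t not in done and all(d in done for d in deps):
                order.append(t)
                done.add(t)

    schedule = {}
    for t in order:
        cost, deps = tasks[t]
        ready = max([finish[d] for d in deps], default=0.0)
        # Pick the processor that can start (and hence finish) earliest.
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[t] = start + cost
        proc_free[p] = finish[t]
        schedule[t] = (p, start, finish[t])
    print(schedule)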
Automated Array Assembly, Phase 2
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1979-01-01
The Automated Array Assembly Task, Phase 2 of the Low Cost Silicon Solar Array Project, is a process development task. The contract provides for the fabrication of modules from large-area tandem junction cells (TJC). During this quarter, effort was focused on the design of a large-area, approximately 36 sq cm, TJC and on process verification runs. The large-area TJC design was optimized for minimum I²R power losses. In the TJM activity, the cell-module interfaces were defined, module substrates were formed and heat treated, and clad metal interconnect strips were fabricated.
Demerouti, Evangelia; Sanz-Vergel, Ana Isabel; Petrou, Paraskevas; van den Heuvel, Machteld
2016-10-01
Although work and family are undoubtedly important life domains, individuals are also active in other life roles which are also important to them (like pursuing personal interests). Building on identity theory and the resource perspective on work-home interface, we examined whether there is an indirect effect of work-self conflict/facilitation on exhaustion and task performance over time through personal resources (i.e., self-efficacy and optimism). The sample was composed of 368 Dutch police officers. Results of the 3-wave longitudinal study confirmed that work-self conflict was related to lower levels of self-efficacy, whereas work-self facilitation was related to improved optimism over time. In turn, self-efficacy was related to higher task performance, whereas optimism was related to diminished levels of exhaustion over time. Further analysis supported the negative, indirect effect of work-self facilitation on exhaustion through optimism over time, and only a few reversed causal effects emerged. The study contributes to the literature on interrole management by showing the role of personal resources in the process of conflict or facilitation over time.
Hayashibe, Mitsuhiro; Shimoda, Shingo
2014-01-01
A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which incurs an ill-posed problem. The multijoint system involves complex interaction torques between joints. To produce optimal motion in terms of energy consumption, so-called cost-function-based optimization has been commonly used in previous works. Even if it is a fact that an optimal motor pattern is employed phenomenologically, there is no evidence for the existence of a physiological process that is similar to such a mathematical optimization in our central nervous system. In this study, we aim to find a more primitive computational mechanism with a modular configuration to realize adaptability and optimality without prior knowledge of system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback. The motor command accumulation during repetitive environmental interactions plays a major role in the learning process. It is applied to a vertical cyclic reaching task which involves complex interaction torques. We evaluated whether the proposed paradigm can learn how to optimize solutions with a 3-joint, planar biomechanical model. The results demonstrate that the proposed method was valid for acquiring motor synergy and resulted in energy-efficient solutions for different load conditions. The case in feedback control is largely affected by the interaction torques. In contrast, the trajectory is corrected over time with tacit learning toward optimal solutions. Energy-efficient solutions were obtained by the emergence of motor synergy. During learning, the contribution from the feedforward controller is augmented and the one from the feedback controller is significantly minimized, down to 12% for no load at hand and 16% for a 0.5 kg load condition. The proposed paradigm could provide an optimization process in a redundant system with a dynamic-model-free and cost-function-free approach. PMID:24616695
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
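The classical unsupervised machinery underneath, alternating sparse coding with a dictionary update, can be sketched as below; the paper's contribution is the supervised, task-driven extension on top of this loop, which the sketch does not include. Synthetic data, illustrative parameters:

    import numpy as np

    # Unsupervised dictionary-learning sketch: alternate ISTA sparse coding
    # with a least-squares (MOD-style) dictionary update.
    rng = np.random.default_rng(3)
    n, k, m = 20, 40, 500                 # signal dim, atoms, samples
    X = rng.normal(size=(n, m))
    D = rng.normal(size=(n, k))
    D /= np.linalg.norm(D, axis=0)

    def ista(D, x, lam=0.2, iters=50):
        # Solve min_a 0.5*||D a - x||^2 + lam*||a||_1 by proximal gradient.
        a = np.zeros(D.shape[1])
        L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
        for _ in range(iters):
            g = a - (D.T @ (D @ a - x)) / L
            a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
        return a

    for epoch in range(5):
        A = np.column_stack([ista(D, X[:, j]) for j in range(m)])
        D = X @ A.T @ np.linalg.pinv(A @ A.T)       # dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
        print(f"epoch {epoch}: relative error {err:.3f}")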
Flexible and fast: linguistic shortcut affects both shallow and deep conceptual processing.
Connell, Louise; Lynott, Dermot
2013-06-01
Previous research has shown that people use linguistic distributional information during conceptual processing, and that it is especially useful for shallow tasks and rapid responding. Using two conceptual combination tasks, we showed that this linguistic shortcut extends to the processing of novel stimuli, is used in both successful and unsuccessful conceptual processing, and is evident in both shallow and deep conceptual tasks. Specifically, as predicted by the ECCo theory of conceptual combination, people use the linguistic shortcut as a "quick-and-dirty" guide to whether the concepts are likely to combine into a coherent conceptual representation, in both shallow sensibility judgment and deep interpretation generation tasks. Linguistic distributional frequency predicts both the likelihood and the time course of rejecting a novel word compound as nonsensical or uninterpretable. However, it predicts the time course of successful processing only in shallow sensibility judgment, because the deeper conceptual process of interpretation generation does not allow the linguistic shortcut to suffice. Furthermore, the effects of linguistic distributional frequency are independent of any effects of conventional word frequency. We discuss the utility of the linguistic shortcut as a cognitive triage mechanism that can optimize processing in a limited-resource conceptual system.
A Technical Survey on Optimization of Processing Geo Distributed Data
NASA Astrophysics Data System (ADS)
Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.
2018-04-01
With growing cloud services and technology, there is growth in the number of geographically distributed data centers storing large amounts of data. Analysis of geo-distributed data is required by various services for data processing, storage of essential information, and so on; processing this geo-distributed data and performing analytics on it is a challenging task. Distributed data processing is accompanied by issues in storage, computation and communication, and the key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. In this paper the various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and on reducing computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.
Ma, Ning; Yu, Angela J
2016-01-01
Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive functions. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, and therefore go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts harder go discrimination should result in longer go reaction time (RT), lower stop error rate, as well as faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model, and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.
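The race model's independence assumption can be made concrete with a short simulation of the inhibition function P(respond | SSD): under the race model, go-stimulus difficulty enters only through the go RT distribution, shifting the curve without any interaction with the stop process. All distributions and parameters below are hypothetical:

    import numpy as np

    # Independent-race-model sketch for the stop-signal task: on each stop
    # trial, the go finish time races the stop finish time (SSD + SSRT).
    rng = np.random.default_rng(4)

    def p_respond(ssd, go_mu, go_sigma=80, ssrt_mu=200, ssrt_sigma=30, n=20000):
        go = rng.normal(go_mu, go_sigma, n)        # go process finish times (ms)
        stop = ssd + rng.normal(ssrt_mu, ssrt_sigma, n)
        return np.mean(go < stop)                  # go wins -> failed inhibition

    for go_mu in (450, 550):                       # easy vs hard go discrimination
        probs = [p_respond(ssd, go_mu) for ssd in (100, 200, 300)]
        print(go_mu, [round(p, 2) for p in probs])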
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper researches a distributed task planning model for multi-autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve the multi-AUV optimal task planning scheme. In the uncertain marine environment, the rolling time domain control technique is used to realize numerical optimization in a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize the tradeoff between AUV dynamics, environment and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the scroll time domain quantum bee colony optimization algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, completing the task goal and obtaining an approximately optimal solution.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
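For Poisson spike trains, the SPRT-style computation the article analyzes reduces to spike counting, since the log-likelihood ratio of two opposing rates is linear in the count difference. A minimal sketch with illustrative rates and bounds:

    import numpy as np

    # SPRT sketch with Poisson evidence: two neurons (or populations) fire at
    # rate r1 under alternative 1 and r2 under alternative 2, swapped under
    # the other hypothesis.
    rng = np.random.default_rng(5)
    r1, r2, dt = 40.0, 20.0, 0.01        # Hz, time bin in seconds
    theta = np.log(19)                    # SPRT bound for ~95% accuracy

    def decide(true_is_1=True):
        llr, t = 0.0, 0
        while abs(llr) < theta:
            n1 = rng.poisson((r1 if true_is_1 else r2) * dt)
            n2 = rng.poisson((r2 if true_is_1 else r1) * dt)
            # Log LR for Poisson counts: (n1 - n2) * log(r1/r2);
            # the mean-rate terms cancel because totals match.
            llr += (n1 - n2) * np.log(r1 / r2)
            t += 1
        return llr > 0, t * dt

    results = [decide() for _ in range(2000)]
    acc = np.mean([c for c, _ in results])
    rt = np.mean([t for _, t in results])
    print(f"accuracy={acc:.3f}, mean decision time={rt:.3f}s")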
NASA Technical Reports Server (NTRS)
Tsang, Pamela S.; Hart, Sandra G.; Vidulich, Michael A.
1987-01-01
The utility of speech technology was evaluated in terms of three dual-task principles: resource competition between the time-shared tasks, stimulus-central-processing-response compatibility, and task integrality. Empirical support for these principles was reviewed, and two studies investigating the interactive effects of the three principles were described. Objective performance and subjective workload ratings for both single and dual tasks were examined. It was found that the single-task measures were not necessarily good predictors of the dual-task measures. It was shown that all three principles played an important role in determining an optimal task configuration, which was reflected in both the performance measures and the subjective measures. Therefore, consideration of all three principles is required to ensure proper use of speech technology in a complex environment.
Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X
NASA Astrophysics Data System (ADS)
Suryono, M. A. E.; Rosyidi, C. N.
2018-03-01
PT X operates automated machines that work for sixteen hours per day, so the machines must be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machine in the filling-lithos process. In this research, RCM is used as an analysis tool to determine the critical component and to find optimal inspection frequencies that maximize the machine's reliability. From the analysis, we found that the critical machine in the filling-lithos process is the laser machine in Line 2. We then proceeded to determine the causes of the machine's failures. The lastube component has the highest Risk Priority Number (RPN) among components such as the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences; the others have hidden-failure consequences and safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The results of the data analysis show that inspection of the laser machine must be performed once a month as preventive maintenance to lower downtime.
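The abstract does not state the inspection model used; one textbook formulation (with hypothetical numbers) assumes the breakdown rate falls inversely with the number of inspections n per period, so that total downtime D(n) = (k/n)·t_repair + n·t_inspect has a closed-form minimum:

```python
import math

def optimal_inspections(k=3.0, t_repair=8.0, t_inspect=0.5):
    """Classic inspection model: with n inspections per period the breakdown
    rate falls as k/n, so total downtime per period is
        D(n) = (k/n) * t_repair + n * t_inspect.
    Setting dD/dn = 0 gives n* = sqrt(k * t_repair / t_inspect)."""
    n_star = math.sqrt(k * t_repair / t_inspect)
    downtime = (k / n_star) * t_repair + n_star * t_inspect
    return n_star, downtime

n_star, d = optimal_inspections()
print(f"inspect about {n_star:.1f} times per period; downtime {d:.1f} h")
```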
Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M; Latash, Mark L
2017-02-01
We address the nature of unintentional changes in performance in two papers. This first paper tested the hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward a minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force-moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude, while the moment drop could reach 30%. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy, combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task.
Parsa, Behnoosh; Terekhov, Alexander; Zatsiorsky, Vladimir M.; Latash, Mark L.
2016-01-01
We address the nature of unintentional changes in performance in two papers. This first paper tested the hypothesis that unintentional changes in performance variables during continuous tasks without visual feedback are due to two processes. First, there is a drift of the referent coordinate for the salient performance variable toward the actual coordinate of the effector. Second, there is a drift toward a minimum of a cost function. We tested this hypothesis in four-finger isometric pressing tasks that required the accurate production of a combination of total moment and total force with natural and modified finger involvement. Subjects performed accurate force/moment production tasks under visual feedback, and then visual feedback was removed for some or all of the salient variables. Analytical inverse optimization was used to compute a cost function. Without visual feedback, both force and moment drifted slowly toward lower absolute magnitudes. Over 15 s, the force drop could reach 20% of its initial magnitude, while the moment drop could reach 30%. Individual finger forces could show drifts toward both higher and lower forces. The cost function estimated using analytical inverse optimization reduced its value as a consequence of the drift. We interpret the results within the framework of hierarchical control with referent spatial coordinates for salient variables at each level of the hierarchy, combined with synergic control of salient variables. The force drift is discussed as a natural relaxation process toward states with lower potential energy in the physical (physiological) system involved in the task. PMID:27785549
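Analytical inverse optimization estimates the cost function from observed forces; the forward problem it inverts is an equality-constrained quadratic program. The sketch below (moment arms and cost weights are hypothetical, not the paper's data) computes the force sharing that minimizes an assumed quadratic cost subject to total-force and total-moment constraints.

```python
import numpy as np

def optimal_forces(k, A, b):
    """Minimize sum_i k_i * f_i^2 subject to linear task constraints A f = b
    (rows of A encode total force and total moment), solved through the KKT
    system of the equality-constrained quadratic program."""
    n, m = len(k), A.shape[0]
    K = np.diag(2.0 * np.asarray(k))          # Hessian of the quadratic cost
    kkt = np.block([[K, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                            # finger forces

# Hypothetical four-finger task: total force 20 N, total moment 0 N*m,
# moment arms -4.5, -1.5, 1.5, 4.5 cm; k_i are hypothetical cost weights.
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [-0.045, -0.015, 0.015, 0.045]])
f = optimal_forces(k=[1.0, 0.8, 0.9, 1.2], A=A, b=np.array([20.0, 0.0]))
```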
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Rakesh; Delgass, W. N.; Ribeiro, F.
2013-08-31
The primary objective and outcome of this project was the development and validation of a novel, low-cost, high-pressure fast-hydropyrolysis/hydrodeoxygenation (HDO) process (H2Bioil) using supplementary hydrogen (H2) to produce liquid hydrocarbons from biomass. The research efforts under the various tasks of the project have culminated in the first experimental demonstration of the H2Bioil process, producing 100% deoxygenated C4+ hydrocarbons containing 36-40% of the carbon in the feed of pyrolysis products from biomass. The demonstrated H2Bioil process technology (i.e. reactor, catalyst, and downstream product recovery) is scalable to a commercial level and is estimated to be economically competitive for the cases when supplementary H2 is sourced from coal, natural gas, or nuclear. Additionally, energy systems modeling has revealed several process integration options based on the H2Bioil process for energy- and carbon-efficient liquid fuel production. All project tasks and milestones were completed or exceeded. Novel, commercially scalable, high-pressure reactors for both fast-hydropyrolysis and hydrodeoxygenation were constructed, completing Task A. These reactors were capable of operation under a wide range of conditions, enabling process studies that led to the identification of optimum process conditions. Model compounds representing biomass pyrolysis products were studied, completing Task B. These studies were critical in identifying and developing HDO catalysts to target specific oxygen functional groups. These process and model-compound catalyst studies enabled identification of catalysts that achieved 100% deoxygenation of the real biomass feedstock, sorghum, to form hydrocarbons in high yields as part of Task C. The work completed during this grant has identified and validated the novel and commercially scalable H2Bioil process for production of hydrocarbon fuels from biomass. Studies on model compounds as well as real biomass feedstocks were used to identify optimized process conditions and selective HDO catalysts for high-yield production of hydrocarbons from biomass. In addition to these experimental efforts, in Tasks D and E we developed a mathematical optimization framework to identify carbon- and energy-efficient biomass-to-liquid fuel process designs that integrate the use of different primary energy sources along with biomass (e.g. solar, coal or natural gas) for liquid fuel production. Using this tool, we identified augmented biomass-to-liquid fuel configurations based on the fast-hydropyrolysis/HDO pathway, which was experimentally studied in this project. The computational approach used for screening alternative process configurations represents a unique contribution to the field of biomass processing for liquid fuel production.
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in the textile industry have given rise to several spinning techniques, such as ring spinning and rotor spinning, which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be operated at their optimal parametric settings. However, in the presence of multiple yarn characteristics, which are often conflicting in nature, it becomes a challenging task for spinning industry personnel to identify the parametric mix that would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of the multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against other multi-objective optimization techniques, such as the desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and in the constraints of the non-linear optimization problem, it can be successfully applied to other processes in the textile industry to determine their optimal parametric settings.
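A minimal sketch of the multivariate quality loss idea: each response is scaled by its specification width so that responses in different units can be summed into one loss and minimized over the process parameters. The regression models, targets, limits and weights below are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

def quality_loss(y, target, lsl, usl, weights):
    """Multivariate quality loss: deviations from target are scaled by the
    specification width (USL - LSL) so responses in different units can be
    summed into a single dimensionless loss."""
    z = (np.asarray(y) - target) / (usl - lsl)
    return float(np.sum(weights * z ** 2))

# Hypothetical regression models for two yarn responses as functions of two
# coded process parameters x = (spindle speed, twist factor) in [-1, 1].
def responses(x):
    strength = 15 + 2.0 * x[0] - 1.5 * x[1] - 0.8 * x[0] * x[1]   # cN/tex
    hairiness = 5 - 0.9 * x[0] + 1.2 * x[1] + 0.5 * x[0] ** 2     # index
    return np.array([strength, hairiness])

target = np.array([18.0, 3.5])
lsl, usl = np.array([12.0, 2.0]), np.array([20.0, 7.0])
weights = np.array([0.6, 0.4])

# Crude grid search over the coded parameter space.
grid = np.linspace(-1.0, 1.0, 41)
loss, best_x = min(
    (quality_loss(responses((a, b)), target, lsl, usl, weights), (a, b))
    for a in grid for b in grid)
```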
Pusic, Martin V.; LeBlanc, Vicki; Patel, Vimla L.
2001-01-01
Traditional task analysis for instructional design has emphasized the importance of precisely defining behavioral educational objectives and working back to select objective-appropriate instructional strategies. However, this approach may miss effective strategies. Cognitive task analysis, on the other hand, breaks a process down into its component knowledge representations. Selection of instructional strategies based on all such representations in a domain is likely to lead to optimal instructional design. In this demonstration, using the interpretation of cervical spine x-rays as an educational example, we show how a detailed cognitive task analysis can guide the development of computer-aided instruction.
Decision Making and Ratio Processing in Patients with Mild Cognitive Impairment.
Pertl, Marie-Theres; Benke, Thomas; Zamarian, Laura; Delazer, Margarete
2015-01-01
Making advantageous decisions is important in everyday life. This study aimed at assessing how patients with mild cognitive impairment (MCI) make decisions under risk. Additionally, it investigated the relationship between decision making, ratio processing, basic numerical abilities, and executive functions. Patients with MCI (n = 22) were compared with healthy controls (n = 29) on a complex task of decision making under risk (Game of Dice Task-Double, GDT-D), on two tasks evaluating basic decision making under risk, on a task of ratio processing, and on several neuropsychological background tests. Patients performed significantly lower than controls on the GDT-D and on ratio processing, whereas groups performed comparably on basic decision tasks. Specifically, in the GDT-D, patients obtained lower net scores and lower mean expected values, which indicate a less advantageous performance relative to that of controls. Performance on the GDT-D correlated significantly with performance in basic decision tasks, ratio processing, and executive-function measures when the analysis was performed on the whole sample. Patients with MCI make sub-optimal decisions in complex risk situations, whereas they perform at the same level as healthy adults in simple decision situations. Ratio processing and executive functions have an impact on the decision-making performance of both patients and healthy older adults. In order to facilitate advantageous decisions in complex everyday situations, information should be presented in an easily comprehensible form and cognitive training programs for patients with MCI should focus--among other abilities--on executive functions and ratio processing.
Increasing Optimism Protects Against Pain-Induced Impairment in Task-Shifting Performance.
Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L
2017-04-01
Persistent pain can lead to difficulties in executive task performance. Three core executive functions that are often postulated are inhibition, updating, and shifting. Optimism, the tendency to expect that good things will happen in the future, has been shown to protect against pain-induced performance deterioration in the executive function of updating. This study tested whether the protective effect of a temporary optimistic state, induced by means of a writing and visualization exercise, extends to the executive function of shifting. A 2 (optimism: optimism vs no optimism) × 2 (pain: pain vs no pain) mixed factorial design was used. Participants (N = 61) completed a shifting task once with and once without concurrent painful heat stimulation, after an optimism or neutral manipulation. Results showed that shifting performance was impaired when experimental heat pain was applied during task execution, and that optimism counteracted the pain-induced deterioration in task-shifting performance. In sum, experimentally induced heat pain impairs shifting-task performance, and induced optimism counteracts this pain-induced performance deterioration. Identifying psychological factors that may diminish the negative effect of persistent pain on the ability to function in daily life is imperative. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; George, P.; Axelrad, P.; Mutschler, S.; Jones, B.
2016-09-01
The Space Surveillance Network (SSN) is tasked with the increasingly difficult mission of detecting, tracking, cataloging and identifying artificial objects orbiting the Earth, including active and inactive satellites, spent rocket bodies, and fragmented debris. Much of the architecture and operations of the SSN are limited and outdated, and although efforts are underway to modernize some elements of the system, the ability to maintain the best current Space Situational Awareness (SSA) picture and identify emerging events in a timely fashion could be significantly improved by leveraging non-traditional sensor sites. Orbit Logic, the University of Colorado and the University of Texas at Austin are developing an innovative architecture and operations concept to coordinate the tasking and observation-information processing of non-traditional assets based on information-theoretic approaches. These confirmed tasking schedules and the resulting data can then be used to "inform" the SSN tasking process. The 'Heimdall Web' system comprises core tasking-optimization components and accompanying Web interfaces within a secure, split architecture that will for the first time allow non-traditional sensors to support SSA and improve SSN tasking. Heimdall Web application components score and prioritize space catalog objects based on covariance, priority, observability, expected information gain, and probability of detection, and then coordinate an efficient sensor observation schedule for non-SSN sensors contributing to the overall SSA picture maintained by the Joint Space Operations Center (JSpOC). The Heimdall Web ops concept supports sensor participation levels of "Scheduled", "Tasked" and "Contributing". Scheduled and Tasked sensors are provided optimized observation schedules or object tracking lists from central algorithms, while Contributing sensors review and select from a list of "desired track objects". All sensors are "Web enabled" for tasking and feedback, supplying observation schedules, confirmed observations and related data back to Heimdall Web to complete the feedback loop for the next scheduling iteration.
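A toy sketch in the spirit of the scoring step described above, combining normalized factors into a single priority used to order the observation schedule; the factor set matches the abstract, but the weights, normalizations and field names are hypothetical:

```python
import math

def tasking_score(obj, w_cov=0.3, w_pri=0.25, w_obs=0.15, w_info=0.2, w_pd=0.1):
    """Combine normalized factors into one priority score; weights and
    normalization constants here are hypothetical illustrations."""
    cov = min(obj["cov_trace_km2"] / 100.0, 1.0)        # orbit uncertainty
    pri = obj["priority"] / 10.0                        # catalog priority 0-10
    obs = min(obj["minutes_observable"] / 60.0, 1.0)    # visibility this pass
    info = 1.0 - math.exp(-obj["expected_info_gain"])   # diminishing returns
    pd = obj["prob_detect"]
    return w_cov * cov + w_pri * pri + w_obs * obs + w_info * info + w_pd * pd

catalog = [
    {"id": 101, "cov_trace_km2": 80, "priority": 9, "minutes_observable": 12,
     "expected_info_gain": 2.0, "prob_detect": 0.7},
    {"id": 102, "cov_trace_km2": 5, "priority": 3, "minutes_observable": 45,
     "expected_info_gain": 0.3, "prob_detect": 0.95},
]
schedule = sorted(catalog, key=tasking_score, reverse=True)
```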
Clery, Stephane; Cumming, Bruce G.
2017-01-01
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751
NASA Astrophysics Data System (ADS)
Deris, A. M.; Zain, A. M.; Sallehuddin, R.; Sharif, S.
2017-09-01
Electric discharge machining (EDM) is one of the most widely used nonconventional machining processes for hard and difficult-to-machine materials. Due to the large number of machining parameters in EDM and its complicated structure, selecting the machining parameters that minimize the machining response remains a challenging task for researchers. This paper presents an experimental investigation and optimization of machining parameters for the EDM process on a stainless steel 316L workpiece using the Harmony Search (HS) algorithm. A mathematical model was developed using a regression approach with four input parameters, pulse-on time, peak current, servo voltage and servo speed, and one output response, dimensional accuracy (DA). The optimal result of the HS approach was compared with regression analysis, and HS was found to give the better result, yielding the minimum DA value compared with the regression approach.
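For reference, a minimal harmony search sketch minimizing a stand-in regression model of DA over the four parameters; the regression coefficients and HS settings below are hypothetical placeholders, not the paper's fitted model:

```python
import random

def harmony_search(cost, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal harmony search: each dimension of a new harmony is taken from
    memory with probability hmcr (optionally pitch-adjusted with probability
    par), otherwise drawn at random; the worst memory entry is replaced."""
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [cost(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                x = hm[random.randrange(hms)][j]
                if random.random() < par:
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        c = cost(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:
            hm[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda i: costs[i])
    return hm[best], costs[best]

# Hypothetical regression model of dimensional accuracy (DA) in the four EDM
# parameters (pulse-on time, peak current, servo voltage, servo speed).
da_model = lambda x: (0.02 + 0.004 * x[0] + 0.006 * x[1]
                      - 0.001 * x[2] + 0.0005 * x[3] + 0.0002 * x[0] * x[1])
best_x, best_da = harmony_search(da_model,
                                 bounds=[(1, 10), (1, 12), (30, 80), (50, 200)])
```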
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
A generalized mathematical model of decision-making for planning and mode selection that provides the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into the levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability and safety of such a complex system are evaluated simultaneously according to several indicators, in particular pressure, flow and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem; an agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimal operational mode of the heat supply system is chosen on the basis of an iterative coordination process that converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments that, when necessary, guarantee optimal safety, reliability and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the admissible deviation of the internal air temperature from the required value, can also be changed interactively. This makes it possible to carry out adjustment activities in the best way and to improve the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.
Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving an NP-hard facility location problem. A system analysis of the process of tuning optimization-algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic is chosen to perform the meta-optimization; the approach proposed in this work can therefore be called a "meta-metaheuristic". A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
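A minimal sketch of the meta-optimization loop: an outer evolutionary search tunes the control parameters of an inner algorithm, scoring each parameter set by the mean result of several independent inner runs. The inner algorithm here is a generic hill climber on a sphere function, a hypothetical stand-in for the paper's facility-location metaheuristic:

```python
import random

def inner_metaheuristic(params, cost, dim=10, iters=200):
    """Stand-in inner metaheuristic: a (1+1) hill climber with random
    restarts, whose behavior depends on the tuned parameters (step size
    and restart probability)."""
    step, p_restart = params
    x = [random.uniform(-5, 5) for _ in range(dim)]
    best = cost(x)
    for _ in range(iters):
        if random.random() < p_restart:
            cand = [random.uniform(-5, 5) for _ in range(dim)]
        else:
            cand = [xi + random.gauss(0, step) for xi in x]
        c = cost(cand)
        if c < best:
            x, best = cand, c
    return best

def meta_optimize(cost, pop=10, gens=15, evals=5):
    """Outer evolutionary loop over the inner algorithm's parameters; the
    fitness of a parameter set is the mean over several inner runs, which
    damps the stochastic noise of single runs."""
    params = [(random.uniform(0.01, 2.0), random.uniform(0.0, 0.5))
              for _ in range(pop)]
    mean_fit = lambda p: sum(inner_metaheuristic(p, cost)
                             for _ in range(evals)) / evals
    for _ in range(gens):
        params.sort(key=mean_fit)
        elite = params[: pop // 2]
        params = elite + [(max(1e-3, s + random.gauss(0, 0.1)),
                           min(max(r + random.gauss(0, 0.05), 0.0), 1.0))
                          for s, r in elite]
    return min(params, key=mean_fit)

best_params = meta_optimize(lambda x: sum(xi * xi for xi in x))
```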
Implementing Kanban for agile process management within the ALMA Software Operations Group
NASA Astrophysics Data System (ADS)
Reveco, Johnny; Mora, Matias; Shen, Tzu-Chiang; Soto, Ruben; Sepulveda, Jorge; Ibsen, Jorge
2014-07-01
After the inauguration of the Atacama Large Millimeter/submillimeter Array (ALMA), the Software Operations Group in Chile has refocused its objectives to: (1) providing software support to tasks related to System Integration, Scientific Commissioning and Verification, as well as Early Science observations; (2) testing the remaining software features, still under development by the Integrated Computing Team across the world; and (3) designing and developing processes to optimize and increase the level of automation of operational tasks. Because they serve different stakeholders, these tasks vary widely in importance, lifespan and complexity. Aiming to provide the proper priority and traceability for every task without stressing our engineers, we introduced the Kanban methodology into our processes in order to balance the demand on the team against the throughput of the delivered work. The aim of this paper is to share the experience gained during the implementation of Kanban in our processes, describing the difficulties we encountered and the solutions and adaptations that led to our current, still evolving implementation, which has greatly improved our throughput, prioritization and problem traceability.
Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent?
Wahn, Basil; König, Peter
2017-01-01
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing does consistently involve shared attentional resources for the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes capability to process currently relevant information.
Learning optimal eye movements to unusual faces
Peterson, Matthew F.; Eckstein, Miguel P.
2014-01-01
Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712
Testing the Limits of Optimizing Dual-Task Performance in Younger and Older Adults
Strobach, Tilo; Frensch, Peter; Müller, Herrmann Josef; Schubert, Torsten
2012-01-01
Impaired dual-task performance in younger and older adults can be improved with practice. Optimal conditions even allow for a (near) elimination of this impairment in younger adults. However, it is unknown whether such (near) elimination marks the limit of performance improvements in older adults. The present study tests this limit in older adults under conditions of (a) a high amount of dual-task training and (b) training with simplified component tasks in dual-task situations. The data showed that a high amount of dual-task training in older adults provided no evidence for an improvement of dual-task performance to the optimal dual-task performance level achieved by younger adults. However, training with simplified component tasks in dual-task situations, administered exclusively to older adults, produced a similar level of optimal dual-task performance in both age groups. Therefore, by applying a testing-the-limits approach, we demonstrated that older adults improved dual-task performance to the same level as younger adults at the end of training under very specific conditions. PMID:22408613
A Comparison of Five FMRI Protocols for Mapping Speech Comprehension Systems
Binder, Jeffrey R.; Swanson, Sara J.; Hammeke, Thomas A.; Sabsevitz, David S.
2008-01-01
Aims Many fMRI protocols for localizing speech comprehension have been described, but there has been little quantitative comparison of these methods. We compared five such protocols in terms of areas activated, extent of activation, and lateralization. Methods FMRI BOLD signals were measured in 26 healthy adults during passive listening and active tasks using words and tones. Contrasts were designed to identify speech perception and semantic processing systems. Activation extent and lateralization were quantified by counting activated voxels in each hemisphere for each participant. Results Passive listening to words produced bilateral superior temporal activation. After controlling for pre-linguistic auditory processing, only a small area in the left superior temporal sulcus responded selectively to speech. Active tasks engaged an extensive, bilateral attention and executive processing network. Optimal results (consistent activation and strongly lateralized pattern) were obtained by contrasting an active semantic decision task with a tone decision task. There was striking similarity between the network of brain regions activated by the semantic task and the network of brain regions that showed task-induced deactivation, suggesting that semantic processing occurs during the resting state. Conclusions FMRI protocols for mapping speech comprehension systems differ dramatically in pattern, extent, and lateralization of activation. Brain regions involved in semantic processing were identified only when an active, non-linguistic task was used as a baseline, supporting the notion that semantic processing occurs whenever attentional resources are not controlled. Identification of these lexical-semantic regions is particularly important for predicting language outcome in patients undergoing temporal lobe surgery. PMID:18513352
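As a note on the quantification step, the standard laterality index computed from per-hemisphere activated-voxel counts is (L − R)/(L + R); a one-function sketch with hypothetical counts follows (threshold choices for counting "activated" voxels are the analyst's):

```python
def lateralization_index(left_voxels, right_voxels):
    """Standard LI from activated-voxel counts: +1 is fully left-lateralized,
    -1 fully right-lateralized; counts here are hypothetical."""
    total = left_voxels + right_voxels
    return (left_voxels - right_voxels) / total if total else 0.0

li = lateralization_index(420, 130)   # ~0.53: strongly left-lateralized
```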
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial; therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of the easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial; therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS), in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
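A sketch of two ingredients named above, under stated assumptions: a fitness combining makespan with the degree of imbalance among VMs, and the SA-style refinement that SASOS embeds inside SOS. The SOS phases themselves are omitted, and the task lengths, VM speeds and weighting are hypothetical.

```python
import math, random

def makespan_and_imbalance(assign, task_len, vm_speed):
    """Schedule metrics: per-VM completion time, makespan, and degree of
    imbalance DI = (Tmax - Tmin) / Tavg, a common cloud-scheduling measure."""
    loads = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t] / vm_speed[vm]
    tmax, tmin = max(loads), min(loads)
    tavg = sum(loads) / len(loads)
    return tmax, (tmax - tmin) / tavg if tavg else 0.0

def sa_refine(assign, task_len, vm_speed, temp=5.0, cooling=0.98,
              steps=2000, alpha=0.7):
    """SA-style local refinement (the role SA plays inside SASOS): move one
    task to another VM; accept worse schedules with Boltzmann probability.
    The alpha weighting of makespan vs imbalance is hypothetical."""
    def fit(a):
        ms, di = makespan_and_imbalance(a, task_len, vm_speed)
        return alpha * ms + (1 - alpha) * di
    cur, cur_f = assign[:], fit(assign)
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
        cf = fit(cand)
        if cf < cur_f or random.random() < math.exp((cur_f - cf) / temp):
            cur, cur_f = cand, cf
        temp *= cooling
    return cur, cur_f

tasks = [random.uniform(100, 1000) for _ in range(50)]   # task lengths (MI)
vms = [250, 500, 1000]                                   # VM speeds (MIPS)
best, f = sa_refine([random.randrange(3) for _ in tasks], tasks, vms)
```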
Temporal Integration Windows in Neural Processing and Perception Aligned to Saccadic Eye Movements.
Wutz, Andreas; Muschter, Evelyn; van Koningsbruggen, Martijn G; Weisz, Nathan; Melcher, David
2016-07-11
When processing dynamic input, the brain balances the opposing needs of temporal integration and sensitivity to change. We hypothesized that the visual system might resolve this challenge by aligning integration windows to the onset of newly arriving sensory samples. In a series of experiments, human participants observed the same sequence of two displays separated by a brief blank delay when performing either an integration or segregation task. First, using magnetoencephalography (MEG), we found a shift in the stimulus-evoked time courses by a 150-ms time window between task signals. After stimulus onset, multivariate pattern analysis (MVPA) decoding of task in occipital-parietal sources remained above chance for almost 1 s, and the task-decoding pattern interacted with task outcome. In the pre-stimulus period, the oscillatory phase in the theta frequency band was informative about both task processing and behavioral outcome for each task separately, suggesting that the post-stimulus effects were caused by a theta-band phase shift. Second, when aligning stimulus presentation to the onset of eye fixations, there was a similar phase shift in behavioral performance according to task demands. In both MEG and behavioral measures, task processing was optimal first for segregation and then for integration, with opposite phase in the theta frequency range (3-5 Hz). The best fit to the neurophysiological and behavioral data was given by a dampened 3-Hz oscillation from stimulus or eye fixation onset. The alignment of temporal integration windows to input changes found here may serve to actively organize the temporal processing of continuous sensory input. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
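For reference, the reported best fit is a dampened ~3-Hz oscillation from stimulus or fixation onset; a synthetic sketch of that model family follows (the amplitude, decay constant, phase and crude grid fit are hypothetical, not the paper's fitted values):

```python
import math

def damped_osc(t, base, amp, tau, freq, phase):
    """Dampened oscillation: performance oscillates at ~freq Hz from stimulus
    or fixation onset and decays with time constant tau."""
    return base + amp * math.exp(-t / tau) * math.cos(2 * math.pi * freq * t + phase)

# Synthetic accuracy time course sampled every 20 ms, then a crude grid fit
# of amplitude, decay and phase with the frequency fixed at 3 Hz.
times = [i * 0.02 for i in range(50)]
data = [damped_osc(t, 0.75, 0.08, 0.4, 3.0, 0.5) for t in times]
sse, fit = min(
    (sum((damped_osc(t, 0.75, a, tau, 3.0, ph) - d) ** 2
         for t, d in zip(times, data)), (a, tau, ph))
    for a in (0.04, 0.08, 0.12)
    for tau in (0.2, 0.4, 0.8)
    for ph in (0.0, 0.5, 1.0))
```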
Heuristics in Managing Complex Clinical Decision Tasks in Experts’ Decision Making
Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme
2016-01-01
Background Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. Objective The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. Method After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts of these interviews were analyzed. Results We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Conclusion Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Application Understanding complex decision-making processes can help in designing task allocation based on task complexity for clinical decision support. PMID:27275019
Heuristics in Managing Complex Clinical Decision Tasks in Experts' Decision Making.
Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme
2014-09-01
Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts of these interviews were analyzed. We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Understanding complex decision-making processes can help in designing task allocation based on task complexity for clinical decision support.
Tandonnet, Christophe; Davranche, Karen; Meynier, Chloé; Burle, Borís; Vidal, Franck; Hasbroucq, Thierry
2012-02-01
We investigated the influence of temporal preparation on information processing. Single-pulse transcranial magnetic stimulation (TMS) of the primary motor cortex was delivered during a between-hand choice task. The time interval between the warning and the imperative stimulus, varied across blocks of trials, was either optimal (500 ms) or nonoptimal (2500 ms) for participants' performance. Silent-period duration was shorter prior to the first evidence of response selection in the optimal condition, and the amplitude of the motor evoked potential specific to the responding hand increased earlier in the optimal condition. These results reveal an early release of cortical inhibition and a faster integration of the response-selection-related inputs to the corticospinal pathway when temporal preparation is better. Temporal preparation may induce cortical activation prior to response selection that speeds up the implementation of the selected response. Copyright © 2011 Society for Psychophysiological Research.
The solution of private problems for optimization heat exchangers parameters
NASA Astrophysics Data System (ADS)
Melekhin, A.
2017-11-01
The relevance of this topic stems from the problem of saving resources in building heating systems. To solve this problem, we have developed an integrated research method that makes it possible to solve optimization tasks for heat exchanger parameters. The method treats the design as a multicriteria optimization problem, solved with nonlinear-optimization software using an array of temperatures obtained by thermography as input. The author has developed a mathematical model of the heat exchange process on the heat-transfer surfaces of the apparatus, solved the multicriteria optimization problem, and checked the model's adequacy against an experimental stand with visualization of thermal fields. The work establishes an optimal range of the controlled parameters influencing the heat exchange process, with minimal metal consumption and maximum heat output of the finned heat exchanger, and derives regularities of the heat exchange process with generalizing dependencies for the temperature distribution on the heat-release surface. Convergence between results calculated from the theoretical dependencies and results from the mathematical model is demonstrated.
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. This paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task-priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is selected as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, is simple and feasible, converges quickly, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
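The abstract's priority list and minimum-EFT assignment can be illustrated with a deterministic list-scheduling sketch (the CQPSO search itself is omitted; the four-task DAG and costs are hypothetical, and communication costs are ignored):

```python
from collections import deque

def list_schedule(succ, cost, n_proc):
    """succ: task -> list of successors; cost[task][p]: execution time of the
    task on processor p. Each task, taken in a precedence-valid order, is
    placed on the processor giving the minimum earliest finish time (EFT)."""
    indeg = {t: 0 for t in succ}
    pred = {t: [] for t in succ}
    for t in succ:
        for s in succ[t]:
            indeg[s] += 1
            pred[s].append(t)
    queue = deque(t for t, d in indeg.items() if d == 0)  # Kahn's algorithm
    proc_free = [0.0] * n_proc
    finish, where = {}, {}
    while queue:
        t = queue.popleft()
        ready = max((finish[p] for p in pred[t]), default=0.0)
        p_best = min(range(n_proc),
                     key=lambda p: max(proc_free[p], ready) + cost[t][p])
        start = max(proc_free[p_best], ready)
        finish[t], where[t] = start + cost[t][p_best], p_best
        proc_free[p_best] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return where, max(finish.values())   # assignment and makespan

# Hypothetical 4-task DAG on two heterogeneous cores.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": [2, 3], "B": [4, 2], "C": [3, 3], "D": [2, 1]}
placement, makespan = list_schedule(succ, cost, n_proc=2)
```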
A Bayesian hierarchical diffusion model decomposition of performance in Approach–Avoidance Tasks
Krypotos, Angelos-Miltiadis; Beckers, Tom; Kindt, Merel; Wagenmakers, Eric-Jan
2015-01-01
Common methods for analysing response time (RT) tasks, frequently used across different disciplines of psychology, suffer from a number of limitations such as the failure to directly measure the underlying latent processes of interest and the inability to take into account the uncertainty associated with each individual's point estimate of performance. Here, we discuss a Bayesian hierarchical diffusion model and apply it to RT data. This model allows researchers to decompose performance into meaningful psychological processes and to account optimally for individual differences and commonalities, even with relatively sparse data. We highlight the advantages of the Bayesian hierarchical diffusion model decomposition by applying it to performance on Approach–Avoidance Tasks, widely used in the emotion and psychopathology literature. Model fits for two experimental data-sets demonstrate that the model performs well. The Bayesian hierarchical diffusion model overcomes important limitations of current analysis procedures and provides deeper insight in latent psychological processes of interest. PMID:25491372
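To make the latent process concrete, here is a generative sketch of the diffusion model that the hierarchical analysis inverts, with a group-level drift distribution standing in for the hierarchy; all parameter values are hypothetical, and the paper fits the model to data rather than simulating it.

```python
import random

def ddm_trial(drift, bound=1.0, ndt=0.3, dt=0.001):
    """One diffusion trial: noisy evidence drifts to one of two fixed bounds;
    response time is decision time plus non-decision time (ndt)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("approach" if x > 0 else "avoid"), t + ndt

# Hierarchical flavor: each simulated subject's drift is drawn around a
# group-level mean, mirroring the model's group/subject structure.
group_mean, group_sd = 1.5, 0.5
for subj in range(3):
    drift = random.gauss(group_mean, group_sd)
    trials = [ddm_trial(drift) for _ in range(200)]
    accuracy = sum(resp == "approach" for resp, _ in trials) / len(trials)
```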
Mediodorsal thalamus is required for discrete phases of goal-directed behavior in macaques.
Wicker, Evan; Turchi, Janita; Malkova, Ludise; Forcelli, Patrick Alexander
2018-05-31
Reward contingencies are dynamic: outcomes that were valued at one point may subsequently lose value. Action selection in the face of dynamic reward associations requires several cognitive processes: registering a change in value of the primary reinforcer, adjusting the value of secondary reinforcers to reflect the new value of the primary reinforcer, and guiding action selection to optimal choices. Flexible responding has been evaluated extensively using reinforcer devaluation tasks. Performance on this task relies upon amygdala, Areas 11 and 13 of orbitofrontal cortex (OFC), and mediodorsal thalamus (MD). Differential contributions of amygdala and Areas 11 and 13 of OFC to specific sub-processes have been established, but the role of MD in these sub-processes is unknown. Pharmacological inactivation of the macaque MD during specific phases of this task revealed that MD is required for reward valuation and action selection. This profile is unique, differing from both amygdala and subregions of the OFC.
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
Optimal policy for value-based decision-making
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-01-01
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down. PMID:27535638
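A minimal sketch of the qualitative policy: a diffusion process whose boundary collapses over time. The exponential collapse shape and all parameters are assumptions for illustration; in the paper the optimal boundary is derived from the reward contingencies rather than fixed a priori.

```python
import math, random

def collapsing_bound_trial(drift, b0=1.5, tau=0.75, ndt=0.3, dt=0.001):
    """Diffusion to a time-dependent boundary b(t) = b0 * exp(-t / tau):
    early on much evidence is required, and the criterion relaxes as time
    (and its opportunity cost) accumulates."""
    x, t = 0.0, 0.0
    while abs(x) < b0 * math.exp(-t / tau):
        x += drift * dt + random.gauss(0.0, math.sqrt(dt))
        t += dt
    return ("option A" if x > 0 else "option B"), t + ndt

choice, rt = collapsing_bound_trial(drift=0.8)
```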
Intended actions and unexpected outcomes: automatic and controlled processing in a rapid motor task
Cheyne, Douglas O.; Ferrari, Paul; Cheyne, James A.
2012-01-01
Human action involves a combination of controlled and automatic behavior. These processes may interact in tasks requiring rapid response selection or inhibition, where temporal constraints preclude timely intervention by conscious, controlled processes over automatized prepotent responses. Such contexts tend to produce frequent errors, but also rapidly executed correct responses, both of which may sometimes be perceived as surprising, unintended, or “automatic”. In order to identify the neural processes underlying these two aspects of cognitive control, we measured neuromagnetic brain activity in 12 right-handed subjects during manual responses to rapidly presented digits, with an infrequent target digit that required switching response hand (bimanual task) or response finger (unimanual task). Automaticity of responding was evidenced by response speeding (shorter response times) prior to both failed and fast correct switches. Consistent with this automaticity interpretation of fast correct switches, we observed bilateral motor preparation, as indexed by suppression of beta band (15–30 Hz) oscillations in motor cortex, prior to processing of the switch cue in the bimanual task. In contrast, right frontal theta activity (4–8 Hz) accompanying correct switch responses began after cue onset, suggesting that it reflected controlled inhibition of the default response. Further, this activity was reduced on fast correct switch trials, suggesting a more automatic mode of inhibitory control. We also observed post-movement error-related negativity (ERN)-like responses and theta band increases in medial and anterior frontal regions that were significantly larger on error trials, and may reflect a combination of error and delayed inhibitory signals. We conclude that both automatic and controlled processes are engaged in parallel during rapid motor tasks, and that the relative strength and timing of these processes may underlie both optimal task performance and subjective experiences of automaticity or control. PMID:22912612
Dynamic optimization of cargo movement by trucks in metropolitan areas with adjacent ports
DOT National Transportation Integrated Search
2002-06-01
Today, in the trucking industry, dispatchers perform the tasks of cargo assignment, and driver scheduling. The growing number of containers processed at marine centers and the increasing traffic congestion in metropolitan areas adjacent to marine por...
Yang, Liu-Qin; Simon, Lauren S; Wang, Lei; Zheng, Xiaoming
2016-06-01
We draw from personality systems interaction (PSI) theory (Kuhl, 2000) and regulatory focus theory (Higgins, 1997) to examine how dynamic positive and negative affective processes interact to predict both task and contextual performance. Using a twice-daily diary design over the course of a 3-week period, results from multilevel regression analysis revealed that distinct patterns of change in positive and negative affect optimally predicted contextual and task performance among a sample of 71 employees at a medium-sized technology company. Specifically, within persons, increases (upshifts) in positive affect over the course of a workday better predicted the subsequent day's organizational citizenship behavior (OCB) when such increases were coupled with decreases (downshifts) in negative affect. The optimal pattern of change in positive and negative affect differed, however, in predicting task performance. That is, upshifts in positive affect over the course of the workday better predicted the subsequent day's task performance when such upshifts were accompanied by upshifts in negative affect. The contribution of our findings to PSI theory and the broader affective and motivation regulation literatures, along with practical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Effort in Multitasking: Local and Global Assessment of Effort.
Kiesel, Andrea; Dignath, David
2017-01-01
When performing multiple tasks in succession, self-organization of task order might be superior to externally controlled task schedules, because self-organization allows optimizing processing modes and thus reduces switch costs, and it increases commitment to task goals. However, self-organization is an additional executive control process that is not required if task order is externally specified, and as such it is considered time-consuming and effortful. To compare self-organized and externally controlled task scheduling, we suggest assessing global subjective and objective measures of effort in addition to local performance measures. In our new experimental approach, we combined characteristics of dual-tasking settings and task-switching settings and compared local and global measures of effort in a condition with free choice of task sequence and a condition with cued task sequence. In a multi-tasking environment, participants chose the task order while the task requirement of the not-yet-performed task remained the same. This task preview allowed participants to work on the previously non-chosen items in parallel and resulted in faster responses and fewer errors in task-switch trials than in task-repetition trials. The free-choice group profited more from this task preview than the cued group when considering local performance measures. Nevertheless, the free-choice group invested more effort than the cued group when considering global measures. Thus, self-organization in task scheduling seems to be effortful even in conditions in which it is beneficial for task processing. In a second experiment, we reduced the possibility of task preview for the not-yet-performed tasks in order to hinder efficient self-organization. Here, neither local nor global measures revealed substantial differences between the free-choice and cued task sequence conditions. Based on the results of both experiments, we suggest that global assessment of effort in addition to local performance measures might be a useful tool for multitasking research.
Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.
2018-01-01
The article describes a method for simulating the flight of an aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of a gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust level, in turn, influences the operation of the aircraft, making accurate simulation of the aircraft's behavior during flight necessary for obtaining the correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, the working process of the gas turbine engines (propulsion system), the ambient and flight conditions, and the flight profile features. The model provides accurate flight simulation results and the resulting aircraft efficiency criteria required for optimization of the working process and control function of a gas turbine engine.
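To illustrate the functional connection in the simplest terms, the sketch below integrates a point-mass cruise segment: required thrust follows from weight and lift-to-drag ratio, and fuel flow follows from a specific-fuel-consumption figure that the engine working process determines. All numbers are hypothetical, and the paper's flight-path model is considerably more detailed.

```python
def cruise_fuel_burn(mass0, range_km, v_ms, lift_to_drag, sfc_kg_per_Nh, dt=60.0):
    """Point-mass cruise integration: in steady level flight, thrust required
    equals weight/(L/D) and fuel flow is sfc*thrust, so engine working-process
    quality (through sfc) maps directly onto the fuel-burn criterion."""
    g = 9.81
    mass, dist = mass0, 0.0
    while dist < range_km * 1000.0:
        thrust = mass * g / lift_to_drag               # N
        fuel_flow = sfc_kg_per_Nh * thrust / 3600.0    # kg/s
        mass -= fuel_flow * dt                         # aircraft gets lighter
        dist += v_ms * dt
    return mass0 - mass                                # fuel burned, kg

# Hypothetical airliner: 70 t, 3000 km, 230 m/s, L/D = 17, sfc = 0.06 kg/(N*h)
burn = cruise_fuel_burn(70000.0, 3000.0, 230.0, 17.0, 0.06)
```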
Role of optimization in the human dynamics of task execution
NASA Astrophysics Data System (ADS)
Cajueiro, Daniel O.; Maldonado, Wilfredo L.
2008-03-01
In order to explain the empirical evidence that the dynamics of human activity may not be well modeled by Poisson processes, a model based on queuing processes was built in the literature [A. L. Barabasi, Nature (London) 435, 207 (2005)]. The main assumption behind that model is that people execute their tasks based on a protocol that first executes the high priority item. In this context, the purpose of this paper is to analyze the validity of that hypothesis assuming that people are rational agents that make their decisions in order to minimize the cost of keeping nonexecuted tasks on the list. Therefore, we build and analytically solve a dynamic programming model with two priority types of tasks and show that the validity of this hypothesis depends strongly on the structure of the instantaneous costs that a person has to face if a given task is kept on the list for more than one period. Moreover, one interesting finding is that in one of the situations the protocol used to execute the tasks generates complex one-dimensional dynamics.
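For reference, the cited queuing model can be sketched in a few lines: a fixed-length task list from which, with probability p, the highest-priority task is executed next. List length 2 follows the original model; the near-deterministic p is illustrative.

```python
import random

def priority_queue_waits(n_tasks=100000, p_highest=0.99999, list_len=2):
    """Priority-queue protocol: with probability p execute the highest-
    priority task, otherwise a random one. As p -> 1 the waiting-time
    distribution becomes heavy-tailed (approximately P(w) ~ w^-1),
    unlike the exponential waits of a Poisson server."""
    prio = [random.random() for _ in range(list_len)]
    age = [0] * list_len          # how long each task has sat on the list
    waits = []
    for _ in range(n_tasks):
        if random.random() < p_highest:
            i = max(range(list_len), key=lambda j: prio[j])
        else:
            i = random.randrange(list_len)
        waits.append(age[i])
        prio[i], age[i] = random.random(), 0   # replace the executed task
        for j in range(list_len):
            if j != i:
                age[j] += 1
    return waits

waits = priority_queue_waits()
```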
Telemanipulator design and optimization software
NASA Astrophysics Data System (ADS)
Cote, Jean; Pelletier, Michel
1995-12-01
For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator have to be found only once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the initial position of the teleoperator is usually found empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better-designed robot configuration could minimize these movements and save time. This paper presents two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
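A minimal sketch of the first kind of optimization, under stated assumptions: the digitized task is reduced to a handful of target points, the manipulator's workspace is idealized as an annulus, and the base is placed by a generic numerical optimizer (the paper's octree representation and full kinematics are not modeled):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical digitized task poses: 3-D points the tool tip must reach.
task_pts = np.array([[0.9, 0.2, 0.4], [1.1, -0.3, 0.5],
                     [0.7, 0.4, 0.3], [1.0, 0.0, 0.6]])

R_MIN, R_MAX = 0.4, 1.2  # assumed annular reach of the manipulator

def placement_cost(base_xy):
    """Penalize task points outside the reachable annulus and prefer
    placements that keep points near mid-reach (best dexterity)."""
    base = np.array([base_xy[0], base_xy[1], 0.0])
    d = np.linalg.norm(task_pts - base, axis=1)
    mid = 0.5 * (R_MIN + R_MAX)
    penalty = np.sum(np.maximum(0, d - R_MAX)**2 + np.maximum(0, R_MIN - d)**2)
    return 100.0 * penalty + np.sum((d - mid)**2)

res = minimize(placement_cost, x0=[0.0, 0.0], method="Nelder-Mead")
print("optimal base position:", res.x, "cost:", res.fun)
```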
The effects of experimental pain and induced optimism on working memory task performance.
Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L
2016-07-01
Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., the operation span task). In two successive experiments we sought further evidence for the protective role of optimism on pain-induced working memory impairments. We used another working memory task (i.e., the 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain)×2 (pre-score vs. post-score) mixed factorial design. In half of the participants optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a life in the future where everything has turned out for the best. In the control condition, participants wrote about and visualized a typical day in their life (TD). Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus that was presented 2 stimuli ago. Study 2 had a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counter-balanced order). Continuous heat stimulation was used, with temperatures oscillating between 1°C above and 1°C below the individual pain threshold. In Study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of Study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found that optimism protected against this pain-induced performance deterioration. Experimentally induced pain impairs concurrent but not subsequent working memory task performance. Manipulated optimism did not counteract pain-induced deterioration of 2-back performance. It is important to explore factors that may diminish the negative impact of pain on the ability to function in daily life, as pain itself often cannot be remediated. We are planning to conduct future studies that should shed further light on the conditions, contexts and executive operations for which optimism can act as a protective factor. Copyright © 2016 Scandinavian Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Canfield, Stephen
2004-01-01
The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example a C² continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.
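A short illustration of the subdivision step using Chaikin corner cutting as a stand-in scheme (the paper does not specify this particular scheme; Chaikin yields a C¹ limit curve, while other schemes give the C² continuity noted above):

```python
import numpy as np

def chaikin(points, iterations=3):
    """Chaikin corner cutting: each pass replaces every edge with two
    points at 1/4 and 3/4 along it, smoothing the polyline toward a
    quadratic B-spline limit curve."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]
        pts = np.empty((2 * len(q), pts.shape[1]))
        pts[0::2], pts[1::2] = q, r
    return pts

# Only the 5 control points below are design variables for an optimizer;
# subdivision generates the dense, smooth boundary used for analysis.
control = [[0, 0], [1, 2], [2, -1], [3, 1], [4, 0]]
print(chaikin(control).shape)  # (26, 2): 26 boundary samples from 5 points
```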
Sensakovic, William F; O'Dell, M Cody; Letter, Haley; Kohler, Nathan; Rop, Baiywo; Cook, Jane; Logsdon, Gregory; Varich, Laura
2016-10-01
Image processing plays an important role in optimizing image quality and radiation dose in projection radiography. Unfortunately, commercial algorithms are black boxes that are often left at or near vendor default settings rather than being optimized. We hypothesize that different commercial image-processing systems, when left at or near default settings, create significant differences in image quality. We further hypothesize that image-quality differences can be exploited to produce images of equivalent quality but lower radiation dose. We used a portable radiography system to acquire images on a neonatal chest phantom and recorded the entrance surface air kerma (ESAK). We applied two image-processing systems (Optima XR220amx, by GE Healthcare, Waukesha, WI; and MUSICA² by Agfa HealthCare, Mortsel, Belgium) to the images. Seven observers (attending pediatric radiologists and radiology residents) independently assessed image quality using two methods: rating and matching. Image-quality ratings were independently assessed by each observer on a 10-point scale. Matching consisted of each observer matching GE-processed images and Agfa-processed images with equivalent image quality. A total of 210 rating tasks and 42 matching tasks were performed and effective dose was estimated. Median Agfa-processed image-quality ratings were higher than GE-processed ratings. Non-diagnostic ratings were seen over a wider range of doses for GE-processed images than for Agfa-processed images. During matching tasks, observers matched image quality between GE-processed images and Agfa-processed images acquired at a lower effective dose (11 ± 9 μSv; P < 0.0001). Image-processing methods significantly impact perceived image quality. These image-quality differences can be exploited to alter protocols and produce images of equivalent image quality but lower doses. Those purchasing projection radiography systems or third-party image-processing software should be aware that image processing can significantly impact image quality when settings are left near default values.
Ayal, Shahar; Rusou, Zohar; Zakay, Dan; Hochman, Guy
2015-01-01
A framework is presented to better characterize the role of individual differences in information processing style and their interplay with contextual factors in determining decision making quality. In Experiment 1, we show that individual differences in information processing style are flexible and can be modified by situational factors. Specifically, a situational manipulation that induced an analytical mode of thought improved decision quality. In Experiment 2, we show that this improvement in decision quality is highly contingent on the compatibility between the dominant thinking mode and the nature of the task. That is, encouraging an intuitive mode of thought led to better performance on an intuitive task but hampered performance on an analytical task. The reverse pattern was obtained when an analytical mode of thought was encouraged. We discuss the implications of these results for the assessment of decision making competence, and suggest practical directions to help individuals better adjust their information processing style to the situation at hand and make optimal decisions. PMID:26284011
Ayal, Shahar; Rusou, Zohar; Zakay, Dan; Hochman, Guy
2015-01-01
A framework is presented to better characterize the role of individual differences in information processing style and their interplay with contextual factors in determining decision making quality. In Experiment 1, we show that individual differences in information processing style are flexible and can be modified by situational factors. Specifically, a situational manipulation that induced an analytical mode of thought improved decision quality. In Experiment 2, we show that this improvement in decision quality is highly contingent on the compatibility between the dominant thinking mode and the nature of the task. That is, encouraging an intuitive mode of thought led to better performance on an intuitive task but hampered performance on an analytical task. The reverse pattern was obtained when an analytical mode of thought was encouraged. We discuss the implications of these results for the assessment of decision making competence, and suggest practical directions to help individuals better adjust their information processing style to the situation at hand and make optimal decisions.
Automaticity in reading and the Stroop task: testing the limits of involuntary word processing.
Brown, Tracy L; Joneleit, Kelly; Robinson, Cathy S; Brown, Carli Rose
2002-01-01
We investigated the parameters of involuntary word reading in the Stroop task in 7 experiments. Experiments 1-4 varied response modality and the presence of congruent word trials in a test of the claim that presenting a Stroop color word with only one letter in the target color eliminates the Stroop effect. Experiments 5 and 6 addressed the roles of spatial attention and orthographic processing as possible mechanisms behind the reduction of Stroop effects with the single-letter format. Experiment 7 investigated the limits of involuntary reading under optimal conditions for selective processing of rectangular color patch targets. We found that the single-letter format reduced but never eliminated Stroop effects, spatial attention but not orthographic processing plays a role in the effect of the single-letter format, and word reading is not completely prevented even with austere presentation conditions. We conclude with a defense of the involuntariness criterion for automaticity in the Stroop task, particularly when word reading is viewed in the context of a skilled performance.
Novel Elastomeric Closed Cell Foam - Nonwoven Fabric Composite Material (Phase III)
2008-10-01
increasing the polymer content of the foam. From laboratory studies, processing was found to improve by using different types of NBR rubber. The AF07 B... Foam Optimization (Task 1): Prior development of fire-retarded closed-cell foam yielded attractive candidates for scale-up. Nitrile-butadiene rubber (NBR) and polyvinyl chloride (PVC) blends provided the most cost-effective solutions. Two types of formulas were chosen for optimization. The first...
2010-11-10
Woods Hole Oceanographic Institution, Woods Hole, MA 02543. ...consider an alternate means of finding the minima of ⟨|θ|²⟩. We perform a two-part optimization process based on Matlab's built-in nonlinear...
Learning optimal quantum models is NP-hard
NASA Astrophysics Data System (ADS)
Stark, Cyril J.
2018-02-01
Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P=NP ). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.
48 CFR 1034.004 - Acquisition strategy.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Order, Task Order, or Interagency Agreement) to the overall investment requirements and management... investment; (3) A description of the effort, by acquisition, and the plans to include required clauses in the... requirements to manage the acquisition processes through the investment lifecycle; (7) Consideration of optimal...
Vitre-graf Coating on Mullite. Low Cost Silicon Array Project: Large Area Silicon Sheet Task
NASA Technical Reports Server (NTRS)
Rossi, R. C.
1979-01-01
The processing parameters of the Vitre-Graf coating for optimal performance and economy when applied to mullite and graphite as substrates were presented. A minor effort was also performed on slip-cast fused silica substrates.
Global optimization for quantum dynamics of few-fermion systems
NASA Astrophysics Data System (ADS)
Li, Xikun; Pecak, Daniel; Sowiński, Tomasz; Sherson, Jacob; Nielsen, Anne E. B.
2018-03-01
Quantum state preparation is vital to quantum computation and quantum information processing tasks. In adiabatic state preparation, the target state is theoretically obtained with nearly perfect fidelity if the control parameter is tuned slowly enough. As this, however, leads to slow dynamics, it is often desirable to be able to carry out processes more rapidly. In this work, we employ two global optimization methods to estimate the quantum speed limit for few-fermion systems confined in a one-dimensional harmonic trap. Such systems can be produced experimentally in a well-controlled manner. We determine the optimized control fields and achieve a reduction in the ramping time of more than a factor of four compared to linear ramping. We also investigate how robust the fidelity is to small variations of the control fields away from the optimized shapes.
Simulative design and process optimization of the two-stage stretch-blow molding process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-22
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Simulative design and process optimization of the two-stage stretch-blow molding process
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-01
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
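A schematic of the iterate-simulate-optimize loop described in both entries above; the one-line "simulation" and all numbers are placeholders for the actual 3-D process simulation and optimization setup:

```python
import numpy as np
from scipy.optimize import minimize

TARGET_THICKNESS = 0.25  # mm, assumed minimum-wall requirement

def blow_molding_sim(preform):
    """Placeholder for the 3-D process simulation: maps preform design
    variables (e.g. wall thickness, length) to the bottle's wall-thickness
    distribution. A real implementation would call the FE solver here."""
    wall, length = preform
    stretch = np.linspace(0.8, 1.2, 5)  # crude stretch-ratio profile
    return wall * length / stretch      # mock thickness per bottle zone

def objective(preform):
    """Minimize material use while penalizing zones below the target wall
    thickness, mirroring the trade-off described in the abstract."""
    t = blow_molding_sim(preform)
    mass = preform[0] * preform[1]      # proxy for preform weight
    shortfall = np.maximum(0, TARGET_THICKNESS - t)
    return mass + 1e3 * np.sum(shortfall**2)

res = minimize(objective, x0=[0.35, 1.0], method="Nelder-Mead")
print("optimized preform parameters:", res.x)
```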
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabits per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation time predicted matched experimentally observations within 1-5 %. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implemented optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
Non-rigid Reconstruction of Casting Process with Temperature Feature
NASA Astrophysics Data System (ADS)
Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Ying; Wang, Lu
2017-09-01
Off-line reconstruction of rigid scenes has made great progress in the past decade. However, on-line reconstruction of non-rigid scenes is still a very challenging task. The casting process is a non-rigid reconstruction problem: it is a highly dynamic molding process lacking geometric features. In order to reconstruct the casting process robustly, an on-line fusion strategy is proposed for dynamic reconstruction of the casting process. Firstly, the geometric and flow features of the casting are parameterized in the form of a TSDF (truncated signed distance field), a volumetric block; the parameterized casting guarantees real-time tracking and optimal deformation of the casting process. Secondly, the data structure of the volume grid is extended to hold a temperature value, and a temperature interpolation function is built to generate the temperature of each voxel. This data structure allows for dynamic tracking of the casting's temperature during deformation stages. Then, sparse RGB features are extracted from the casting scene to search for correspondences between the geometric representation and the depth constraint. The extracted color data guarantees robust tracking of the flowing motion of the casting. Finally, the optimal deformation of the target space is transformed into a nonlinear regularized variational optimization problem. This optimization step achieves smooth and optimal deformation of the casting process. The experimental results show that the proposed method can reconstruct the casting process robustly and reduce drift in the non-rigid reconstruction of the casting.
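A minimal sketch of the extended voxel data structure: each voxel carries a truncated signed distance, a fusion weight, and a temperature, fused by a weighted running average (field names and the fusion rule are assumptions, not the paper's exact formulation):

```python
import numpy as np

class ThermalTSDFVolume:
    """Volume grid extended with a temperature value per voxel, so geometry
    and temperature can be tracked together during deformation."""
    def __init__(self, dims=(64, 64, 64), trunc=0.05):
        self.tsdf = np.ones(dims, dtype=np.float32)     # truncated SDF
        self.weight = np.zeros(dims, dtype=np.float32)  # fusion weights
        self.temp = np.zeros(dims, dtype=np.float32)    # temperature field
        self.trunc = trunc

    def integrate(self, idx, sdf, temp_obs):
        """Weighted running-average fusion of one observation into a voxel."""
        d = np.clip(sdf / self.trunc, -1.0, 1.0)
        w = self.weight[idx]
        self.tsdf[idx] = (w * self.tsdf[idx] + d) / (w + 1.0)
        self.temp[idx] = (w * self.temp[idx] + temp_obs) / (w + 1.0)
        self.weight[idx] = w + 1.0

vol = ThermalTSDFVolume()
vol.integrate((32, 32, 32), sdf=0.01, temp_obs=1450.0)  # degrees C
print(vol.tsdf[32, 32, 32], vol.temp[32, 32, 32])
```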
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image-based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for the automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return the best configuration of image processing and classification algorithms, and of their parameters, with respect to classification accuracy. Experiments with real public datasets demonstrate the effectiveness of the developed system.
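A minimal stand-in for this idea using scikit-learn: a joint grid search over an image-processing step (here PCA dimensionality) and a classifier with its parameters, scored by classification accuracy. The paper's system searches a richer space of algorithms; this only illustrates the simultaneous-optimization pattern:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
pipe = Pipeline([("proc", PCA()), ("clf", SVC())])
grid = {
    "proc__n_components": [16, 32, 64],  # image-processing parameter
    "clf__C": [0.1, 1, 10],              # classifier parameter
    "clf__kernel": ["linear", "rbf"],    # classifier algorithm choice
}
search = GridSearchCV(pipe, grid, cv=3).fit(X, y)
print(search.best_params_, f"accuracy={search.best_score_:.3f}")
```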
Hee Han; Woodam Chung; Lucas Wells; Nathaniel Anderson
2018-01-01
An important task in forest residue recovery operations is to select the most cost-efficient feedstock logistics system for a given distribution of residue piles, road access, and available machinery. Notable considerations include inaccessibility of treatment units to large chip vans and frequent, long-distance mobilization of forestry equipment required to process...
Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje
2017-01-18
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. Copyright © 2017 the authors 0270-6474/17/370715-11$15.00/0.
Conceptual design of distillation-based hybrid separation processes.
Skiborowski, Mirko; Harwardt, Andreas; Marquardt, Wolfgang
2013-01-01
Hybrid separation processes combine different separation principles and constitute a promising design option for the separation of complex mixtures. Particularly, the integration of distillation with other unit operations can significantly improve the separation of close-boiling or azeotropic mixtures. Although the design of single-unit operations is well understood and supported by computational methods, the optimal design of flowsheets of hybrid separation processes is still a challenging task. The large number of operational and design degrees of freedom requires a systematic and optimization-based design approach. To this end, a structured approach, the so-called process synthesis framework, is proposed. This article reviews available computational methods for the conceptual design of distillation-based hybrid processes for the separation of liquid mixtures. Open problems are identified that must be addressed to finally establish a structured process synthesis framework for such processes.
A Low Cost Structurally Optimized Design for Diverse Filter Types
Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar
2016-01-01
A wide range of image processing applications deploys two-dimensional (2D) filters for performing diverse tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition, and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment; this calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrowly scoped solutions disregard the versatility demanded by advanced image processing applications and in turn offset their effectiveness when implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectively reduce their computational cost, with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectively reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry-based subtypes, as well as the special case of asymmetric filters. The two-fold optimized framework thus reduces filter computational cost by up to 75% compared to the conventional approach, while its versatility not only supports diverse filter types but also offers further cost reduction via resource sharing for the sequential implementation of diverse image processing applications, especially in a constrained environment. PMID:27832133
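A small illustration of the quadrant-symmetry identity (not the paper's composite structure): a quadrant-symmetric kernel can be stored as one quarter, so only the unique coefficients need multipliers in a folded implementation:

```python
import numpy as np
from scipy.signal import convolve2d

def full_from_quarter(quarter):
    """Expand the stored quarter of a quadrant-symmetric kernel
    h(x, y) = h(-x, y) = h(x, -y) into the full (2m-1) x (2m-1) kernel.
    quarter[0, 0] is the center coefficient."""
    right = np.hstack([quarter[:, :0:-1], quarter])  # mirror columns
    return np.vstack([right[:0:-1, :], right])       # mirror rows

quarter = np.array([[4.0, 2.0],
                    [2.0, 1.0]])        # 4 stored coefficients
kernel = full_from_quarter(quarter)     # 3x3 kernel: 9 taps, 4 unique values
# In a folded implementation, symmetric input samples are added before
# multiplying, so only 4 multipliers are needed instead of 9; the savings
# approach 75% as the kernel grows.
img = np.random.default_rng(0).random((32, 32))
print(convolve2d(img, kernel, mode="same").shape)
```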
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
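A schematic of the alternating maxi-min scheme with a toy detectability model (the update rules and the d′ surrogate below are invented for illustration; the paper optimizes basis-function coefficients and directional penalty weights against a real observer model):

```python
import numpy as np

def maximin_alternating(d_index, fluence0, reg0, iters=30, seed=0):
    """Alternate between improving the fluence pattern for the worst-case
    (minimum) detectability at fixed total fluence, and adjusting the
    regularization strength. d_index(fluence, reg) -> local d' values."""
    rng = np.random.default_rng(seed)
    fluence, reg = fluence0.copy(), reg0
    best = d_index(fluence, reg).min()
    for _ in range(iters):
        # Fluence step: random redistribution at fixed total (dose budget).
        trial = np.clip(fluence + 0.1 * rng.standard_normal(fluence.shape), 0.05, None)
        trial *= fluence.sum() / trial.sum()
        if d_index(trial, reg).min() > best:
            fluence, best = trial, d_index(trial, reg).min()
        # Regularization step: simple multiplicative line search.
        for trial_reg in (0.8 * reg, 1.25 * reg):
            if d_index(fluence, trial_reg).min() > best:
                reg, best = trial_reg, d_index(fluence, trial_reg).min()
    return fluence, reg, best

# Toy d' model (assumed, not the paper's): local d' grows with local
# fluence and peaks at an intermediate regularization strength.
toy = lambda f, r: np.sqrt(f) / (1.0 + np.log(r) ** 2)
f_opt, r_opt, d_min = maximin_alternating(toy, np.linspace(0.5, 1.5, 8), 2.0)
print(f"worst-case d' improved to {d_min:.3f}")
```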
Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
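A minimal sketch of runtime enforcement of the two constraint types named above; the task names and the in-memory history are invented for illustration, whereas the real system compiles DSL annotations into WS-BPEL:

```python
# Mutual exclusion forbids one subject performing both tasks; a binding
# constraint forces the subject of the first task to perform the second.
MUTEX = {("approve_payment", "issue_payment")}
BINDING = {("prepare_contract", "sign_off_contract")}

history: dict[str, str] = {}  # task -> subject who performed it

def can_perform(subject: str, task: str) -> bool:
    for t1, t2 in MUTEX:
        other = {t1: t2, t2: t1}.get(task)
        if other and history.get(other) == subject:
            return False  # same subject may not do both mutex tasks
    for t1, t2 in BINDING:
        if task == t2 and t1 in history and history[t1] != subject:
            return False  # bound task must be done by the same subject
    return True

def perform(subject: str, task: str) -> None:
    if not can_perform(subject, task):
        raise PermissionError(f"{subject} may not perform {task}")
    history[task] = subject

perform("alice", "approve_payment")
print(can_perform("alice", "issue_payment"))    # False: mutual exclusion
perform("bob", "prepare_contract")
print(can_perform("bob", "sign_off_contract"))  # True: binding satisfied
```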
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.
Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana
2017-11-01
RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or occlusion of body parts. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach for objects on workbenches. The collected data are then used to optimize the workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that the typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automate the layout design process. The procedure described can be used to automatically suggest new layouts when workers or production processes change, to adapt layouts to specific workers based on the way they perform their tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
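A toy genetic algorithm in the spirit of the approach: assign objects to workbench slots so that frequently used objects land in low-effort slots. The reach costs and use frequencies stand in for the RGB-D-derived ergonomic scores and are invented:

```python
import random

N = 6
reach_cost = [1.0, 1.2, 1.5, 2.0, 2.4, 3.0]  # per slot, from posture data
use_freq = [30, 5, 12, 8, 20, 2]             # per object, from tracking

def fitness(layout):  # layout[i] = slot assigned to object i
    return -sum(use_freq[i] * reach_cost[s] for i, s in enumerate(layout))

def crossover(a, b):
    """Order crossover: keep a prefix of parent a, fill the rest in b's order."""
    cut = random.randrange(1, N)
    head = a[:cut]
    return head + [s for s in b if s not in head]

def mutate(layout):
    i, j = random.sample(range(N), 2)
    layout[i], layout[j] = layout[j], layout[i]

random.seed(3)
pop = [random.sample(range(N), N) for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20]                      # selection
    while len(pop) < 40:
        child = crossover(*random.sample(pop[:10], 2))
        if random.random() < 0.3:
            mutate(child)
        pop.append(child)
print("best layout (object -> slot):", max(pop, key=fitness))
```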
Optimization of hole generation in Ti/CFRP stacks
NASA Astrophysics Data System (ADS)
Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.
2018-03-01
The article describes methods for improving surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting conditions and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, experiment planning, and manufacturing process optimization theories. Statistical processing of the experimental data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in the Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series Profilometer, and in the CFRP stacks using a Bruker ContourGT-Kl Optical Microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 Measuring machine, and temperatures in the cutting zones were recorded with a FLIR SC7000 Series Infrared Camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on the surface quality and accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved, and optimal cutting technologies that improve performance were developed, together with methods for assessing the effects of thermal expansion of the tool and material on the accuracy of holes in Ti/CFRP/Ti stacks.
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL for chosen case studies in the field of technical cybernetics and bioengineering.
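A self-contained Python/scipy stand-in for the MATLAB/fmincon loop described above; the analytic objective below is a placeholder for the COMSOL model call made through the LiveLink API:

```python
from scipy.optimize import minimize

def comsol_model(params):
    """Placeholder for the model evaluation: in the real setup, the script
    updates the COMSOL parameters, re-runs the study, and extracts the
    objective J. A simple analytic stand-in (minimum at x=2, y=-1)
    keeps the sketch self-contained."""
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

# Analogue of fmincon: constrained local minimization of the objective J.
res = minimize(comsol_model, x0=[0.0, 0.0], method="SLSQP",
               bounds=[(-5, 5), (-5, 5)])
print("optimized parameters:", res.x, "J =", res.fun)
```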
Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil
Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W
2016-01-01
Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.
Optimal Design of Magnetic Components in Plasma Cutting Power Supply
NASA Astrophysics Data System (ADS)
Jiang, J. F.; Zhu, B. R.; Zhao, W. N.; Yang, X. J.; Tang, H. J.
2017-10-01
A phase-shifted transformer and a DC reactor are usually needed in a chopper plasma cutting power supply. Because of the high power rating, the losses of the magnetic components may reach several kilowatts, which seriously affects the conversion efficiency. Therefore, it is necessary to research and design low-loss magnetic components by means of efficient magnetic materials and optimal design methods. The first task in this paper is to compare the core loss of different magnetic materials and to analyze the influence of transformer structure, winding arrangement, and wire structure on the characteristics of the magnetic components. The second task is to select suitable magnetic material, structure, and wire in order to reduce the loss and volume of the magnetic components. Based on the above, the optimization design process for the transformer and DC reactor in a chopper plasma cutting power supply is proposed, with a number of candidate solutions. These solutions are analyzed and compared before determination of the optimal solution, in order to reduce the volume and power loss of the two magnetic components and improve the conversion efficiency of the plasma cutting power supply.
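A first-order core-loss comparison of candidate materials can be sketched with the Steinmetz equation, Pv = k·f^α·B^β; the coefficients below are illustrative, not taken from the paper or any specific datasheet:

```python
def steinmetz_core_loss(k, alpha, beta, f_hz, b_peak_t, volume_m3):
    """Steinmetz equation: volumetric core loss k * f^alpha * B^beta
    (W/m^3), scaled by the core volume to give total loss in watts."""
    return k * (f_hz ** alpha) * (b_peak_t ** beta) * volume_m3

# Comparing two hypothetical materials at 20 kHz, 0.2 T, 100 cm^3 core:
ferrite = steinmetz_core_loss(k=1.0, alpha=1.6, beta=2.5,
                              f_hz=20e3, b_peak_t=0.2, volume_m3=1e-4)
powder = steinmetz_core_loss(k=20.0, alpha=1.3, beta=2.2,
                             f_hz=20e3, b_peak_t=0.2, volume_m3=1e-4)
print(f"ferrite: {ferrite:.1f} W, powder core: {powder:.1f} W")
```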
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Links, Jonathan M.; Frey, Eric
2015-03-01
In this work, we used the ideal observer (IO) and IO with model mismatch (IO-MM) applied in the projection domain and an anthropomorphic Channelized Hotelling Observer (CHO) applied to reconstructed images to optimize the acquisition energy window width and evaluate various scatter compensation methods in the context of a myocardial perfusion SPECT defect detection task. The IO has perfect knowledge of the image formation process and thus reflects performance with perfect compensation for image-degrading factors. Thus, using the IO to optimize imaging systems could lead to suboptimal parameters compared to those optimized for humans interpreting SPECT images reconstructed with imperfect or no compensation. The IO-MM allows incorporating imperfect system models into the IO optimization process. We found that with near-perfect scatter compensation, the optimal energy window for the IO and CHO were similar; in its absence the IO-MM gave a better prediction of the optimal energy window for the CHO using different scatter compensation methods. These data suggest that the IO-MM may be useful for projection-domain optimization when model mismatch is significant, and that the IO is useful when followed by reconstruction with good models of the image formation process.
Modifications to Improve Data Acquisition and Analysis for Camouflage Design
1983-01-01
terrains into facsimiles of the original scenes in 3, 4, or 5 colors in CIELAB notation. Tasks that were addressed included optimization of the... a histogram algorithm (HIST) was used as a first step in the clustering of the CIELAB values of the scene pixels. This algorithm is highly efficient... however, an optimal process and the CIELAB coordinates of the final color domains can be influenced by the color coordinate increments used in the
Introduction to SIMRAND: Simulation of research and development project
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1982-01-01
SIMRAND (SIMulation of Research ANd Development Projects) is a methodology developed to aid the engineering and management decision process in the selection of the optimal set of systems or tasks to be funded on a research and development project. A project may have a set of systems or tasks under consideration for which the total cost exceeds the allocated budget. Other factors, such as personnel and facilities, may also enter as constraints. Thus the project's management must select, from among the complete set of systems or tasks under consideration, a partial set that satisfies all project constraints. The SIMRAND methodology uses the analytical techniques of probability theory and the decision analysis of management science, together with computer simulation, in the selection of this optimal partial set. The SIMRAND methodology is truly a management tool. It initially specifies the information that must be generated by the engineers, thus providing direction for the management of the engineers, and it ranks the alternatives according to the preferences of the decision makers.
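A toy rendition of the SIMRAND selection problem: enumerate the task subsets that fit the budget, simulate each subset's uncertain payoff, and rank by expected utility (costs, payoff ranges, and the uniform outcome model are invented):

```python
import itertools, random

tasks = {          # name: (cost, (low, high) uniform performance payoff)
    "A": (40, (50, 120)),
    "B": (25, (30, 70)),
    "C": (35, (20, 150)),
    "D": (20, (40, 60)),
}
BUDGET = 80

def expected_utility(subset, trials=10_000, rng=random.Random(7)):
    """Monte Carlo estimate of the subset's expected total payoff."""
    total = 0.0
    for _ in range(trials):
        total += sum(rng.uniform(*tasks[t][1]) for t in subset)
    return total / trials

feasible = [s for r in range(1, len(tasks) + 1)
            for s in itertools.combinations(tasks, r)
            if sum(tasks[t][0] for t in s) <= BUDGET]
best = max(feasible, key=expected_utility)
print("best feasible task set:", best)
```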
ERIC Educational Resources Information Center
Miller, Jeff; Ulrich, Rolf; Rolke, Bettina
2009-01-01
Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…
Attentional load and attentional boost: a review of data and theory.
Swallow, Khena M; Jiang, Yuhong V
2013-01-01
Both perceptual and cognitive processes are limited in capacity. As a result, attention is selective, prioritizing items and tasks that are important for adaptive behavior. However, a number of recent behavioral and neuroimaging studies suggest that, at least under some circumstances, increasing attention to one task can enhance performance in a second task (e.g., the attentional boost effect). Here we review these findings and suggest a new theoretical framework, the dual-task interaction model, that integrates these findings with current views of attentional selection. To reconcile the attentional boost effect with the effects of attentional load, we suggest that temporal selection results in a temporally specific enhancement across modalities, tasks, and spatial locations. Moreover, the effects of temporal selection may be best observed when the attentional system is optimally tuned to the temporal dynamics of incoming stimuli. Several avenues of research motivated by the dual-task interaction model are then discussed.
Attentional Load and Attentional Boost: A Review of Data and Theory
Swallow, Khena M.; Jiang, Yuhong V.
2013-01-01
Both perceptual and cognitive processes are limited in capacity. As a result, attention is selective, prioritizing items and tasks that are important for adaptive behavior. However, a number of recent behavioral and neuroimaging studies suggest that, at least under some circumstances, increasing attention to one task can enhance performance in a second task (e.g., the attentional boost effect). Here we review these findings and suggest a new theoretical framework, the dual-task interaction model, that integrates these findings with current views of attentional selection. To reconcile the attentional boost effect with the effects of attentional load, we suggest that temporal selection results in a temporally specific enhancement across modalities, tasks, and spatial locations. Moreover, the effects of temporal selection may be best observed when the attentional system is optimally tuned to the temporal dynamics of incoming stimuli. Several avenues of research motivated by the dual-task interaction model are then discussed. PMID:23730294
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate a larger number of tasks among a smaller number of robots so as to minimize the total processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
How age affects memory task performance in clinically normal hearing persons.
Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid
2017-05-01
The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment (MoCA)). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.
Brain-computer interface analysis of a dynamic visuo-motor task.
Logar, Vito; Belič, Aleš
2011-01-01
The area of brain-computer interfaces (BCIs) represents one of the more interesting fields in neurophysiological research, since it investigates the development of machines that perform different transformations of the brain's "thoughts" into certain pre-defined actions. Experimental studies have reported some successful implementations of BCIs; however, much of the field still remains unexplored. According to some recent reports, the phase coding of informational content is an important mechanism in the brain's function and cognition, and has the potential to explain various mechanisms of the brain's data transfer, but it has yet to be scrutinized in the context of brain-computer interfaces. Therefore, if the mechanism of phase coding is plausible, one should be able to extract the phase-coded content carried by brain signals using appropriate signal-processing methods. In our previous studies we have shown that by using a phase-demodulation-based signal-processing approach it is possible to decode some relevant information on the current motor action in the brain from electroencephalographic (EEG) data. In this paper the authors present a continuation of their previous work on the brain-information-decoding analysis of visuo-motor (VM) tasks. The present study shows that EEG data measured during more complex, dynamic visuo-motor (dVM) tasks carry enough information about the currently performed motor action to be successfully extracted using appropriate signal-processing and identification methods. The aim of this paper is therefore to present a mathematical model which, using the EEG measurements as its inputs, predicts the course of the wrist movements applied by each subject during the task in simulated or real time (BCI analysis). However, several modifications to the existing methodology are needed to achieve optimal decoding results and a real-time data-processing ability. The information extracted from the EEG could therefore be further used for the development of a closed-loop, non-invasive brain-computer interface. For this study two types of measurements were performed simultaneously during the subjects' performance of a dynamic visuo-motor task: the electroencephalographic (EEG) signals and the wrist movements. Wrist-movement predictions were computed using an EEG data-processing methodology of double brain-rhythm filtering, double phase demodulation and double principal component analysis (PCA), each with a separate set of parameters. For the movement-prediction model a fuzzy inference system was used. The results have shown that the EEG signals measured during the dVM tasks carry enough information about the subjects' wrist movements for them to be successfully decoded using the presented methodology. Reasonably high values of the correlation coefficients suggest that the validation of the proposed approach is satisfactory. Moreover, since causality of the rhythm filtering and the PCA transformation has been achieved, we have shown that these methods can also be used in a real-time brain-computer interface. The study revealed that using non-causal, optimized methods yields better prediction results in comparison with the causal, non-optimized methodology; however, taking into account that the causality of these methods allows real-time processing, the minor decrease in prediction quality is acceptable.
The study suggests that the methodology that was proposed in our previous studies is also valid for identifying the EEG-coded content during dVM tasks, albeit with various modifications, which allow better prediction results and real-time data processing. The results have shown that wrist movements can be predicted in simulated or real time; however, the results of the non-causal, optimized methodology (simulated) are slightly better. Nevertheless, the study has revealed that these methods should be suitable for use in the development of a non-invasive, brain-computer interface. Copyright © 2010 Elsevier B.V. All rights reserved.
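A minimal sketch of the phase-demodulation idea on synthetic data: isolate a rhythm, take the Hilbert instantaneous phase, and remove the carrier. Note that filtfilt and hilbert are non-causal, echoing the causal/non-causal trade-off discussed above; the band edges and the synthetic signal are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_demodulate(eeg, fs, band=(8.0, 12.0)):
    """Band-pass to one brain rhythm, extract its instantaneous phase via
    the Hilbert transform, and subtract the carrier phase, leaving the
    slow phase modulation."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    rhythm = filtfilt(b, a, eeg)
    phase = np.unwrap(np.angle(hilbert(rhythm)))
    f_c = 0.5 * (band[0] + band[1])              # assumed carrier frequency
    t = np.arange(len(eeg)) / fs
    return phase - 2 * np.pi * f_c * t           # demodulated phase

fs = 256
t = np.arange(0, 4, 1 / fs)
# Synthetic alpha rhythm whose phase is modulated by a slow "movement" signal.
movement = 0.8 * np.sin(2 * np.pi * 0.5 * t)
noise = 0.3 * np.random.default_rng(0).standard_normal(len(t))
eeg = np.sin(2 * np.pi * 10 * t + movement) + noise
demod = phase_demodulate(eeg, fs)
print(np.corrcoef(movement, demod)[0, 1])  # high: modulation is recovered
```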
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging task scheduling problem for a high-altitude airship under emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems: task ranking, value task detection, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their solving results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which can rationally adjust the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
Ordering Design Tasks Based on Coupling Strengths
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Bloebaum, C. L.
1994-01-01
The design process associated with large engineering systems requires an initial decomposition of the complex system into modules of design tasks which are coupled through the transference of output data. In analyzing or optimizing such a coupled system, it is essential to be able to determine which interactions figure prominently enough to significantly affect the accuracy of the system solution. Many decomposition approaches assume the capability is available to determine what design tasks and interactions exist and what order of execution will be imposed during the analysis process. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature for DeMAID (Design Manager's Aid for Intelligent Decomposition) will allow the design manager to use coupling strength information to find a proper sequence for ordering the design tasks. In addition, these coupling strengths aid in deciding if certain tasks or couplings could be removed (or temporarily suspended) from consideration to achieve computational savings without a significant loss of system accuracy. New rules are presented and two small test cases are used to show the effects of using coupling strengths in this manner.
Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Stanford, Bret K.
2017-01-01
Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced-order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the structural design of the Common Research Model wing. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
Affordable Development and Optimization of CERMET Fuels for NTP Ground Testing
NASA Technical Reports Server (NTRS)
Hickman, Robert R.; Broadway, Jeramie W.; Mireles, Omar R.
2014-01-01
CERMET fuel materials for Nuclear Thermal Propulsion (NTP) are currently being developed at NASA's Marshall Space Flight Center. The work is part of NASA's Advanced Space Exploration Systems Nuclear Cryogenic Propulsion Stage (NCPS) Project. The goal of the FY12-14 project is to address critical NTP technology challenges and programmatic issues to establish confidence in the affordability and viability of an NTP system. A key enabling technology for an NCPS system is the fabrication of a stable high-temperature nuclear fuel form. Although much of the technology was demonstrated during previous programs, there are currently no qualified fuel materials or processes. The work at MSFC is focused on developing critical materials and process technologies for manufacturing robust, full-scale CERMET fuels. Prototypical samples are being fabricated and tested in flowing hot hydrogen to understand processing and performance relationships. As part of this initial demonstration task, a final full-scale element test will be performed to validate robust designs. The next phase of the project will focus on continued development and optimization of the fuel materials to enable future ground testing. The purpose of this paper is to provide a detailed overview of the CERMET fuel materials development plan. The overall CERMET fuel development path is shown in Figure 2. The activities begin prior to ATP for a ground reactor or engine system test and include materials and process optimization, hot hydrogen screening, material property testing, and irradiation testing. The goal of the development is to increase the maturity of the fuel form and reduce risk. One of the main accomplishments of the current AES FY12-14 project was to develop dedicated laboratories at MSFC for the fabrication and testing of full-length fuel elements. This capability will enable affordable, near-term development and optimization of the CERMET fuels for future ground testing. Figure 2 provides a timeline of the development and optimization tasks for the AES FY15-17 follow-on program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhlouf M. Makhlouf; Diran Apelian
The objective of this project is to develop a technology for clean metal processing that is capable of consistently providing a metal cleanliness level that is fit for a given application. The program has five tasks: development of melt cleanliness assessment technology, development of melt contamination avoidance technology, development of high temperature phase separation technology, establishment of a correlation between the level of melt cleanliness and as-cast mechanical properties, and transfer of technology to the industrial sector. Within the context of the first task, WPI has developed a standardized Reduced Pressure Test that has been endorsed by AFS as a recommended practice. In addition, within the context of Task 1, WPI has developed a melt cleanliness sensor based on the principles of electromagnetic separation. An industrial partner is commercializing the sensor. Within the context of the second task, WPI has developed environmentally friendly fluxes that do not contain fluorine. Within the context of the third task, WPI modeled the process of rotary degassing and verified the model predictions with experimental data. This model may be used to optimize the performance of industrial rotary degassers. Within the context of the fourth task, WPI has correlated the level of melt cleanliness at various foundries, including a sand casting foundry, a permanent mold casting foundry, and a die casting foundry, to the casting process and the resultant mechanical properties. This is useful in tailoring the melt cleansing operations at foundries to the particular casting process and the desired properties of cast components.
The director task: A test of Theory-of-Mind use or selective attention?
Rubio-Fernández, Paula
2017-08-01
Over two decades, the director task has increasingly been employed as a test of the use of Theory of Mind in communication, first in psycholinguistics and more recently in social cognition research. A new version of this task was designed to test two independent hypotheses. First, optimal performance in the director task, as established by the standard metrics of interference, is possible by using selective attention alone, and not necessarily Theory of Mind. Second, pragmatic measures of Theory-of-Mind use can reveal that people actively represent the director's mental states, contrary to recent claims that they only use domain-general cognitive processes to perform this task. The results of this study support both hypotheses and provide a new interactive paradigm to reliably test Theory-of-Mind use in referential communication.
Machine Learning Techniques in Optimal Design
NASA Technical Reports Server (NTRS)
Cerbone, Giuseppe
1992-01-01
Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for the engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.
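The baseline the paper starts from, numerical optimization whose outcome depends on the starting point, can be sketched as a plain multi-start search that runs a local optimizer from several initial guesses and keeps the cheapest result. The toy objective below is illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def multistart(objective, starts):
    """Run a local optimizer from several starting points and keep the best
    result -- the baseline behaviour the paper improves on by compiling
    specialized objectives and selection rules."""
    results = [minimize(objective, x0, method="BFGS") for x0 in starts]
    return min(results, key=lambda r: r.fun)

# A toy multimodal "structure weight": distinct starts find distinct optima.
f = lambda x: (x[0] ** 2 - 4) ** 2 + 0.5 * x[0] + x[1] ** 2
best = multistart(f, starts=[np.array([-3.0, 1.0]), np.array([3.0, 1.0])])
print(best.x, best.fun)
```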
Shape design of internal cooling passages within a turbine blade
NASA Astrophysics Data System (ADS)
Nowak, Grzegorz; Nowak, Iwona
2012-04-01
The article concerns the optimization of the shape and location of non-circular passages cooling a gas turbine blade. To model the shape, four Bezier curves forming a closed profile of the passage were used. In order to match the shape of the passage to the blade profile, a technique was put forward to copy and scale fragments of the blade profile into the component and build the outline of the passage on the basis of them. For the cooling passages defined in this way, optimization calculations were carried out with a view to finding their optimal shape and location in terms of the assumed objectives. The task was solved as a multi-objective problem with the use of the Pareto method, for a cooling system composed of four and five passages. The tool employed for the optimization was an evolutionary algorithm. The article presents the impact of the population size on the convergence of the task, and discusses the impact of different optimization objectives on the Pareto-optimal solutions obtained. Because it was noticed during the calculations that the individual objectives affect the position of the solution front differently, a two-step optimization procedure was introduced. Comparative optimization calculations for a scalar objective function were also carried out and set against the non-dominated solutions obtained with the Pareto approach. The optimization process resulted in a configuration of the cooling system that allows a significant reduction in the temperature of the blade and its thermal stress.
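A closed passage outline of the kind described, four Bezier segments whose endpoints are shared so the profile closes, can be sketched as follows; the parameterization is generic, not the authors' exact construction:

```python
import numpy as np

def bezier(ctrl, t):
    """Evaluate a cubic Bezier curve at parameters t (ctrl: 4x2 array)."""
    t = t[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def closed_profile(segments, samples=50):
    """Chain four cubic Bezier segments into one closed passage outline.
    Closure requires each segment's last control point to equal the next
    segment's first; an optimizer would treat the free control points as
    the design variables."""
    t = np.linspace(0.0, 1.0, samples)
    return np.vstack([bezier(np.asarray(s, float), t) for s in segments])
```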
Huang, Cheng-Ya; Chang, Gwo-Ching; Tsai, Yi-Ying; Hwang, Ing-Shiou
2016-01-01
Increase in postural-demand resources does not necessarily degrade a concurrent motor task, according to the adaptive resource-sharing hypothesis of postural-suprapostural dual-tasking. This study investigated how brain networks are organized to optimize a suprapostural motor task when the postural load increases and shifts postural control into a less automatic process. Fourteen volunteers executed a designated force-matching task from a level surface (a relative automatic process in posture) and from a stabilometer board while maintaining balance at a target angle (a relatively controlled process in posture). Task performance of the postural and suprapostural tasks, synchronization likelihood (SL) of scalp EEG, and graph-theoretical metrics were assessed. Behavioral results showed that the accuracy and reaction time of force-matching from a stabilometer board were not affected, despite a significant increase in postural sway. However, force-matching in the stabilometer condition showed greater local and global efficiencies of the brain networks than force-matching in the level-surface condition. Force-matching from a stabilometer board was also associated with greater frontal cluster coefficients, greater mean SL of the frontal and sensorimotor areas, and smaller mean SL of the parietal-occipital cortex than force-matching from a level surface. The contrast of supra-threshold links in the upper alpha and beta bands between the two stance conditions validated load-induced facilitation of inter-regional connections between the frontal and sensorimotor areas, but that contrast also indicated connection suppression between the right frontal-temporal and the parietal-occipital areas for the stabilometer stance condition. In conclusion, an increase in stance difficulty alters the neurocognitive processes in executing a postural-suprapostural task. Suprapostural performance is not degraded by increase in postural load, due to (1) increased effectiveness of information transfer, (2) an anterior shift of processing resources toward frontal executive function, and (3) cortical dissociation of control hubs in the parietal-occipital cortex for neural economy. PMID:27594830
Task driven optimal leg trajectories in insect-scale legged microrobots
NASA Astrophysics Data System (ADS)
Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert
Origami-inspired layered manufacturing techniques and 3D printing have enabled the development of highly articulated legged robots at the insect scale, including the 1.43 g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks. Consequently, the choices of gait, body morphology, leg trajectory, foot design, etc. have become areas of active research. HAMR has two controlled degrees of freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level-ground running (5-10 BL/s). These tasks demonstrate the ability of a single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with uncontrolled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and the choice of objective function have contributed to the shape of these optimal leg trajectories.
Service Bundle Recommendation for Person-Centered Care Planning in Cities.
Kotoulas, Spyros; Daly, Elizabeth; Tommasi, Pierpaolo; Kishimoto, Akihiro; Lopez, Vanessa; Stephenson, Martin; Botea, Adi; Sbodio, Marco; Marinescu, Radu; Rooney, Ronan
2016-01-01
Providing appropriate support for the most vulnerable individuals carries enormous societal significance and economic burden. Yet, finding the right balance between costs, estimated effectiveness and the experience of the care recipient is a daunting task that requires considering a vast amount of information. We present a system that helps care teams choose the optimal combination of providers for a set of services. We draw from techniques in Open Data processing, semantic processing, faceted exploration, visual analytics, transportation analytics and multi-objective optimization. We present an implementation of the system using data from New York City and illustrate the feasibility of these technologies to guide care workers in care planning.
An automated model-based aim point distribution system for solar towers
NASA Astrophysics Data System (ADS)
Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen
2016-05-01
Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
Attention control learning in the decision space using state estimation
NASA Astrophysics Data System (ADS)
Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid
2016-05-01
The main goal of this paper is to model attention and use it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor, real-world robotic application. To recognise the environment efficiently under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method that combines mixture-of-experts task learning with attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, a static estimation of the state space in a learning task, examined in the WebotsTM simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
Strategic Adaptation to Task Characteristics, Incentives, and Individual Differences in Dual-Tasking
Janssen, Christian P.; Brumby, Duncan P.
2015-01-01
We investigate how good people are at multitasking by comparing behavior to a prediction of the optimal strategy for dividing attention between two concurrent tasks. In our experiment, 24 participants had to interleave entering digits on a keyboard with controlling a randomly moving cursor with a joystick. The difficulty of the tracking task was systematically varied as a within-subjects factor. Participants were also exposed to different explicit reward functions that varied the relative importance of the tracking task relative to the typing task (between-subjects). Results demonstrate that these changes in task characteristics and monetary incentives, together with individual differences in typing ability, influenced how participants choose to interleave tasks. This change in strategy then affected their performance on each task. A computational cognitive model was used to predict performance for a wide set of alternative strategies for how participants might have possibly interleaved tasks. This allowed for predictions of optimal performance to be derived, given the constraints placed on performance by the task and cognition. A comparison of human behavior with the predicted optimal strategy shows that participants behaved near optimally. Our findings have implications for the design and evaluation of technology for multitasking situations, as consideration should be given to the characteristics of the task, but also to how different users might use technology depending on their individual characteristics and their priorities. PMID:26161851
Optimizing sterilization logistics in hospitals.
van de Klundert, Joris; Muls, Philippe; Schadd, Maarten
2008-03-01
This paper deals with the optimization of the flow of sterile instruments in hospitals, which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut costs by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, the use of visibility information, and optimizing the composition of the nets of sterile materials.
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
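Since the FFM coefficients are found with CMA-ES, the optimization loop has the generic shape of an evolution strategy. The sketch below substitutes a much simpler (1+λ) strategy for CMA-ES and assumes an objective callable that returns the maxi-min detectability for a candidate coefficient vector; it shows the loop structure only, not the paper's implementation:

```python
import numpy as np

def es_optimize(objective, x0, sigma=0.1, pop=20, iters=100, seed=0):
    """A deliberately simplified (1+lambda) evolution strategy standing in
    for CMA-ES: sample Gaussian perturbations of the current FFM basis
    coefficients and keep the best child if it improves. `objective` is
    assumed to return the maxi-min detectability (min d' over sample
    locations), to be maximized; its internals (reconstruction model plus
    d' prediction) are taken as given here."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = objective(x)
    for _ in range(iters):
        children = x + sigma * rng.standard_normal((pop, x.size))
        children = np.clip(children, 0.0, None)  # fluence must be nonnegative
        scores = np.array([objective(c) for c in children])
        if scores.max() > fx:
            x, fx = children[scores.argmax()], scores.max()
    return x, fx
```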
Application of genetic algorithm in integrated setup planning and operation sequencing
NASA Astrophysics Data System (ADS)
Kafashi, Sajad; Shakeri, Mohsen
2011-01-01
Process planning is an essential component for linking design and manufacturing processes. Setup planning and operation sequencing are two main tasks in process planning. Many studies have solved these two problems separately. Considering that the two functions are complementary, it is necessary to integrate them more tightly so that the performance of a manufacturing system can be improved economically and competitively. This paper presents a generative system and a genetic algorithm (GA) approach to process plan the given part. The proposed approach and optimization methodology analyse the TAD (tool approach direction), the tolerance relations between features, and the feature precedence relations to generate all possible setups and operations using a workshop resource database. Based on these technological constraints, the GA approach, which adopts a feature-based representation, optimizes the setup plan and the sequence of operations using cost indices. A case study shows that the developed system can generate satisfactory results, optimizing setup planning and operation sequencing simultaneously under feasible conditions.
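One common way to keep a GA sequence precedence-feasible, shown here as a hedged sketch rather than the paper's exact encoding, is to evolve real-valued priority vectors and decode each into a sequence by always scheduling the highest-priority ready operation:

```python
import numpy as np

def decode(priority, preds):
    """Turn a priority vector into a precedence-feasible operation sequence:
    always schedule the ready operation (all predecessors done) with the
    highest priority. preds[i] is the set of predecessors of operation i."""
    n, done, seq = len(priority), set(), []
    while len(seq) < n:
        ready = [i for i in range(n) if i not in done and preds[i] <= done]
        nxt = max(ready, key=lambda i: priority[i])
        seq.append(nxt)
        done.add(nxt)
    return seq

def ga_sequence(cost, preds, pop=40, gens=200, seed=0):
    """Tiny GA over priority vectors; cost(seq) is a user-supplied index
    (e.g. weighted setup and tool changes), to be minimized. A sketch of
    the idea only, not the paper's encoding or operators."""
    rng = np.random.default_rng(seed)
    n = len(preds)
    P = rng.random((pop, n))
    for _ in range(gens):
        fit = np.array([cost(decode(p, preds)) for p in P])
        P = P[np.argsort(fit)]              # elitist sort, best first
        for k in range(pop // 2, pop):      # refill the worst half
            a = P[rng.integers(pop // 2)]
            b = P[rng.integers(pop // 2)]
            child = np.where(rng.random(n) < 0.5, a, b)  # uniform crossover
            P[k] = child + 0.1 * rng.standard_normal(n)  # mutation
    return decode(P[0], preds)

# e.g. preds = [set(), {0}, {0}, {1, 2}] for a four-operation part.
```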
Automated Guidance from Physiological Sensing to Reduce Thermal-Work Strain Levels on a Novel Task
USDA-ARS?s Scientific Manuscript database
This experiment demonstrated that automated pace guidance generated from real-time physiological monitoring allowed least stressful completion of a timed (60 minute limit) 5 mile treadmill exercise. An optimal pacing policy was estimated from a Markov decision process that balanced the goals of the...
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
2012-07-01
detection only condition followed either face detection only or dual task, thus ensuring that participants were practiced in face detection before...
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed... Cache Miss Minimization Technology: To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended our... it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. Firstly, a cloud computing task scheduling model is constructed and a fitness function is defined for it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. A performance test experiment was carried out on the CloudSim simulation platform; the experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, giving a good implementation of optimal scheduling of cloud computing tasks.
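For reference, a plain DE/rand/1/bin scheduler without the paper's dynamic selection and mutation refinements looks as follows; each vector position encodes, after flooring, the VM assigned to a task, and the fitness is the makespan:

```python
import numpy as np

def de_schedule(task_len, vm_speed, pop=30, gens=300, F=0.5, CR=0.9, seed=0):
    """Plain DE/rand/1/bin for cloud task scheduling -- a baseline sketch,
    not the paper's improved variant. Position j of a vector encodes
    (after flooring) the VM for task j; fitness is the makespan."""
    rng = np.random.default_rng(seed)
    task_len = np.asarray(task_len, float)
    vm_speed = np.asarray(vm_speed, float)
    n, m = len(task_len), len(vm_speed)

    def makespan(x):
        vm = np.minimum(x.astype(int), m - 1)
        load = np.zeros(m)
        np.add.at(load, vm, task_len / vm_speed[vm])  # per-VM finish times
        return load.max()

    X = rng.random((pop, n)) * m
    f = np.array([makespan(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0, m - 1e-9)
            trial = np.where(rng.random(n) < CR, mutant, X[i])
            ft = makespan(trial)
            if ft <= f[i]:           # greedy one-to-one selection
                X[i], f[i] = trial, ft
    return X[f.argmin()].astype(int), f.min()
```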
Albonico, Andrea; Malaspina, Manuela; Bricolo, Emanuela; Martelli, Marialuisa; Daini, Roberta
2016-11-01
Selective attention, i.e. the ability to concentrate one's limited processing resources on one aspect of the environment, is a multifaceted concept that includes different processes like spatial attention and its subcomponents of orienting and focusing. Several studies, indeed, have shown that visual tasks performance is positively influenced not only by attracting attention to the target location (orientation component), but also by the adjustment of the size of the attentional window according to task demands (focal component). Nevertheless, the relative weight of the two components in central and peripheral vision has never been studied. We conducted two experiments to explore whether different components of spatial attention have different effects in central and peripheral vision. In order to do so, participants underwent either a detection (Experiment 1) or a discrimination (Experiment 2) task where different types of cues elicited different components of spatial attention: a red dot, a small square and a big square (an optimal stimulus for the orientation component, an optimal and a sub-optimal stimulus for the focal component respectively). Response times and cue-size effects indicated a stronger effect of the small square or of the dot in different conditions, suggesting the existence of a dissociation in terms of mechanisms between the focal and the orientation components of spatial attention. Specifically, we found that the orientation component was stronger in periphery, while the focal component was noticeable only in central vision and characterized by an exogenous nature. Copyright © 2016 Elsevier B.V. All rights reserved.
Levin, Yulia; Tzelgov, Joseph
2016-02-01
A contingency learning account of the item-specific proportion congruent effect has been described as an associative stimulus-response learning process that has nothing to do with controlling the Stroop conflict. As supportive evidence, contingency learning has been demonstrated with response conflict-free stimuli, such as neutral words. However, what gives rise to response conflict and to Stroop interference in general is task conflict. The present study investigated whether task conflict can constitute a trigger or, alternatively, a booster to the contingency learning process. This was done by employing a "task conflict-free" condition (i.e., geometric shapes) and comparing it with a "task conflict" condition (i.e., neutral words). The results showed a significant contingency learning effect in both conditions, refuting the possibility that contingency learning is triggered by the presence of a task conflict. Contingency learning was also not enhanced by the task conflict experience, indicating its complete insensitivity to Stroop conflict(s). Thus, the results showed no evidence that performance optimization as a result of contingency learning is greater under conflict, implying that contingency learning is not recruited to assist the control system to overcome conflict. Copyright © 2015 Elsevier B.V. All rights reserved.
Higgins, Paul; Searchfield, Grant; Coad, Gavin
2012-06-01
The aim of this study was to determine which level-dependent hearing aid digital signal-processing strategy (DSP) participants preferred when listening to music and/or performing a speech-in-noise task. Two receiver-in-the-ear hearing aids were compared: one using 32-channel adaptive dynamic range optimization (ADRO) and the other wide dynamic range compression (WDRC) incorporating dual fast (4 channel) and slow (15 channel) processing. The manufacturers' first-fit settings based on participants' audiograms were used in both cases. Results were obtained from 18 participants on a quick speech-in-noise (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) task and for 3 music listening conditions (classical, jazz, and rock). Participants preferred the quality of music and performed better at the QuickSIN task using the hearing aids with ADRO processing. A potential reason for the better performance of the ADRO hearing aids was less fluctuation in output with change in sound dynamics. ADRO processing has advantages for both music quality and speech recognition in noise over the multichannel WDRC processing that was used in the study. Further evaluations of which DSP aspects contribute to listener preference are required.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take several hours or days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor that may impact the feasibility of the parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost of each computing node to minimize the total execution time and reduce the overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) In order to get optimized solutions, a quadratic-programming-based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the numbers of subdomains and computing nodes increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means-clustering-based algorithm is introduced. Instead of being dedicated to finding optimized solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
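The second, K-Means-based allocation reduces, in its simplest form, to clustering subdomain centroids so that each node receives a geographically contiguous group, keeping most halo exchanges node-local. The sketch below implements plain Lloyd's k-means for that purpose and omits the load-balancing refinements discussed above:

```python
import numpy as np

def kmeans_allocate(centers, n_nodes, iters=50, seed=0):
    """Cluster subdomain centroids with plain Lloyd's k-means so that each
    computing node receives a geographically contiguous group of
    subdomains. centers: (n_subdomains, 2) lon/lat centroids.
    Returns label[i] = node assigned to subdomain i."""
    rng = np.random.default_rng(seed)
    centers = np.asarray(centers, float)
    mu = centers[rng.choice(len(centers), n_nodes, replace=False)]
    for _ in range(iters):
        d = ((centers[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        label = d.argmin(1)
        for k in range(n_nodes):
            if (label == k).any():          # keep empty clusters unchanged
                mu[k] = centers[label == k].mean(0)
    return label
```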
Predicting explorative motor learning using decision-making and motor noise.
Chen, Xiuli; Mohr, Kieran; Galea, Joseph M
2017-04-01
A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant's level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning.
Predicting explorative motor learning using decision-making and motor noise
Galea, Joseph M.
2017-01-01
A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant’s level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning. PMID:28437451
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
Implications of holistic face processing in autism and schizophrenia
Watson, Tamara L.
2013-01-01
People with autism and schizophrenia have been shown to have a local bias in sensory processing and face recognition difficulties. A global or holistic processing strategy is known to be important when recognizing faces. Studies investigating face recognition in these populations are reviewed and show that holistic processing is employed despite lower overall performance in the tasks used. This implies that holistic processing is necessary but not sufficient for optimal face recognition and new avenues for research into face recognition based on network models of autism and schizophrenia are proposed. PMID:23847581
A Parallel Pipelined Renderer for the Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Chiueh, Tzi-Cker; Ma, Kwan-Liu
1997-01-01
This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.
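The trade-off between resource utilization and pipeline startup can be conveyed by a toy cost model: with g groups, each group of P/g processors renders its share of the V volumes at a rate limited by the slower of I/O and rendering. A hedged sketch that ignores pipeline fill latency and uneven remainders:

```python
import math

def best_partition(P, V, render_time, io_time):
    """Toy model for choosing how many pipeline groups to form: g groups of
    P//g processors each render V volumes; per-volume I/O overlaps with
    rendering, so each group's throughput is limited by the slower of the
    two. render_time(p) gives one volume's render time on p processors.
    A sketch of the trade-off only, not the paper's cost model."""
    best = None
    for g in range(1, P + 1):
        if P % g:
            continue                      # only even processor splits
        per_volume = max(io_time, render_time(P // g))
        total = math.ceil(V / g) * per_volume
        if best is None or total < best[1]:
            best = (g, total)
    return best  # (number of groups, estimated completion time)

# e.g. best_partition(64, 100, render_time=lambda p: 80.0 / p + 0.5,
#                     io_time=2.0)
```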
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Optimization of the production process using virtual model of a workspace
NASA Astrophysics Data System (ADS)
Monica, Z.
2015-11-01
Optimization of the production process is an element of the design cycle consisting of problem definition, modelling, simulation, optimization and implementation. Without the use of simulation techniques, the only thing that can be achieved is a larger or smaller improvement of the process, not its optimization (i.e., the best result that can be obtained for the conditions under which the process works). Optimization generally comprises management actions that ultimately bring savings in time, resources and raw materials and improve the performance of a specific process, regardless of whether it is a service or a manufacturing process. The savings are generated by improving and increasing the efficiency of the processes. Optimization consists primarily of organizational activities that require very little investment or rely solely on changing the organization of work. Modern companies operating in a market economy show a significant increase in interest in modern methods of production management and services. This trend is due to the high competitiveness among companies, which, in order to succeed, are forced to continually modify the ways they manage and to respond flexibly to changing demand. Modern methods of production management not only imply a stable position of the company in its sector, but also influence the improvement of health and safety within the company and contribute to the implementation of more efficient rules for the standardization of work. This is why the paper presents the application of an environment such as Siemens NX to create a virtual model of a production system and to simulate as well as optimize its work. The analyzed system is a robotized workcell consisting of machine tools, industrial robots, conveyors, auxiliary equipment and buffers. The control program realizing the main task in the virtual workcell can be defined in this environment. Using this tool, it is possible to optimize both the object trajectory and the cooperation process.
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem, and the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
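For the special case of independent tasks, the flavor of these algorithms can be conveyed by a small dynamic program that allocates processors so every stage meets a throughput bound while the summed response time is minimized; the series-parallel O(np²) algorithm of the paper is more general than this sketch:

```python
def assign_processors(resp, P, T):
    """DP for the independent-task case: resp[i][k] is task i's measured
    response time on k+1 processors. Allocate at most P processors so that
    every stage meets the throughput bound (stage time <= T) while the
    summed (pipeline) response time is minimized.
    Returns (best total response, per-task allocation); the total is
    infinity if the throughput requirement cannot be met."""
    n = len(resp)
    INF = float("inf")
    dp = [[INF] * (P + 1) for _ in range(n + 1)]
    choice = [[0] * (P + 1) for _ in range(n + 1)]
    dp[0] = [0.0] * (P + 1)                 # zero tasks: zero response
    for i in range(1, n + 1):
        for p in range(P + 1):
            for k in range(1, p + 1):       # give task i-1 exactly k procs
                r = resp[i - 1][k - 1]
                if r <= T and dp[i - 1][p - k] + r < dp[i][p]:
                    dp[i][p] = dp[i - 1][p - k] + r
                    choice[i][p] = k
    alloc, p = [], P
    for i in range(n, 0, -1):               # trace back the allocation
        alloc.append(choice[i][p])
        p -= choice[i][p]
    return dp[n][P], alloc[::-1]
```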
NASA Technical Reports Server (NTRS)
1976-01-01
In the conceptual design task, several feasible wind generator system (WGS) configurations were evaluated, and the concept offering the lowest energy cost potential and minimum technical risk for utility applications was selected. In the optimization task, the selected concept was optimized utilizing a parametric computer program prepared for this purpose. In the preliminary design task, the optimized selected concept was designed and analyzed in detail. The utility requirements evaluation task examined the economic, operational, and institutional factors affecting the WGS in a utility environment, and provided additional guidance for the preliminary design effort. Results of the conceptual design task indicated that a rotor operating at constant speed, driving an AC generator through a gear transmission, is the most cost-effective WGS configuration. The optimization task results led to the selection of a 500 kW rating for the low-power WGS and a 1500 kW rating for the high-power WGS.
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. The process of rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementations. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, split into tasks that the FPGA can perform.
Systematic review automation technologies
2014-01-01
Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefits for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing the most to the signal power associated with the imaging task. For example, detectability in a line-pair detection task was improved by at least a factor of three compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations at the spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
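For reference, the non-prewhitening (NPW) detectability index that typically serves as the objective in this line of work has a compact discrete form. A minimal sketch, assuming the MTF, NPS, and task function are already sampled on a common frequency grid (the cascaded-systems modeling that produces them is not reproduced here):

```python
import numpy as np

def npw_detectability(mtf, nps, w_task, df):
    """Discrete NPW observer: d'^2 = [sum (MTF*W)^2 df]^2 / sum (MTF*W)^2 NPS df.

    mtf, nps, w_task: arrays on the same frequency grid; df: grid cell area.
    """
    signal2 = (mtf * w_task) ** 2          # squared expected signal spectrum
    num = (signal2.sum() * df) ** 2
    den = (signal2 * nps).sum() * df
    return np.sqrt(num / den)
```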
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
Joint optimization of fluence field modulation and regularization in task-driven computed tomography
NASA Astrophysics Data System (ADS)
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-03-01
Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
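A minimal sketch of the maxi-min search loop these two records describe, using the open-source cma package's ask/tell interface. The detectability predictor and the Gaussian-basis fluence model are toy stand-ins for the papers' cascaded-systems predictor, so every numeric detail here is illustrative.

```python
import numpy as np
import cma

n_basis, n_locations = 8, 5
rng = np.random.default_rng(0)
basis_response = rng.uniform(0.5, 1.5, (n_locations, n_basis))  # toy model

def dprime(coeffs, loc):
    # Toy surrogate: detectability rises with locally delivered fluence but is
    # penalized by total dose; stands in for the papers' d' predictor.
    return basis_response[loc] @ coeffs / (1.0 + 0.1 * coeffs.sum())

def neg_min_dprime(coeffs):
    # Maxi-min objective: CMA-ES minimizes, so return the negated minimum d'.
    return -min(dprime(coeffs, loc) for loc in range(n_locations))

# Coefficients of the 2D Gaussian FFM basis, constrained non-negative.
es = cma.CMAEvolutionStrategy(np.ones(n_basis), 0.3, {"bounds": [0, None]})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [neg_min_dprime(c) for c in candidates])
best_coeffs = es.result.xbest
```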
USHPRR FUEL FABRICATION PILLAR: FABRICATION STATUS, PROCESS OPTIMIZATIONS, AND FUTURE PLANS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, Jared M.; Joshi, Vineet V.; Lavender, Curt A.
The Fuel Fabrication (FF) Pillar, a project within the U.S. High Performance Research Reactor Conversion program of the National Nuclear Security Administration’s Office of Material Management and Minimization, is tasked with the scale-up and commercialization of high-density monolithic U-Mo fuel for the conversion of appropriate research reactors to low-enriched fuel. The FF Pillar has taken significant steps toward demonstrating and optimizing the baseline co-rolling process using commercial-scale equipment at both the Y-12 National Security Complex (Y-12) and BWX Technologies (BWXT). These demonstrations include the fabrication of the next irradiation experiment, Mini-Plate 1 (MP-1), and casting optimizations at Y-12. The FF Pillar uses a detailed process flow diagram to identify potential gaps in processing knowledge or demonstration, which helps direct the strategic research agenda of the FF Pillar. This paper describes the significant progress made toward understanding the fuel characteristics, and the models developed to make informed decisions, increase process yield, and decrease lifecycle waste and costs.
On scheduling task systems with variable service times
NASA Astrophysics Data System (ADS)
Maset, Richard G.; Banawan, Sayed A.
1993-08-01
Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
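Critical path scheduling, the baseline both strategies extend, can be stated compactly: rank each task by its longest path to the exit, then dispatch ready tasks in rank order as processors free up. A minimal deterministic-time sketch (the very assumption the study shows to be fragile under high service-time variability):

```python
import heapq

def critical_path_priority(succ, duration):
    """Longest path (including own duration) from each task to the DAG exit."""
    prio = {}
    def longest(t):
        if t not in prio:
            prio[t] = duration[t] + max((longest(s) for s in succ.get(t, [])), default=0)
        return prio[t]
    for t in duration:
        longest(t)
    return prio

def list_schedule(succ, duration, m):
    """Greedy list scheduling of a task DAG on m identical processors."""
    prio = critical_path_priority(succ, duration)
    pred = {t: 0 for t in duration}
    for ss in succ.values():
        for s in ss:
            pred[s] += 1
    ready = [(-prio[t], t) for t in duration if pred[t] == 0]
    heapq.heapify(ready)
    running, time, finish = [], 0.0, {}
    while ready or running:
        while ready and len(running) < m:       # fill free processors by priority
            _, t = heapq.heappop(ready)
            heapq.heappush(running, (time + duration[t], t))
        time, t = heapq.heappop(running)        # advance to the next completion
        finish[t] = time
        for s in succ.get(t, []):
            pred[s] -= 1
            if pred[s] == 0:
                heapq.heappush(ready, (-prio[s], s))
    return finish                                # makespan = max(finish.values())
```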
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
A flexible manufacturing system (FMS) enhances a firm's flexibility and responsiveness to ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of the scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV, as a mobile robot, provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single-objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and the number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' schedules before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and the number of AGVs. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms: the mean AGV operation efficiency was found to be 69.4, 74, and 79.8 percent for PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.
Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
A flexible manufacturing system (FMS) enhances a firm’s flexibility and responsiveness to ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of the scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV, as a mobile robot, provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single-objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and the number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ schedules before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and the number of AGVs. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms: the mean AGV operation efficiency was found to be 69.4, 74, and 79.8 percent for PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software. PMID:28263994
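To make the decoding concrete, here is a toy random-key PSO of the kind used inside such hybrids: each particle is a vector of task priorities, decoded by sending each task, in priority order, to the earliest-available AGV. Travel times, battery constraints, and the GA crossover stage of the hybrid are all omitted, and the numbers are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
task_time = rng.uniform(2, 10, size=12)   # processing time per transport task
n_agv, n_particles, iters = 3, 20, 100

def makespan(keys):
    order = np.argsort(keys)              # decode priority keys to a task order
    free = np.zeros(n_agv)                # time at which each AGV becomes free
    for t in order:
        i = free.argmin()                 # earliest-available AGV gets the task
        free[i] += task_time[t]
    return free.max()

pos = rng.random((n_particles, task_time.size))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([makespan(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([makespan(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("best makespan:", pbest_f.min())
```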
Digital processing of the Mariner 10 images of Venus and Mercury
NASA Technical Reports Server (NTRS)
Soha, J. M.; Lynn, D. J.; Mosher, J. A.; Elliot, D. A.
1977-01-01
An extensive effort was devoted to the digital processing of the Mariner 10 images of Venus and Mercury at the Image Processing Laboratory of the Jet Propulsion Laboratory. This effort was designed to optimize the display of the considerable quantity of information contained in the images. Several image restoration, enhancement, and transformation procedures were applied; examples of these techniques are included. A particular task was the construction of large mosaics which characterize the surface of Mercury and the atmospheric structure of Venus.
Bayer image parallel decoding based on GPU
NASA Astrophysics Data System (ADS)
Hu, Rihui; Xu, Zhiyong; Wei, Yuxing; Sun, Shaohua
2012-11-01
In photoelectrical tracking systems, Bayer images are traditionally decompressed with a CPU-based method. However, this is too slow when the images become large, for example, 2K×2K×16-bit. In order to accelerate Bayer image decoding, this paper introduces a parallel speedup method for NVIDIA Graphics Processing Units (GPUs) supporting the CUDA architecture. The decoding procedure can be divided into three parts: the first is a serial part, the second is a task-parallel part, and the last is a data-parallel part comprising inverse quantization, the inverse discrete wavelet transform (IDWT), and image post-processing. To reduce the execution time, the task-parallel part is optimized with OpenMP techniques. The data-parallel part gains efficiency by executing on the GPU as a CUDA parallel program. The optimization techniques include instruction optimization, shared-memory access optimization, coalesced memory access optimization, and texture memory optimization. In particular, the IDWT can be significantly sped up by rewriting the 2D (two-dimensional) serial IDWT as a 1D parallel IDWT. In experiments with a 1K×1K×16-bit Bayer image, the data-parallel part is more than 10 times faster than the CPU-based implementation. Finally, a CPU+GPU heterogeneous decompression system was designed. The experimental results show that it achieves a 3 to 5 times speed increase compared to the serial CPU method.
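The 2D-to-1D restructuring mentioned above is easiest to see with a separable wavelet: each inverse pass becomes a batch of independent 1D transforms, one per row or column, which is exactly the shape that maps onto GPU threads. A NumPy sketch with a one-level orthonormal Haar standing in for the paper's actual wavelet; this shows the data-parallel layout only, not CUDA code.

```python
import numpy as np

def ihaar_1d_batch(approx, detail):
    """Inverse one-level Haar along the last axis for a whole batch at once."""
    out = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],), approx.dtype)
    s = np.sqrt(2.0)
    out[..., 0::2] = (approx + detail) / s
    out[..., 1::2] = (approx - detail) / s
    return out

def ihaar_2d(ll, lh, hl, hh):
    # Column pass (transpose so columns become the batched last axis): lh/hh
    # are the column-direction high bands of the row-lowpass/highpass halves.
    low = ihaar_1d_batch(ll.T, lh.T).T
    high = ihaar_1d_batch(hl.T, hh.T).T
    # Row pass over all rows at once, recombining the row-direction halves.
    return ihaar_1d_batch(low, high)
```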
A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.
Rong, Xing; Frey, Eric C
2013-08-01
Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically, 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different from what is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is thus both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume-of-interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, a more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects, including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
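With only two free collimator parameters, the search itself can reduce to a weighted-RMSE sweep. A hypothetical illustration with hole length fixed; predict_bias_and_std is a pure invention standing in for the authors' validated projection/Monte Carlo model.

```python
import itertools
import numpy as np

vois = [{"mass_g": 30.0}, {"mass_g": 150.0}, {"mass_g": 900.0}]  # toy VOIs

def predict_bias_and_std(diameter_mm, septa_mm, voi):
    # Toy surrogate: wider holes gain sensitivity (less noise) but lose
    # resolution (more bias); thicker septa reduce penetration bias.
    std = 5.0 / diameter_mm
    bias = 0.8 * diameter_mm / (1.0 + septa_mm)
    return bias, std

def fom(diameter_mm, septa_mm):
    total = 0.0
    for voi in vois:
        bias, std = predict_bias_and_std(diameter_mm, septa_mm, voi)
        total += np.hypot(bias, std) / voi["mass_g"]   # inverse-mass-weighted RMSE
    return total

best = min(itertools.product(np.arange(2.0, 5.1, 0.25),    # hole diameter (mm)
                             np.arange(0.8, 2.01, 0.1)),   # septal thickness (mm)
           key=lambda p: fom(*p))
print("optimal (diameter, septa):", best)
```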
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system achieve its goal under the given constraints. Because the task network system varies in time, owing to moving nodes and non-cooperative targets, and because of limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the number of sensors and the associated sensors to take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. In particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Human Information Processing and Supervisory Control.
1980-05-01
Fragmentary record; only scattered contents entries and abstract snippets survived extraction. Contents entries: interpretation of information; sampling strategies; speed-accuracy tradeoff. Abstract fragments: "…operator is usually highly trained, and largely controls the tasks, being allowed to use what strategies he will… Risk is incurred in ways which can … his search less than optimally effective. Hence from matters of tactics and strategy which will be discussed below, straightforward questions of …"
ERIC Educational Resources Information Center
Shooter, Wynn; Paisley, Karen; Sibthorp, Jim
2009-01-01
Outdoor education researchers have accumulated a notable cache of work documenting the outcomes of participation in outdoor education programs (e.g., Hattie, Marsh, Neill, & Richards, 1997; Kaplan & Talbot, 1983). While continuing this work remains an important task, some researchers are turning their attention toward understanding the process of…
ERIC Educational Resources Information Center
Abós, Ángel; Sevil, Javier; Julián, José Antonio; Abarca-Sos, Alberto; García-González, Luis
2017-01-01
Grounded in self-determination theory and achievement goal theory, this quasi-experimental study evaluated the effectiveness of a teaching intervention programme to improve predisposition towards physical education based on developing a task-oriented motivational climate and supporting basic psychological needs. The final sample consisted of 35…
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, a space bisection, participants evaluated the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
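The MLE predictions being tested here take the standard closed form for two independent Gaussian cues (auditory A and tactile T); attention is hypothesized to act by reducing σ_A, which raises the auditory weight:

```latex
% Standard MLE cue-combination predictions for audio-tactile localization:
\hat{s}_{AT} = w_A \hat{s}_A + w_T \hat{s}_T,\qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_T^2},\qquad w_T = 1 - w_A,
\qquad
\sigma_{AT}^2 = \frac{\sigma_A^2\,\sigma_T^2}{\sigma_A^2 + \sigma_T^2}
```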
The importance of decision onset
Grinband, Jack; Ferrera, Vincent
2015-01-01
The neural mechanisms of decision making are thought to require the integration of evidence over time until a response threshold is reached. Much work suggests that the response threshold can be adjusted via top-down control as a function of speed or accuracy requirements. In contrast, the time of integration onset has received less attention and is believed to be determined mostly by afferent or preprocessing delays. However, a number of influential studies over the past decade challenge this assumption and begin to paint a multifaceted view of the phenomenology of decision onset. This review highlights the challenges involved in initiating the integration of evidence at the optimal time and the potential benefits of adjusting integration onset to task demands. The review outlines behavioral and electrophysiological studies suggesting that the onset of the integration process may depend on properties of the stimulus, the task, attention, and response strategy. Most importantly, the aggregate findings in the literature suggest that integration onset may be amenable to top-down regulation, and may be adjusted much like the response threshold to exert cognitive control and strategically optimize the decision process to fit immediate behavioral requirements. PMID:26609111
Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters.
Khamassi, Mehdi; Enel, Pierre; Dominey, Peter Ford; Procyk, Emmanuel
2013-01-01
Converging evidence suggests that the medial prefrontal cortex (MPFC) is involved in feedback categorization, performance monitoring, and task monitoring, and may contribute to the online regulation of reinforcement learning (RL) parameters that affect decision-making processes in the lateral prefrontal cortex (LPFC). Previous neurophysiological experiments have shown MPFC activities encoding error likelihood, uncertainty, and reward volatility, as well as neural responses categorizing different types of feedback, for instance, distinguishing between choice errors and execution errors. Rushworth and colleagues have proposed that the involvement of MPFC in tracking the volatility of the task could contribute to the regulation of one RL parameter, the learning rate. We extend this hypothesis by proposing that MPFC could contribute to the regulation of other RL parameters, such as the exploration rate and default action values in case of task shifts. Here, we analyze the sensitivity of behavioral performance to RL parameters in two monkey decision-making tasks, one with a deterministic reward schedule and the other with a stochastic one. We show that there exist optimal parameter values specific to each of these tasks that need to be found for optimal performance and that are usually hand-tuned in computational models. In contrast, automatic online regulation of these parameters using some heuristics can help produce good, although non-optimal, behavioral performance in each task. We finally describe our computational model of MPFC-LPFC interaction used for online regulation of the exploration rate and its application to a human-robot interaction scenario. There, unexpected uncertainties are produced by the human introducing cued task changes or by cheating. The model enables the robot to autonomously learn to reset exploration in response to such uncertain cues and events. The combined results provide concrete evidence specifying how prefrontal cortical subregions may cooperate to regulate RL parameters. They also show how such neurophysiologically inspired mechanisms can control advanced robots in the real world. Finally, the model's learning mechanisms, which were challenged in the last robotic scenario, provide testable predictions on the way monkeys may learn the structure of the task during the pretraining phase of the previous laboratory experiments. Copyright © 2013 Elsevier B.V. All rights reserved.
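A schematic sketch of the regulation idea discussed above: a Q-learner whose learning rate tracks recent surprise (a crude volatility proxy) and whose softmax exploration is reset when an uncertainty cue is detected. The update rules are illustrative heuristics, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4
q = np.zeros(n_actions)
alpha, beta = 0.3, 5.0          # learning rate, inverse temperature
avg_surprise = 0.0

def softmax_choice(q, beta):
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return rng.choice(len(q), p=p)

def step(reward, action, task_shift_cue=False):
    global alpha, beta, avg_surprise
    delta = reward - q[action]                    # reward prediction error
    avg_surprise += 0.1 * (abs(delta) - avg_surprise)
    alpha = np.clip(0.1 + avg_surprise, 0.1, 0.9) # volatile world -> learn faster
    if task_shift_cue:
        beta = 1.0                                # reset exploration on the cue
    else:
        beta = min(beta + 0.2, 8.0)               # otherwise gradually re-exploit
    q[action] += alpha * delta
```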
Case study: Optimizing fault model input parameters using bio-inspired algorithms
NASA Astrophysics Data System (ADS)
Plucar, Jan; Grunt, Onřej; Zelinka, Ivan
2017-07-01
We present a case study that demonstrates a bio-inspired approach to finding optimal parameters for a GSM fault model. This model is constructed using a Petri net approach; it represents a dynamic model of the GSM network environment in the suburban areas of the city of Ostrava (Czech Republic). We were faced with the task of finding optimal parameters for an application that requires a high volume of data transfers between the application itself and secure servers located in a datacenter. In order to find the optimal set of parameters we employ bio-inspired algorithms such as Differential Evolution (DE) and the Self-Organizing Migrating Algorithm (SOMA). In this paper we present the use of these algorithms, compare their results, and judge their performance in fault probability mitigation.
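The abstract does not include the authors' DE or SOMA implementations; as a point of reference, the same style of search can be driven by SciPy's stock differential evolution. The objective below is a smooth toy surrogate for a fault-model simulation run, and the parameter names are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fault_probability(x):
    retry_interval, window, timeout = x
    # Toy surrogate with a known minimum; the real objective would be a
    # simulation run of the Petri-net fault model at these parameters.
    return (retry_interval - 2.0) ** 2 + (window - 5.0) ** 2 + 0.1 * timeout

bounds = [(0.1, 10.0), (1.0, 20.0), (0.5, 30.0)]
result = differential_evolution(fault_probability, bounds, seed=0)
print(result.x, result.fun)
```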
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler
NASA Technical Reports Server (NTRS)
Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.
2012-01-01
HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes for each new state to explore. The optimization uses a set of preconditions and postconditions to create the child states. The Python language was adopted not only to enable more agile development, but also to allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of operations research and management science, as this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple-agent task optimization would all benefit from using HURON.
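A condensed sketch of the planner style described here: best-first forward search in state space, where children are generated by actions whose preconditions hold and whose postconditions produce the child state. The Action interface and state encoding are invented for illustration; the abstract says only that the real tool is written in Python.

```python
import heapq
import itertools
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Action:
    name: str
    cost: float
    precondition: Callable   # state -> bool
    postcondition: Callable  # state -> new state

def plan(initial_state, actions, is_goal, heuristic=lambda s: 0.0):
    counter = itertools.count()        # tie-breaker so states are never compared
    frontier = [(heuristic(initial_state), next(counter), 0.0, initial_state, [])]
    seen = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g             # action names and total cost
        if state in seen:
            continue
        seen.add(state)
        for act in actions:
            if act.precondition(state):
                child = act.postcondition(state)   # apply the action's effects
                g2 = g + act.cost
                heapq.heappush(frontier, (g2 + heuristic(child), next(counter),
                                          g2, child, path + [act.name]))
    return None
```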
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Technical Reports Server (NTRS)
1992-01-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first, in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of expert-system modules for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software so developed, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.
An expert system for integrated structural analysis and design optimization for aerospace structures
NASA Astrophysics Data System (ADS)
1992-04-01
The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first, in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This allows engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce the time to completion of structural design. An extensive literature survey in the fields of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and conceptual design for the integrated 'intelligent' structural analysis and design optimization software were then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach improves the expressiveness of knowledge representation (especially for structural analysis and design applications), provides the ability to build very large and practical expert systems, and provides an efficient way of storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of expert-system modules for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software so developed, AutoDesign, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies and used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.
Compact Heat Exchanger Design and Testing for Advanced Reactors and Advanced Power Cycles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiaodong; Zhang, Xiaoqin; Christensen, Richard
The goal of the proposed research is to demonstrate the thermal-hydraulic performance of innovative surface geometries in compact heat exchangers used as intermediate heat exchangers (IHXs) and recuperators for the supercritical carbon dioxide (s-CO2) Brayton cycle. Printed-circuit heat exchangers (PCHEs) are the primary compact heat exchangers of interest. The overall objectives are: to develop optimized PCHE designs for different working fluid combinations, including helium to s-CO2, liquid salt to s-CO2, sodium to s-CO2, and liquid salt to helium; to experimentally and numerically investigate the thermal performance, thermal stress, and failure mechanisms of PCHEs under various transients; and to study diffusion bonding techniques for elevated-temperature alloys and examine post-test material integrity of the PCHEs. The project objectives were accomplished by defining and executing five different tasks corresponding to these specific objectives. The first task involved a thorough literature review and a selection of IHX candidates with different surface geometries, as well as a summary of prototypic operational conditions. The second task involved optimization of the PCHE design with numerical analyses of thermal-hydraulic performance and mechanical integrity. The subsequent task dealt with the development of testing facilities and the engineering design of the PCHE to be tested under s-CO2 fluid conditions. The next task involved experimental investigation and validation of the thermal-hydraulic performance and thermal stress distribution of prototype PCHEs manufactured with particular surface geometries. The last task involved an investigation of the diffusion bonding process and post-test destructive testing to validate the mechanical design methods adopted in the design process. The experimental work utilized two test facilities at The Ohio State University (OSU), the existing High-Temperature Helium Test Facility (HTHF) and the newly developed s-CO2 test loop (STL), as well as the s-CO2 test facility at the University of Wisconsin–Madison (UW).
Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.
Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane
2017-07-01
This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also identifies the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted, and a sensitivity analysis is performed, in order to robustly reproduce the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS error lower than 20 N and 6 N·m). The simplicity and generalization ability of the proposed method pave the way for future diagnostic solutions and rehabilitation applications, including in-home use.
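A bare-bones version of the under-determined distribution step: share a known total vertical force among buttocks, feet, and hands by minimizing a weighted quadratic cost under unilateral contact. The 1D force model and weights are drastic simplifications of the paper's hybrid criterion over full 6D wrenches.

```python
import numpy as np
from scipy.optimize import minimize

total_force = 700.0                       # N, from inverse dynamics (example value)
w = np.array([1.0, 0.5, 4.0])             # penalize hand loading most (assumed)

cost = lambda f: float(w @ f**2)          # weighted quadratic effort criterion
constraints = ({"type": "eq", "fun": lambda f: f.sum() - total_force},)
bounds = [(0.0, None)] * 3                # unilateral contacts can only push

res = minimize(cost, x0=np.full(3, total_force / 3),
               bounds=bounds, constraints=constraints, method="SLSQP")
print(dict(zip(["buttocks", "feet", "hands"], res.x.round(1))))
```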
[Conceptual approach to formation of a modern system of medical provision].
Belevitin, A B; Miroshnichenko, Iu V; Bunin, S A; Goriachev, A B; Krasavin, K D
2009-09-01
Within the framework of shaping a new profile for the medical service of the Armed Forces, the principal approaches to optimizing the development of the medical supply system were determined. It was proposed to use the following principles: hierarchical structuring, purposeful orientation, vertical task sharing, horizontal task sharing, complex simulation, and permanent improvement. The main directions for optimizing the structure and composition of the medical supply system of the Armed Forces are: forming modern medical supply institutions (centers of support with equipment and supplies, based on the central and regional storehouses) and attaching several functions of the organs of military government to them; and creating medical supply offices based at military hospitals serving as base treatment-and-prophylaxis institutions in assigned territorial zones of responsibility, in order to carry out the complex of tasks of supplying the units and institutions attached to them for medical support with medical equipment. The medical supply system is built on three levels: Center, military region (Navy region), and territorial zone of responsibility.
Fragment-based design of kinase inhibitors: a practical guide.
Erickson, Jon A
2015-01-01
Fragment-based drug design has become an important strategy for drug design and development over the last decade. It has been used with particular success in the development of kinase inhibitors, which are one of the most widely explored classes of drug targets today. The application of fragment-based methods to discovering and optimizing kinase inhibitors can be a complicated and daunting task; however, a general process has emerged that has been highly fruitful. Here a practical outline of the fragment process used in kinase inhibitor design and development is laid out with specific examples. A guide to the overall process from initial discovery through fragment screening, including the difficulties in detection, to the computational methods available for use in optimization of the discovered fragments is reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
E.T.; James P. Meagher; Prasad Apte
2002-12-31
This topical report summarizes work accomplished for the Program from November 1, 2001 to December 31, 2002 in the following task areas: Task 1: Materials Development; Task 2: Composite Development; Task 4: Reactor Design and Process Optimization; Task 8: Fuels and Engine Testing; 8.1 International Diesel Engine Program; 8.2 Nuvera Fuel Cell Program; and Task 10: Program Management. Major progress has been made towards developing high-temperature, high-performance, robust oxygen transport elements. In addition, a novel reactor design has been proposed that co-produces hydrogen, lowers cost, and improves system operability. Fuel and engine testing is progressing well, but was delayed somewhat due to the hiatus in program funding in 2002. The Nuvera fuel cell portion of the program was completed on schedule and delivered promising results regarding low-emission fuels for transportation fuel cells. The evaluation of ultra-clean diesel fuels continues in single-cylinder (SCTE) and multiple-cylinder (MCTE) test rigs at International Truck and Engine. FT diesel and a BP oxygenate showed significant emissions reductions in comparison to baseline petroleum diesel fuels. Overall, through the end of 2002 the program remains under budget, but behind schedule in some areas.
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
Optimal beamforming in ultrasound using the ideal observer.
Abbey, Craig K; Nguyen, Nghia Q; Insana, Michael F
2010-08-01
Beamforming of received pulse-echo data generally involves the compression of signals from multiple channels within an aperture. This compression is irreversible, and therefore allows the possibility that information relevant for performing a diagnostic task is irretrievably lost. The purpose of this study was to evaluate information transfer in beamforming using a previously developed ideal observer model to quantify the diagnostic information relevant to performing a task. We describe an elaborated statistical model of image formation for fixed-focus transmission and single-channel reception within a moving aperture, and we use this model on a panel of tasks related to breast sonography to evaluate receive-beamforming approaches that optimize the transfer of information. Under the assumption that acquisition noise is well described as an additive wide-band Gaussian white-noise process, we show that signal compression across receive-aperture channels after a 2-D matched-filtering operation results in no loss of diagnostic information. Across tasks, the matched-filter beamformer retains twice as much information in the subsequent radio-frequency signal as standard delay-and-sum beamforming. We also show that, for this matched filter, 68% of the information gain can be attributed to the phase of the matched filter and 21% to its amplitude. A 1-D matched filtering along axial lines shows no advantage over delay-and-sum, suggesting an important role for incorporating correlations across different aperture windows in beamforming. We also show that post-compression processing before the computation of an envelope is necessary to pass the diagnostic information in the beamformed radio-frequency signal to the final envelope image.
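Schematically, the 2-D matched filtering in question correlates the fast-time by channel data with a template of the expected spatiotemporal pulse-echo response before compressing across channels. A toy NumPy/SciPy rendering; the template here is generic, not the authors' system response.

```python
import numpy as np
from scipy.signal import correlate2d

def matched_filter_compress(rf, template):
    """rf: (samples, channels) pre-beamformed data for one aperture position."""
    # 2-D correlation exploits phase and amplitude structure across channels...
    filtered = correlate2d(rf, template, mode="same")
    # ...and the channel compression then collapses the aperture dimension.
    return filtered.sum(axis=1)
```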
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.
Ferreira, Miguel; Roma, Nuno; Russo, Luis M S
2014-05-30
HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
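The inter-task idea borrowed from Rognes is that each SIMD lane advances the same recursion for a different sequence. Transliterated from SSE2 intrinsics into NumPy, with a toy dense HMM in log space standing in for HMMER's profile models, the batch dimension plays the role of the vector lanes:

```python
import numpy as np

def viterbi_batch(log_trans, log_emit, obs):
    """log_trans: (S,S); log_emit: (S,V); obs: (B,T) ints -> (B,) best scores."""
    B, T = obs.shape
    S = log_trans.shape[0]
    v = np.full((B, S), -np.inf)
    v[:, 0] = log_emit[0, obs[:, 0]]          # toy convention: all lanes start in state 0
    for t in range(1, T):
        # scores[b, i, j] = v[b, i] + log_trans[i, j]; max over predecessor i.
        scores = v[:, :, None] + log_trans[None, :, :]
        v = scores.max(axis=1) + log_emit[:, obs[:, t]].T
    return v.max(axis=1)
```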
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
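The construction referenced here starts from unrolling a proximal descent such as ISTA into a fixed number of layers. Plain (un-learned) ISTA for the Lasso is shown below; in the learned variant (e.g., LISTA-style networks) the matrices W1, W2 and the threshold become trainable parameters rather than being derived from A.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_unrolled(A, y, lam, K=16):
    """K fixed iterations of ISTA for min_x 0.5||Ax - y||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    W1 = np.eye(A.shape[1]) - A.T @ A / L
    W2 = A.T / L
    x = np.zeros(A.shape[1])
    for _ in range(K):                   # K fixed "layers" ~ network depth
        x = soft_threshold(W1 @ x + W2 @ y, lam / L)
    return x
```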
Optimization: Old Dogs and New Tasks
ERIC Educational Resources Information Center
Kaplan, Jennifer J.; Otten, Samuel
2012-01-01
This article introduces an optimization task with a ready-made motivating question that may be paraphrased as follows: "Are you smarter than a Welsh corgi?" The authors present the task along with descriptions of the ways in which two groups of students approached it. These group vignettes reveal as much about the nature of calculus students'…
Self-Efficacy and Interest: Experimental Studies of Optimal Incompetence.
ERIC Educational Resources Information Center
Silvia, Paul J.
2003-01-01
To test the optimal incompetence hypothesis (high self-efficacy lowers task interest), 30 subjects rated interest, perceived difficulty, and confidence of success in different tasks. In study 2, 33 subjects completed a dart-game task in easy, moderate, and difficult conditions. In both, interest was a quadratic function of self-efficacy,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. The utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in the task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forchheim, Germany)
Heuristic algorithms for solving of the tool routing problem for CNC cutting machines
NASA Astrophysics Data System (ADS)
Chentsov, P. A.; Petunin, A. A.; Sesekin, A. N.; Shipacheva, E. N.; Sholohov, A. E.
2015-11-01
The article is devoted to the problem of minimizing the path of the cutting tool for CNC shape-cutting machines. This problem can be interpreted as a generalized traveling salesman problem. An earlier version of a dynamic programming method to solve this problem was developed. Unfortunately, that method can only process instances with no more than thirty contours. In this regard, the task of constructing a quasi-optimal route becomes relevant. In this paper we propose several quasi-optimal greedy algorithms. A comparison of the results of the exact and approximate algorithms is given.
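One of the simplest greedy constructions for this routing problem is a nearest-neighbor rule over contours: repeatedly jump to the closest pierce point of an uncut contour. Precedence constraints (for example, inner contours before the outer one) and the exact cut geometry are ignored in this sketch, and the candidate pierce-point sets are assumed given.

```python
import numpy as np

def greedy_route(pierce_points, start=np.zeros(2)):
    """pierce_points: list of (n_i, 2) arrays, one array per contour."""
    pos, route, remaining = start, [], set(range(len(pierce_points)))
    total = 0.0
    while remaining:
        # Pick the uncut contour (and entry point) closest to the current position.
        contour, entry = min(((c, p) for c in remaining for p in pierce_points[c]),
                             key=lambda cp: np.linalg.norm(cp[1] - pos))
        total += np.linalg.norm(entry - pos)   # idle (rapid) travel distance
        route.append((contour, entry))
        pos = entry                            # assume the cut closes at the pierce point
        remaining.remove(contour)
    return route, total
```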
The impact of Parkinson's disease and subthalamic deep brain stimulation on reward processing.
Evens, Ricarda; Stankevich, Yuliya; Dshemuchadse, Maja; Storch, Alexander; Wolz, Martin; Reichmann, Heinz; Schlaepfer, Thomas E; Goschke, Thomas; Lueken, Ulrike
2015-08-01
Due to its position in cortico-subthalamic and cortico-striatal pathways, the subthalamic nucleus (STN) is considered to play a crucial role not only in motor, but also in cognitive and motivational functions. In the present study we aimed to characterize how different aspects of reward processing are affected by disease and by deep brain stimulation of the STN (DBS-STN) in patients with idiopathic Parkinson's disease (PD). We compared 33 PD patients treated with DBS-STN under best medical treatment (DBS-on, medication-on) to 33 PD patients without DBS but with optimized pharmacological treatment, and to 34 age-matched healthy controls. We then investigated DBS-STN effects using a postoperative stimulation-on/off design. The task set included a delay discounting task, a task to assess changes in incentive salience attribution, and the Iowa Gambling Task. The presence of PD was associated with increased incentive salience attribution and devaluation of delayed rewards. Acute DBS-STN increased risky choices in the Iowa Gambling Task under the DBS-on condition, but did not further affect incentive salience attribution or the evaluation of delayed rewards. The findings indicate that acute DBS-STN affects specific aspects of reward processing, including the weighting of gains and losses, while larger-scale effects of disease or medication predominate in other reward-related functions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) problem is an essential optimization task in power generation systems. It is defined as the process of allocating the real power output of generation units to meet the required load demand such that their total operating cost is minimized while all physical and operational constraints are satisfied. This paper introduces a novel optimization method named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method was implemented for three test power systems, comprising 3, 13, and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently applied to solving economic dispatch.
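The quadratic-cost ED model being solved has the standard form: minimize the sum over units of a_i + b_i P_i + c_i P_i^2, subject to the total output matching demand and to generator limits. MVMOS itself is not reproduced here; a stock SLSQP solve of a small illustrative instance shows the problem structure, with all coefficients assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([500.0, 400.0, 600.0])       # fixed cost ($/h)
b = np.array([5.3, 5.5, 5.8])             # linear cost ($/MWh)
c = np.array([0.004, 0.006, 0.009])       # quadratic cost ($/MW^2 h)
pmin = np.array([200.0, 150.0, 100.0])    # MW limits
pmax = np.array([450.0, 350.0, 225.0])
demand = 850.0                            # MW

cost = lambda p: float(np.sum(a + b * p + c * p**2))
res = minimize(cost, x0=np.array([400.0, 300.0, 150.0]),
               bounds=list(zip(pmin, pmax)),
               constraints=({"type": "eq", "fun": lambda p: p.sum() - demand},),
               method="SLSQP")
print(res.x, cost(res.x))
```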
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (a reaction-time task), nor when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
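The fixed-duration benchmark against which such behavior is usually judged is inverse-variance cue weighting; a small sketch, with hypothetical visual and vestibular noise levels, shows why the combined estimate should beat either cue alone:

```python
import numpy as np

rng = np.random.default_rng(0)
true_heading = 5.0                 # degrees
sigma_vis, sigma_vest = 2.0, 4.0   # hypothetical single-cue noise levels

# Statistically optimal (maximum-likelihood) integration weights each cue
# by its reliability, i.e. its inverse variance.
w_vis = sigma_vis**-2 / (sigma_vis**-2 + sigma_vest**-2)
trials = 10000
vis = rng.normal(true_heading, sigma_vis, trials)
vest = rng.normal(true_heading, sigma_vest, trials)
combined = w_vis * vis + (1 - w_vis) * vest

print("predicted combined sigma:", (sigma_vis**-2 + sigma_vest**-2) ** -0.5)
print("empirical combined sigma:", combined.std().round(3))
```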
Task analysis of information technology-mediated medication management in outpatient care.
van Stiphout, F; Zwart-van Rijkom, J E F; Maggio, L A; Aarts, J E C M; Bates, D W; van Gelder, T; Jansen, P A F; Schraagen, J M C; Egberts, A C G; ter Braak, E W M T
2015-09-01
Educating physicians in the procedural as well as cognitive skills of information technology (IT)-mediated medication management could be one of the missing links for the improvement of patient safety. We aimed to compose a framework of tasks that need to be addressed to optimize medication management in outpatient care. Formal task analysis: decomposition of a complex task into a set of subtasks. First, we obtained a general description of the medication management process from exploratory interviews. Secondly, we interviewed experts in-depth to further define tasks and subtasks. Outpatient care in different fields of medicine in six teaching and academic medical centres in the Netherlands and the United States. 20 experts. Tasks were divided up into procedural, cognitive and macrocognitive tasks and categorized into the three components of dynamic decision making. The medication management process consists of three components: (i) reviewing the medication situation; (ii) composing a treatment plan; and (iii) accomplishing and communicating a treatment and surveillance plan. Subtasks include multiple cognitive tasks such as composing a list of current medications and evaluating the reliability of sources, and procedural tasks such as documenting current medication. The identified macrocognitive tasks were: planning, integration of IT in workflow, managing uncertainties and responsibilities, and problem detection. All identified procedural, cognitive and macrocognitive skills should be included when designing education for IT-mediated medication management. The resulting framework supports the design of educational interventions to improve IT-mediated medication management in outpatient care. © 2015 The Authors. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of The British Pharmacological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; D'Azevedo, Eduardo; Philip, Bobby
On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
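A rough sketch of the underlying objective, assuming a measured inter-rank communication matrix and a node-to-node hop distance matrix; the greedy placement below only illustrates volume-weighted hop minimization, not the spectral-bisection or neighbor-join methods developed in this work:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 16
# comm[i, j]: message volume between MPI ranks i and j (hypothetical profile,
# e.g. as gathered with an mpiP-style instrumentation pass).
comm = rng.random((n, n)); comm = comm + comm.T
# hops[a, b]: network distance between allocated nodes a and b (toy 1-D layout).
hops = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])

def cost(perm):
    # Total volume-weighted hop count for a rank -> node permutation.
    return (comm * hops[np.ix_(perm, perm)]).sum() / 2

def greedy_map():
    # Place the heaviest-communicating rank first, then repeatedly put each
    # rank on the free node closest (weighted) to its already-placed peers.
    order = np.argsort(-comm.sum(1))
    perm = -np.ones(n, dtype=int)
    free = list(range(n))
    for r in order:
        placed = [q for q in range(n) if perm[q] >= 0]
        if not placed:
            node = free.pop(0)
        else:
            node = min(free, key=lambda f: sum(comm[r, q] * hops[f, perm[q]]
                                               for q in placed))
            free.remove(node)
        perm[r] = node
    return perm

print("default cost:", round(cost(np.arange(n)), 2))
print("greedy  cost:", round(cost(greedy_map()), 2))
```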
Ipser, Alberta; Agolli, Vlera; Bajraktari, Anisa; Al-Alawi, Fatimah; Djaafara, Nurfitriani; Freeman, Elliot D.
2017-01-01
Are sight and sound out of synch? Signs that they are have been dismissed for over two centuries as an artefact of attentional and response bias, to which traditional subjective methods are prone. To avoid such biases, we measured performance on objective tasks that depend implicitly on achieving good lip-synch. We measured the McGurk effect (in which incongruent lip-voice pairs evoke illusory phonemes), and also identification of degraded speech, while manipulating audiovisual asynchrony. Peak performance was found at an average auditory lag of ~100 ms, but this varied widely between individuals. Participants’ individual optimal asynchronies showed trait-like stability when the same task was re-tested one week later, but measures based on different tasks did not correlate. This discounts the possible influence of common biasing factors, suggesting instead that our different tasks probe different brain networks, each subject to their own intrinsic auditory and visual processing latencies. Our findings call for renewed interest in the biological causes and cognitive consequences of individual sensory asynchronies, leading potentially to fresh insights into the neural representation of sensory timing. A concrete implication is that speech comprehension might be enhanced, by first measuring each individual’s optimal asynchrony and then applying a compensatory auditory delay. PMID:28429784
NASA Astrophysics Data System (ADS)
Yu, F.; Chen, H.; Tu, K.; Wen, Q.; He, J.; Gu, X.; Wang, Z.
2018-04-01
To meet the monitoring needs of emergency response to major disasters, an overall plan for the coordinated tasking of spaceborne, airborne and ground observation resources has been designed, combining the disaster information acquired immediately after the event with dynamic simulation of the disaster chain's evolution. Based on an analysis of the characteristics of major disaster observation tasks, the key technologies of coordinated spaceborne, airborne and ground observation are studied. Corresponding workflow tasks are designed for different disaster response levels. On the basis of satisfying different types of disaster monitoring demands, existing multi-satellite collaborative observation planning algorithms are compared, analyzed, and optimized.
Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.
Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan
2016-08-01
In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp (GelApp 2.0) is freely available on both Google Play Store (for Android platform) and Apple App Store (for iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
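The search strategy can be illustrated with a toy MCTS-UCB loop over a two-stage pipeline; the operation names and the noisy scoring function are placeholders, not GelApp's actual pipeline or scoring:

```python
import math
import random

# Toy MCTS-UCB sketch: search over two-stage image-processing "pipelines"
# whose quality can only be sampled noisily (a stand-in for scoring band
# detection on real gel images). Operation names are illustrative only.
OPS = ["median_blur", "gaussian_blur", "otsu_threshold", "adaptive_threshold"]

class Node:
    def __init__(self):
        self.visits, self.value, self.children = 0, 0.0, {}

def ucb1(parent, child, c=1.4):
    # Upper Confidence Bound: mean reward plus an exploration bonus.
    if child.visits == 0:
        return float("inf")
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def simulate(pipeline):
    # Hidden, noisy objective standing in for segmentation accuracy.
    base = sum(0.5 if "threshold" in op else 0.3 for op in pipeline)
    return base + random.gauss(0, 0.2)

random.seed(0)
root = Node()
for _ in range(500):
    path, ops, node = [root], [], root
    for _stage in range(2):                      # selection / expansion
        for op in OPS:
            node.children.setdefault(op, Node())
        op = max(OPS, key=lambda o: ucb1(node, node.children[o]))
        ops.append(op)
        node = node.children[op]
        path.append(node)
    reward = simulate(ops)                       # noisy rollout
    for n in path:                               # back-propagation
        n.visits += 1
        n.value += (reward - n.value) / n.visits

best = max(root.children, key=lambda o: root.children[o].value)
print("preferred first operation:", best)
```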
Impaired decision-making under risk in individuals with alcohol dependence
Brevers, Damien; Bechara, Antoine; Cleeremans, Axel; Kornreich, Charles; Verbanck, Paul; Noël, Xavier
2014-01-01
Background Alcohol dependence is associated with poor decision-making under ambiguity, that is, when decisions are to be made in the absence of known probabilities of reward and loss. However, little is known regarding decisions made by individuals with alcohol dependence in the context of known probabilities (decision under risk). In this study, we investigated the relative contribution of these distinct aspects of decision making to alcohol dependence. Methods Thirty recently detoxified and sober asymptomatic alcohol-dependent individuals and thirty healthy control participants were tested for decision-making under ambiguity (using the Iowa Gambling Task) and decision-making under risk (using the Cups Task and Coin Flipping Task). We also tested their capacities for working memory storage (Digit-span Forward) and dual-tasking (Operation-span Task). Results Compared to healthy control participants, alcohol-dependent individuals made disadvantageous decisions on the Iowa Gambling Task, reflecting poor decisions under ambiguity. They also made more risky choices on the Cups and Coin Flipping Tasks, reflecting poor decision-making under risk. In addition, alcohol-dependent participants showed some working memory impairments, as measured by dual-tasking, and the degree of this impairment correlated with high-risk decision-making, thus suggesting a relationship between processes sub-serving working memory and risky decisions. Conclusion These results suggest that alcohol-dependent individuals are impaired in their ability to decide optimally in multiple facets of uncertainty (i.e., both risk and ambiguity), and that at least some aspects of these deficits are linked to poor working memory processes. PMID:24948198
Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze
Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.
2014-01-01
Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze that are optimized for visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
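A sketch of the parameterization, assuming tube current is a positive combination of Gaussian bumps over projection angle; the noise surrogate and the simple (1+1) evolution strategy below stand in for the model-based detectability predictor and the CMA-ES optimizer used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
views = np.linspace(0, 2 * np.pi, 180, endpoint=False)

def modulation(weights, centers, width=0.6):
    # Tube current profile as a positive combination of Gaussian bumps
    # over projection angle, with wrap-around angular distance.
    d = np.abs(views[:, None] - centers[None, :])
    d = np.minimum(d, 2 * np.pi - d)
    return (weights[None, :] * np.exp(-d**2 / (2 * width**2))).sum(1)

def objective(weights, centers):
    # Stand-in cost: total noise variance when fatter (lateral) paths are
    # noisier, under a fixed total-dose budget. A real task-driven design
    # would use model-based predictions of d' instead.
    attenuation = 1.5 + np.cos(2 * views)
    mA = modulation(weights, centers)
    mA = mA / mA.sum() * len(views)          # normalize the dose budget
    return (attenuation / np.maximum(mA, 1e-6)).sum()

centers = np.linspace(0, 2 * np.pi, 8, endpoint=False)
w = np.ones(8)
for _ in range(200):                          # simple (1+1) evolution strategy
    cand = np.maximum(w + rng.normal(0, 0.3, 8), 0.01)
    if objective(cand, centers) < objective(w, centers):
        w = cand
print("optimized basis weights:", w.round(2))
```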
Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam
2016-03-01
A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in standing posture. The model was formulated into a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., the vector evaluated particle swarm optimization (VEPSO) and nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then found such that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscle activity predictions of the VEPSO and the NSGA were generally consistent and of the same order as the in vivo electromyography data. The proposed methodology is thought to provide improved estimations for muscle activities by considering the spinal stability and incorporating the in vivo intradiscal pressure data.
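A weighted-sum scalarization conveys the flavor of the two competing objectives; the moment arms, areas, stability proxy, and weights below are invented for illustration and are far simpler than the six-degree-of-freedom model or the VEPSO/NSGA solvers:

```python
import numpy as np
from scipy.optimize import minimize

# Toy load-sharing problem: choose muscle forces f >= 0 satisfying a
# net-moment constraint A @ f = m, trading off the sum of cubed muscle
# stresses (effort) against a crude co-contraction stability proxy.
rng = np.random.default_rng(3)
n_muscles = 6
A = rng.uniform(0.02, 0.06, (3, n_muscles))     # moment arms (m), hypothetical
pcsa = rng.uniform(5e-4, 2e-3, n_muscles)       # cross-sectional areas (m^2)
m_required = np.array([50.0, 5.0, 10.0])        # required net moments (N*m)

def objective(f, alpha=0.9, sigma_max=1e6):
    activation = f / (pcsa * sigma_max)          # normalized muscle stress
    effort = np.sum(activation**3)               # sum of cubed stresses
    stability = np.sum(f)                        # crude co-contraction proxy
    # Weighted-sum scalarization of the two objectives.
    return alpha * effort - (1 - alpha) * 1e-4 * stability

cons = {"type": "eq", "fun": lambda f: A @ f - m_required}
res = minimize(objective, x0=np.full(n_muscles, 100.0),
               bounds=[(0, 2000)] * n_muscles, constraints=cons)
print("muscle forces (N):", res.x.round(1))
```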
Diverse task scheduling for individualized requirements in cloud manufacturing
NASA Astrophysics Data System (ADS)
Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida
2018-03-01
Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through networks and CMfg platforms. In a CMfg system, task scheduling, as an important means of finding suitable services for specific manufacturing tasks, plays a key role in enhancing system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First, a mathematical model of task scheduling is built based on an analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aimed at diverse tasks, which enables each service demander to obtain the desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to search for optimal task scheduling solutions. The effectiveness of the proposed scheduling method is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method achieves better performance than common algorithms such as simulated annealing and pattern search.
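A compact genetic-algorithm sketch for the core assignment step, assuming a per-subtask cost for each candidate service plus a load-balance penalty; the encoding and operators are generic, not the paper's improved GA:

```python
import random

random.seed(5)
N_TASKS, N_SERVICES, POP, GENS = 12, 5, 40, 80
# cost[t][s]: hypothetical cost of running subtask t on candidate service s.
cost = [[random.uniform(1, 10) for _ in range(N_SERVICES)]
        for _ in range(N_TASKS)]

def fitness(chrom):
    # Minimize total service cost plus a load-balancing penalty
    # (spread between the most- and least-loaded service).
    total = sum(cost[t][s] for t, s in enumerate(chrom))
    load = [chrom.count(s) for s in range(N_SERVICES)]
    return total + 2.0 * (max(load) - min(load))

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_SERVICES) if random.random() < rate else s
            for s in chrom]

pop = [[random.randrange(N_SERVICES) for _ in range(N_TASKS)]
       for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 4]                       # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=fitness)
print("assignment:", best, "cost:", round(fitness(best), 2))
```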
DOE Office of Scientific and Technical Information (OSTI.GOV)
Advani, S.H.; Lee, T.S.; Moon, H.
1992-10-01
The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational base for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) application of the characteristic time concept and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts); (b) incorporation of leak-off effects for the models investigated in (a); (c) simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties; (d) development of guidelines for hydraulic fracture design and optimization.
Optimal deployment of resources for maximizing impact in spreading processes
2017-01-01
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples. PMID:28900013
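A toy simulation can illustrate why purely topological allocation may lose to dynamics-aware allocation; the discrete-time SIR model, the degree and proximity policies, and all parameters below are hypothetical, and far simpler than the dynamic message-passing framework:

```python
import random

random.seed(2)
N, BUDGET, BETA, T = 200, 10, 0.08, 25

# Random contact network (hypothetical), plus a known initial outbreak.
adj = [set() for _ in range(N)]
for _ in range(3 * N):
    a, b = random.sample(range(N), 2)
    adj[a].add(b); adj[b].add(a)
seeds = set(random.sample(range(N), 3))

def outbreak_size(immune):
    infected = set(seeds) - immune
    susceptible = set(range(N)) - seeds - immune
    for _ in range(T):            # discrete-time SIR, recovery after one step
        new = {j for i in infected for j in adj[i]
               if j in susceptible and random.random() < BETA}
        susceptible -= new
        infected = new
        if not infected:
            break
    return N - len(susceptible) - len(immune)

def mean_size(immune, runs=200):
    return sum(outbreak_size(immune) for _ in range(runs)) / runs

# Purely topological policy: immunize the highest-degree nodes.
by_degree = set(sorted(range(N), key=lambda i: -len(adj[i]))[:BUDGET])

# Dynamics-aware policy: immunize nodes nearest the actual seed nodes.
frontier, ring = set(seeds), set()
while len(ring) < BUDGET and frontier:
    frontier = {j for i in frontier for j in adj[i]} - ring - seeds
    ring |= frontier
by_proximity = set(list(ring)[:BUDGET])

print("degree policy   :", mean_size(by_degree))
print("proximity policy:", mean_size(by_proximity))
```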
Aptitude Sensitive Instruction: The Role of Media Attributes in Optimizing Transfer of Training.
ERIC Educational Resources Information Center
French, Margaret
The supplantation approach of this study hypothesized that media attributes may serve to bridge the processing link between learner aptitude capacity and the demands of a concept attainment task. Subjects were 492 males aged 16-21, drawn from a College of Technical and Further Education in Melbourne, Australia. All subjects were trade apprentices,…
ERIC Educational Resources Information Center
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-01-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament…
Task Scheduling in Desktop Grids: Open Problems
NASA Astrophysics Data System (ADS)
Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny
2017-12-01
We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
Wide use of automatic process control systems, and of high-performance systems containing a number of computers (processors), gives opportunities for creating high-quality and fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computations and processing of big data arrays all require a high level of productivity together with minimal time for data handling and obtaining results. In order to reach the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are highlighted, and some considerations on their use in developing software for automatic process control systems are given.
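One of the basic methods alluded to is list scheduling; below is a short sketch of the longest-processing-time (LPT) heuristic on identical machines, with made-up task times:

```python
import heapq

def lpt_schedule(durations, n_machines):
    """Longest-Processing-Time list scheduling: sort tasks by decreasing
    duration, then always give the next task to the least-loaded machine.
    A classic heuristic with a 4/3 - 1/(3m) makespan guarantee."""
    loads = [(0.0, m) for m in range(n_machines)]   # (load, machine) min-heap
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for i, d in sorted(enumerate(durations), key=lambda x: -x[1]):
        load, m = heapq.heappop(loads)
        assignment[m].append(i)
        heapq.heappush(loads, (load + d, m))
    return assignment, max(l for l, _ in loads)

tasks = [7, 7, 6, 6, 5, 4, 4, 3, 2, 2]              # hypothetical task times
plan, makespan = lpt_schedule(tasks, n_machines=3)
print(plan, "makespan:", makespan)
```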
Research on polycrystalline thin film submodules based on CuInSe₂ materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catalano, A.; Arya, R.; Carr, L.
1992-05-01
This report describes progress during the first year of a three-year research program to develop 12%-efficient CuInSe₂ (CIS) submodules with area greater than 900 cm². To meet this objective, the program was divided into five tasks: (1) windows, contacts, substrates; (2) absorber material; (3) device structure; (4) submodule design and encapsulation; and (5) process optimization. In the first year of the program, work was concentrated on the first three tasks with an objective to demonstrate a 9%-efficient CIS solar cell. 7 refs.
The Motivation-Cognition Interface in Learning and Decision-Making.
Maddox, W Todd; Markman, Arthur B
2010-04-01
In this article we discuss how incentive motivations and task demands affect performance. We present a three-factor framework that suggests that performance is determined by the interaction of global incentives, local incentives, and the psychological processes needed to achieve optimal task performance. We review work that examines the implications of the motivation-cognition interface in classification and choice, and in phenomena such as stereotype threat and performance pressure. We show that under some conditions stereotype threat and pressure accentuate performance. We discuss the implications of this work for neuropsychological assessment, and outline a number of challenges for future research.
The Effect of Visual Information on the Manual Approach and Landing
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1982-01-01
The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.
Firmware Development Improves System Efficiency
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Most manufacturing processes require physical pointwise positioning of the components or tools from one location to another. Typical mechanical systems utilize either stop-and-go or fixed feed-rate progression to accomplish the task. The first approach achieves positional accuracy but prolongs overall time and increases wear on the mechanical system. The second approach sustains the throughput but compromises positional accuracy. A computer firmware approach has been developed to optimize this pointwise mechanism by utilizing programmable interrupt controls to synchronize engineering processes 'on the fly'. This principle has been implemented in an eddy current imaging system to demonstrate the improvement. Software programs were developed that enable a mechanical controller card to transmit interrupts to a system controller as a trigger signal to initiate an eddy current data acquisition routine. The advantages are: (1) optimized manufacturing processes, (2) increased throughput of the system, (3) improved positional accuracy, and (4) reduced wear and tear on the mechanical system.
Online adaptation and over-trial learning in macaque visuomotor control.
Braun, Daniel A; Aertsen, Ad; Paz, Rony; Vaadia, Eilon; Rotter, Stefan; Mehring, Carsten
2011-01-01
When faced with unpredictable environments, the human motor system has been shown to develop optimized adaptation strategies that allow for online adaptation during the control process. Such online adaptation is to be contrasted to slower over-trial learning that corresponds to a trial-by-trial update of the movement plan. Here we investigate the interplay of both processes, i.e., online adaptation and over-trial learning, in a visuomotor experiment performed by macaques. We show that simple non-adaptive control schemes fail to perform in this task, but that a previously suggested adaptive optimal feedback control model can explain the observed behavior. We also show that over-trial learning as seen in learning and aftereffect curves can be explained by learning in a radial basis function network. Our results suggest that both the process of over-trial learning and the process of online adaptation are crucial to understand visuomotor learning.
Integrated aerodynamic-structural design of a forward-swept transport wing
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Grossman, Bernard; Kao, Pi-Jen; Polen, David M.; Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The introduction of composite materials is having a profound effect on aircraft design. Since these materials permit the designer to tailor material properties to improve structural, aerodynamic and acoustic performance, they require an integrated multidisciplinary design process. Furthermore, because of the complexity of the design process, numerical optimization methods are required. The utilization of integrated multidisciplinary design procedures for improving aircraft design is not currently feasible because of software coordination problems and the enormous computational burden. Even with the expected rapid growth of supercomputers and parallel architectures, these tasks will not be practical without the development of efficient methods for cross-disciplinary sensitivities and efficient optimization procedures. The present research is part of an on-going effort which is focused on the processes of simultaneous aerodynamic and structural wing design as a prototype for design integration. A sequence of integrated wing design procedures has been developed in order to investigate various aspects of the design process.
Galileo - The Serial-Production AIT Challenge
NASA Technical Reports Server (NTRS)
Ragnit, Ulrike; Brunner, Otto
2008-01-01
The Galileo Project is one of the most demanding projects of ESA, being Europe's autarkic navigation system with a constellation composed of 30 satellites. This presentation points out the different phases of the project up to full operational capability and the corresponding launch options with respect to launch vehicles as well as launch configurations. One of the biggest challenges is to set up a small serial 'production line' for the overall integration and test campaign of the satellites. This production line demands an optimization of all relevant tasks, taking into account also backup and recovery actions. A comprehensive AIT concept is required, reflecting a tightly merged facility layout and work flow design. In addition, a common data management system is needed to handle all spacecraft-related documentation and to provide a direct input-output flow for all activities, phases and positions at the same time. Process optimization is a well-known field of engineering in all small high-tech production lines; nevertheless, serial production of satellites is still not a daily task in the space business, and therefore new concepts have to be put in place. Thus, in order to meet the satellites' overall system optimization, a thorough interface between unit/subsystem manufacturing and satellite AIT must be realized to ensure a smooth flow and to avoid any process interruption, which would directly lead to a schedule impact.
Optimizing the number of steps in learning tasks for complex skills.
Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G
2005-06-01
Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.
Leitman, David I; Wolf, Daniel H; Loughead, James; Valdez, Jeffrey N; Kohler, Christian G; Brensinger, Colleen; Elliott, Mark A; Turetsky, Bruce I; Gur, Raquel E; Gur, Ruben C
2011-01-01
Schizophrenia patients display impaired performance and brain activity during facial affect recognition. These impairments may reflect stimulus-driven perceptual decrements and evaluative processing abnormalities. We differentiated these two processes by contrasting responses to identical stimuli presented under different contexts. Seventeen healthy controls and 16 schizophrenia patients performed an fMRI facial affect detection task. Subjects identified an affective target presented amongst foils of differing emotions. We hypothesized that targeting affiliative emotions (happiness, sadness) would create a task demand context distinct from that generated when targeting threat emotions (anger, fear). We compared affiliative foil stimuli within a congruent affiliative context with identical stimuli presented in an incongruent threat context. Threat foils were analysed in the same manner. Controls activated right orbitofrontal cortex (OFC)/ventrolateral prefrontal cortex (VLPFC) more to affiliative foils in threat contexts than to identical stimuli within affiliative contexts. Patients displayed reduced OFC/VLPFC activation to all foils, and no activation modulation by context. This lack of context modulation coincided with a 2-fold decrement in foil detection efficiency. Task demands produce contextual effects during facial affective processing in regions activated during affect evaluation. In schizophrenia, reduced modulation of OFC/VLPFC by context coupled with reduced behavioural efficiency suggests impaired ventral prefrontal control mechanisms that optimize affective appraisal.
Cloud computing task scheduling strategy based on differential evolution and ant colony optimization
NASA Astrophysics Data System (ADS)
Ge, Junwei; Cai, Yu; Fang, Yiqiu
2018-05-01
This paper proposes a task scheduling strategy, DEACO, based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the limitation of single-objective optimization in cloud computing task scheduling, the strategy jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution of the DE to initialize the pheromone of ACO, which reduces the time ACO spends accumulating pheromone in the early iterations, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with the min-min algorithm and standard ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
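The warm-start idea can be sketched as follows, assuming per-task VM runtimes and makespan as a single surrogate cost; a greedy seed stands in for the DE stage, and the ACO loop is deliberately minimal:

```python
import random

random.seed(9)
N_TASKS, N_VMS = 8, 3
# runtime[t][v]: hypothetical runtime of task t on VM v.
runtime = [[random.uniform(1, 9) for _ in range(N_VMS)]
           for _ in range(N_TASKS)]

def makespan(assign):
    loads = [0.0] * N_VMS
    for t, v in enumerate(assign):
        loads[v] += runtime[t][v]
    return max(loads)

# Stage 1 (stand-in for DE): a cheap greedy pass gives a decent seed solution.
seed_sol = [min(range(N_VMS), key=lambda v: runtime[t][v])
            for t in range(N_TASKS)]

# Stage 2: initialize pheromone biased toward the seed, then run a tiny ACO.
tau = [[1.0 + (4.0 if v == seed_sol[t] else 0.0) for v in range(N_VMS)]
       for t in range(N_TASKS)]
best, best_cost = seed_sol, makespan(seed_sol)
for _ in range(100):
    ants = []
    for _ant in range(10):
        a = [random.choices(range(N_VMS), weights=tau[t])[0]
             for t in range(N_TASKS)]
        ants.append((makespan(a), a))
    ants.sort()
    if ants[0][0] < best_cost:
        best_cost, best = ants[0]
    for t in range(N_TASKS):        # evaporate, then reinforce the best ant
        tau[t] = [0.9 * x for x in tau[t]]
        tau[t][ants[0][1][t]] += 1.0 / ants[0][0]

print("best assignment:", best, "makespan:", round(best_cost, 2))
```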
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems that combines iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is then recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed directly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. Experimental results are given.
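A sketch of the recomposition step, assuming a stored library of input/output primitive pairs: the desired trajectory is projected onto the output primitives by least squares, and the reference input is rebuilt with the same coefficients; all signals here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K = 200, 6
t = np.linspace(0, 1, T)

# Library of learned primitives: pairs of reference-input / controlled-output
# signals (synthetic here; in the paper each pair is learned by model-free ILC).
outputs = np.stack([np.sin((k + 1) * np.pi * t) for k in range(K)], axis=1)
inputs = outputs * (1.0 + 0.1 * rng.standard_normal((T, K)))  # stand-in refs

# Decompose the new desired trajectory onto the output primitives (basis
# functions), then recompose the reference input with the same coefficients.
desired = 0.8 * np.sin(np.pi * t) - 0.3 * np.sin(3 * np.pi * t)
coef, *_ = np.linalg.lstsq(outputs, desired, rcond=None)
reference = inputs @ coef

print("coefficients:", coef.round(3))
print("reconstruction error:",
      np.linalg.norm(outputs @ coef - desired).round(4))
```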
Super-Memorizers Are Not Super-Recognizers.
Ramon, Meike; Miellet, Sebastien; Dzieciol, Anna M; Konrad, Boris Nikolai; Dresler, Martin; Caldara, Roberto
2016-01-01
Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual biological skill and memory is yet unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Super memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory.
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
Grieve, Stuart M; Korgaonkar, Mayuresh S; Etkin, Amit; Harris, Anthony; Koslow, Stephen H; Wisniewski, Stephen; Schatzberg, Alan F; Nemeroff, Charles B; Gordon, Evian; Williams, Leanne M
2013-07-18
Approximately 50% of patients with major depressive disorder (MDD) do not respond optimally to antidepressant treatments. Given this is a large proportion of the patient population, pretreatment tests that predict which patients will respond to which types of treatment could save time, money and patient burden. Brain imaging offers a means to identify treatment predictors that are grounded in the neurobiology of the treatment and the pathophysiology of MDD. The international Study to Predict Optimized Treatment in Depression is a multi-center, parallel model, randomized clinical trial with an embedded imaging sub-study to identify such predictors. We focus on brain circuits implicated in major depressive disorder and its treatment. In the full trial, depressed participants are randomized to receive escitalopram, sertraline or venlafaxine-XR (open-label). They are assessed using standardized multiple clinical, cognitive-emotional behavioral, electroencephalographic and genetic measures at baseline and at eight weeks post-treatment. Overall, 2,016 depressed participants (18 to 65 years old) will enter the study, of whom a target of 10% will be recruited into the brain imaging sub-study (approximately 67 participants in each treatment arm) and 67 controls. The imaging sub-study is conducted at the University of Sydney and at Stanford University. Structural studies include high-resolution three-dimensional T1-weighted, diffusion tensor and T2/Proton Density scans. Functional studies include standardized functional magnetic resonance imaging (MRI) with three cognitive tasks (auditory oddball, a continuous performance task, and Go-NoGo) and two emotion tasks (unmasked conscious and masked non-conscious emotion processing tasks). After eight weeks of treatment, the functional MRI is repeated with the above tasks. We will establish the methods in the first 30 patients. Then we will identify predictors in the first half (n=102), test the findings in the second half, and then extend the analyses to the total sample. International Study to Predict Optimized Treatment--in Depression (iSPOT-D). ClinicalTrials.gov, NCT00693849.
Dispositional mindfulness is associated with reduced implicit learning.
Stillman, Chelsea M; Feldman, Halley; Wambach, Caroline G; Howard, James H; Howard, Darlene V
2014-08-01
Behavioral and neuroimaging evidence suggest that mindfulness exerts its salutary effects by disengaging habitual processes supported by subcortical regions and increasing effortful control processes supported by the frontal lobes. Here we investigated whether individual differences in dispositional mindfulness relate to performance on implicit sequence learning tasks in which optimal learning may in fact be impeded by the engagement of effortful control processes. We report results from two studies where participants completed a widely used questionnaire assessing mindfulness and one of two implicit sequence learning tasks. Learning was quantified using two commonly used measures of sequence learning. In both studies we detected a negative relationship between mindfulness and sequence learning, and the relationship was consistent across both learning measures. Our results, the first to show a negative relationship between mindfulness and implicit sequence learning, suggest that the beneficial effects of mindfulness do not extend to all cognitive functions. Copyright © 2014 Elsevier Inc. All rights reserved.
Prototypes, Exemplars, and the Natural History of Categorization
Smith, J. David
2013-01-01
The article explores—from a utility/adaptation perspective—the role of prototype and exemplar processes in categorization. The author surveys important category tasks within the categorization literature from the perspective of the optimality of applying prototype and exemplar processes. Formal simulations reveal that organisms will often (not always!) receive more useful signals about category belongingness if they average their exemplar experience into a prototype and use this as the comparative standard for categorization. This survey then provides the theoretical context for considering the evolution of cognitive systems for categorization. In the article’s final sections, the author reviews recent research on the performance of nonhuman primates and humans in the tasks analyzed in the article. Diverse species share operating principles, default commitments, and processing weaknesses in categorization. From these commonalities, it may be possible to infer some properties of the categorization ecology these species generally experienced during cognitive evolution. PMID:24005828
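A minimal simulation in the article's spirit, assuming two categories generated as noisy distortions of hidden prototypes; it compares a prototype (mean-based) rule with a summed-similarity exemplar rule, both invented here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_TRAIN, N_TEST = 10, 20, 2000

# Two hypothetical categories: noisy exemplars around distinct prototypes.
proto_a, proto_b = rng.normal(0, 1, D), rng.normal(0, 1, D)
train_a = proto_a + rng.normal(0, 1.0, (N_TRAIN, D))
train_b = proto_b + rng.normal(0, 1.0, (N_TRAIN, D))

def prototype_classify(x):
    # Compare to the average of experienced exemplars (the prototype).
    return (np.linalg.norm(x - train_a.mean(0))
            < np.linalg.norm(x - train_b.mean(0)))

def exemplar_classify(x):
    # Compare summed similarity to every stored exemplar (GCM-style).
    sim_a = np.exp(-np.linalg.norm(train_a - x, axis=1)).sum()
    sim_b = np.exp(-np.linalg.norm(train_b - x, axis=1)).sum()
    return sim_a > sim_b

test = proto_a + rng.normal(0, 1.0, (N_TEST, D))   # all drawn from category A
acc_proto = np.mean([prototype_classify(x) for x in test])
acc_exemp = np.mean([exemplar_classify(x) for x in test])
print(f"prototype: {acc_proto:.3f}  exemplar: {acc_exemp:.3f}")
```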
Gulati, Tanuj; Ramanathan, Dhakshin; Wong, Chelsea; Ganguly, Karunesh
2017-01-01
Brain-Machine Interfaces can allow neural control over assistive devices. They also provide an important platform to study neural plasticity. Recent studies indicate that optimal engagement of learning is essential for robust neuroprosthetic control. However, little is known about the neural processes that may consolidate a neuroprosthetic skill. Based on the growing body of evidence linking slow-wave activity (SWA) during sleep to consolidation, we examined if there is ‘offline’ processing after neuroprosthetic learning. Using a rodent model, here we show that after successful learning, task-related units specifically experienced increased locking and coherency to SWA during sleep. Moreover, spike-spike coherence among these units was significantly enhanced. These changes were not present with poor skill acquisition or after control awake periods, demonstrating specificity of our observations to learning. Interestingly, time spent in SWA predicted performance gains. Thus, SWA appears to play a role in offline processing after neuroprosthetic learning. PMID:24997761
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
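The benchmarked kernel is essentially FFT-based correlation, the DFT-heavy workload that suits the DSP core; here is a sketch with a synthetic image and template, not the OMAP/DSP implementation itself:

```python
import numpy as np

rng = np.random.default_rng(4)
image = rng.random((256, 256)).astype(np.float32)
template = image[100:132, 60:92].copy()           # known location (100, 60)

def correlate_fft(img, tpl):
    # Zero-mean the template so the correlation peaks at the best match.
    tpl = tpl - tpl.mean()
    H, W = img.shape
    F_img = np.fft.rfft2(img)
    F_tpl = np.fft.rfft2(tpl, s=(H, W))           # zero-padded to image size
    # Cross-correlation = inverse transform of product with conjugate spectrum.
    return np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))

corr = correlate_fft(image, template)
peak = np.unravel_index(np.argmax(corr), corr.shape)
print("best match at:", peak)                     # expected: (100, 60)
```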
Farmer, George D; Janssen, Christian P; Nguyen, Anh T; Brumby, Duncan P
2018-04-01
We test people's ability to optimize performance across two concurrent tasks. Participants performed a number entry task while controlling a randomly moving cursor with a joystick. Participants received explicit feedback on their performance on these tasks in the form of a single combined score. This payoff function was varied between conditions to change the value of one task relative to the other. We found that participants adapted their strategy for interleaving the two tasks, by varying how long they spent on one task before switching to the other, in order to achieve the near maximum payoff available in each condition. In a second experiment, we show that this behavior is learned quickly (within 2-3 min over several discrete trials) and remained stable for as long as the payoff function did not change. The results of this work show that people are adaptive and flexible in how they prioritize and allocate attention in a dual-task setting. However, it also demonstrates some of the limits regarding people's ability to optimize payoff functions. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
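A toy payoff model shows how an optimal visit length can emerge from such a trade-off; the functional forms and weights below are hypothetical, not the experiment's actual payoff function:

```python
import numpy as np

# Toy dual-task payoff: while typing digits you neglect a drifting cursor;
# longer visits to the typing task mean more digits entered but more drift.
def expected_payoff(visit_time, w_typing=1.0, w_cursor=2.0,
                    switch_cost=0.5, drift_rate=0.4):
    digits = visit_time / 0.4                    # digits typed per visit
    drift_penalty = drift_rate * visit_time**2   # drift grows with neglect
    cycle = visit_time + switch_cost             # visit plus switching time
    return (w_typing * digits - w_cursor * drift_penalty) / cycle

times = np.linspace(0.1, 8.0, 200)
best = times[int(np.argmax([expected_payoff(t) for t in times]))]
print(f"payoff-maximizing visit length = {best:.2f} s")

# Re-weighting the payoff shifts the optimum, mirroring how participants
# adapted their interleaving strategy between payoff conditions.
best2 = times[int(np.argmax([expected_payoff(t, w_cursor=6.0) for t in times]))]
print(f"with costlier cursor error    = {best2:.2f} s")
```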
Psychophysical Models for Signal Detection with Time Varying Uncertainty. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gai, E.
1975-01-01
Psychophysical models are developed for the behavior of the human operator in detection tasks that include changes in detectability, correlation between observations, and deferred decisions. Classical Signal Detection Theory (SDT) is discussed and its emphasis on the sensory processes is contrasted to decision strategies. The analysis of decision strategies utilizes detection tasks with time-varying signal strength. The classical theory is modified to include such tasks and several optimal decision strategies are explored. Two methods of classifying strategies are suggested. The first method is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments to verify the analysis of tasks with changes of signal strength are designed. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CLs.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
Optimizing The Number Of Steps In Learning Tasks For Complex Skills
ERIC Educational Resources Information Center
Nadolski, Rob J.; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.
2005-01-01
Background: Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. Aim: The aim of the study is to investigate the relation between the number of…
An invertebrate embryologist's guide to routine processing of confocal images.
von Dassow, George
2014-01-01
It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
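Two of the routine operations mentioned, background suppression and merging channels into a false-colored image, reduce to a few array operations. The following is an illustrative sketch only (numpy assumed), not the author's protocol:

import numpy as np

def suppress_background(channel, offset):
    # Subtract a constant background estimate and clip negative values.
    return np.clip(channel.astype(np.float32) - offset, 0.0, None)

def merge_two_channels(ch_green, ch_red):
    # False-color merge: one channel mapped to green, the other to red.
    rgb = np.zeros(ch_green.shape + (3,), dtype=np.float32)
    rgb[..., 0] = ch_red / max(float(ch_red.max()), 1e-6)
    rgb[..., 1] = ch_green / max(float(ch_green.max()), 1e-6)
    return rgb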
Vakil, Eli; Hassin-Baer, Sharon; Karni, Avi
2014-05-01
There are inconsistent results in the research literature relating to whether a procedural memory dysfunction exists as a core deficit in Parkinson's disease (PD). To address this issue, we examined the acquisition and long-term retention of a cognitive skill in patients with moderately severe PD. To this end, we used a computerized version of the Tower of Hanoi Puzzle. Sixteen patients with PD (11 males, age 60.9±10.26 years, education 13.8±3.5 years, disease duration 8.6±4.7 years, UPDRS III "On" score 16±5.3) were compared with 20 healthy individuals matched for age, gender, education and MMSE scores. The patients were assessed while taking their anti-Parkinsonian medication. All participants underwent three consecutive practice sessions, 24-48 h apart, and a retention-test session six months later. A computerized version of the Tower of Hanoi Puzzle, with four disks, was used for training. Participants completed the task 18 times in each session. Number of moves (Nom) to solution and time per move (Tpm) were used as measures of acquisition and retention of the learned skill. Robust learning, a significant reduction in Nom and a concurrent decrease in Tpm, was found across all three training sessions in both groups. Moreover, both patients and controls showed significant savings on both measures at six months post-training. However, while their Tpm was no slower than that of controls, patients with PD required more Nom (in the 3rd and 4th sessions) and tended to stabilize on less-than-optimal solutions. The results do not support the notion of a core deficit in gaining speed (fluency) or in generating procedural memory in PD. However, PD patients settled on less-than-optimal solutions of the task, i.e., a less efficient task-solving process. The results are consistent with animal studies of the effects of dopamine depletion on task exploration. Thus, patients with PD may have a problem in exploring for the optimal task solution rather than in skill acquisition and retention per se. Copyright © 2014. Published by Elsevier Ltd.
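For reference, the optimal four-disk Tower of Hanoi solution takes 2^4 - 1 = 15 moves, the floor against which "less-than-optimal" solutions are judged. A standard recursive solver (a generic sketch, not the study's software) makes the count explicit:

def hanoi(n, src, dst, aux, moves):
    # Move n disks from src to dst via aux; the optimal count is 2**n - 1.
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)
    moves.append((src, dst))
    hanoi(n - 1, aux, dst, src, moves)

moves = []
hanoi(4, "A", "C", "B", moves)
print(len(moves))   # 15 moves for four disks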
McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M
2012-01-01
To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).
Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C
2014-05-01
Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions of interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
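The CHO referenced above is a linear observer; as a hedged numerical sketch (random stand-in data, with the Laguerre-Gauss channelization step omitted), its template and detectability index can be computed as follows:

import numpy as np

rng = np.random.default_rng(0)
n_channels, n_imgs = 10, 200
# Stand-in channelized outputs for signal-present and signal-absent images.
v_sig = rng.normal(0.5, 1.0, (n_imgs, n_channels))
v_bkg = rng.normal(0.0, 1.0, (n_imgs, n_channels))

# Hotelling template: w = S^{-1} (mean_sig - mean_bkg), with S the
# average intra-class covariance of the channel outputs.
dmean = v_sig.mean(axis=0) - v_bkg.mean(axis=0)
S = 0.5 * (np.cov(v_sig.T) + np.cov(v_bkg.T))
w = np.linalg.solve(S, dmean)

t_sig, t_bkg = v_sig @ w, v_bkg @ w   # scalar test statistics
snr = (t_sig.mean() - t_bkg.mean()) / np.sqrt(0.5 * (t_sig.var() + t_bkg.var()))
print(snr)   # CHO detectability (SNR) for the stand-in data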
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
Metaheuristic Optimization and its Applications in Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered as an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in earth sciences, including nonlinear least-squares, support vector machine, Kriging, inverse finite element analysis, and data-mining. We will present a few examples to show how different problems can be reformulated as optimization. Finally, we will make some recommendations for choosing various algorithms to suit various problems. References 1) D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolutionary Computation, Vol. 1, 67-82 (1997). 2) X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, (2008). 3) X. S. Yang, Mathematical Modelling for Earth Sciences, Dunedin Academic Press, (2008).
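Among the algorithms listed, simulated annealing is the most compact to sketch. The toy minimization below (objective and cooling schedule invented for illustration) shows the accept-worse-moves-with-decreasing-probability principle that most of these metaheuristics share in some form:

import math, random

def objective(x):
    return x**2 + 10.0 * math.sin(x)   # toy multimodal function

def simulated_annealing(x0, temp=10.0, cooling=0.99, n_iter=5000):
    x, fx = x0, objective(x0)
    for _ in range(n_iter):
        cand = x + random.gauss(0.0, 1.0)
        fc = objective(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-(fc - fx) / temp).
        if fc < fx or random.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
        temp *= cooling
    return x, fx

print(simulated_annealing(5.0))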
Manipulation and handling processes off-line programming and optimization with use of K-Roset
NASA Astrophysics Data System (ADS)
Gołda, G.; Kampa, A.
2017-08-01
Contemporary trends in the development of efficient, flexible manufacturing systems require practical implementation of modern “Lean production” concepts, maximizing customer value by minimizing waste in manufacturing and logistics processes. Every FMS is built from automated and robotized production cells. Apart from flexible CNC machine tools and other equipment, industrial robots are the primary elements of such a system. In these studies, the authors look for wasted time and cost in real robot tasks during manipulation processes. To optimize handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the best available approach and the only practical way to eliminate unnecessary movements and instructions. The modelling of a robotized production cell and the off-line programming of Kawasaki robots in AS-Language are described. The simulation of the robotized workstation is realized with the virtual reality software K-Roset. The authors show how industrial robot programs can be improved and optimized to minimize the number of useless manipulator movements and unnecessary instructions, shortening production cycle times and also reducing the costs of handling, manipulation, and the technological process.
Reasoning and memory: People make varied use of the information available in working memory.
Hardman, Kyle O; Cowan, Nelson
2016-05-01
Working memory (WM) is used for storing information in a highly accessible state so that other mental processes, such as reasoning, can use that information. Some WM tasks require that participants not only store information, but also reason about that information to perform optimally on the task. In this study, we used visual WM tasks that had both storage and reasoning components to determine both how ideally people are able to reason about information in WM and if there is a relationship between information storage and reasoning. We developed novel psychological process models of the tasks that allowed us to estimate for each participant both how much information they had in WM and how efficiently they reasoned about that information. Our estimates of information use showed that participants are not all ideal information users or minimal information users, but rather that there are individual differences in the thoroughness of information use in our WM tasks. However, we found that our participants tended to be more ideal than minimal. One implication of this work is that to accurately estimate the amount of information in WM, it is important to also estimate how efficiently that information is used. This new analysis contributes to the theoretical premise that human rationality may be bounded by the complexity of task demands. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Zeeb, Fiona D; Winstanley, Catharine A
2013-04-10
An inability to adjust choice preferences in response to changes in reward value may underlie key symptoms of many psychiatric disorders, including chemical and behavioral addictions. We developed the rat gambling task (rGT) to investigate the neurobiology underlying complex decision-making processes. As in the Iowa Gambling task, the optimal strategy is to avoid choosing larger, riskier rewards and to instead favor options associated with smaller rewards but less loss and, ultimately, greater long-term gain. Given the demonstrated importance of the orbitofrontal cortex (OFC) and basolateral amygdala (BLA) in acquisition of the rGT and Iowa Gambling task, we used a contralateral disconnection lesion procedure to assess whether functional connectivity between these regions is necessary for optimal decision-making. Disrupting the OFC-BLA pathway retarded acquisition of the rGT. Devaluing the reinforcer by inducing sensory-specific satiety altered decision-making in control groups. In contrast, disconnected rats did not update their choice preference following reward devaluation, either when the devalued reward was still delivered or when animals needed to rely on stored representations of reward value (i.e., during extinction). However, all rats exhibited decreased premature responding and slower response latencies after satiety manipulations. Hence, disconnecting the OFC and BLA did not affect general behavioral changes caused by reduced motivation, but instead prevented alterations in the value of a specific reward from contributing appropriately to cost-benefit decision-making. These results highlight the role of the OFC-BLA pathway in the decision-making process and suggest that communication between these areas is vital for the appropriate assessment of reward value to influence choice.
High-Frequency Binaural Beats Increase Cognitive Flexibility: Evidence from Dual-Task Crosstalk.
Hommel, Bernhard; Sellaro, Roberta; Fischer, Rico; Borg, Saskia; Colzato, Lorenza S
2016-01-01
Increasing evidence suggests that cognitive-control processes can be configured to optimize either persistence of information processing (by amplifying competition between decision-making alternatives and top-down biasing of this competition) or flexibility (by dampening competition and biasing). We investigated whether high-frequency binaural beats, an auditory illusion suspected to act as a cognitive enhancer, have an impact on cognitive-control configuration. We hypothesized that binaural beats in the gamma range bias the cognitive-control style toward flexibility, which in turn should increase the crosstalk between tasks in a dual-task paradigm. We replicated earlier findings that the reaction time in the first-performed task is sensitive to the compatibility between the responses in the first and the second task-an indication of crosstalk. As predicted, exposing participants to binaural beats in the gamma range increased this effect as compared to a control condition in which participants were exposed to a continuous tone of 340 Hz. These findings provide converging evidence that the cognitive-control style can be systematically biased by inducing particular internal states; that high-frequency binaural beats bias the control style toward more flexibility; and that different styles are implemented by changing the strength of local competition and top-down bias.
Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.
Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina
2018-05-14
The cerebral cortex modulates early sensory processing via feedback connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual-speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: together with similar findings in the auditory modality, the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.
OPTIMIZING EXPOSURE MEASUREMENT TECHNIQUES
The research reported in this task description addresses one of a series of interrelated NERL tasks with the common goal of optimizing the predictive power of low cost, reliable exposure measurements for the planned Interagency National Children's Study (NCS). Specifically, we w...
Ergonomics and design: its principles applied in the industry.
Tavares, Ademario Santos; Silva, Francisco Nilson da
2012-01-01
Industrial Design encompasses both product development and the optimization of production processes. In this sense, Ergonomics plays a fundamental role, because its principles, methods and techniques can help operators carry out their tasks more successfully. A case study carried out in industry shows that interaction among the Design, Production Engineering and Materials Engineering departments may improve aspects concerning safety, comfort, efficiency and performance. In this process, Ergonomics proved to be of essential importance to strategic decision making for the improvement of the production section.
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
Li, Lian-Hui; Mo, Rong
2015-01-01
The production task queue has great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization performs poorly and is difficult to apply. A production task queue optimization method is therefore proposed based on multi-attribute evaluation. According to the task attributes, a hierarchical multi-attribute model is established and indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judges' importance degrees, and a trapezoid fuzzy scale-rough AHP that accounts for those importance degrees is put forward. A balanced weight, integrating the objective and subjective weights, is calculated with a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing Euclidean distance with relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator values. A case study illustrates the method's correctness and feasibility.
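The TOPSIS step at the end of the pipeline is compact enough to sketch. The version below uses the standard Euclidean distance (the paper's relative-entropy variant would replace the two distance lines); the decision matrix and weights are invented for illustration:

import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: tasks x indicators; benefit[j] is True if larger is better.
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal solution
    d_neg = np.linalg.norm(v - worst, axis=1)        # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # relative closeness

scores = topsis(np.array([[7.0, 0.3], [5.0, 0.6], [9.0, 0.2]]),
                np.array([0.6, 0.4]), np.array([True, False]))
print(np.argsort(-scores))   # task queue, best first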
Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER
2014-01-01
Background HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with Intel SSE2 instruction set extension. Results A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model’s size. PMID:24884826
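Beneath the SIMD and cache-partitioning contributions sits the plain Viterbi recurrence; a scalar log-space sketch on a toy two-state HMM (illustrative numbers only) fixes the computation being vectorized:

import numpy as np

def viterbi_score(obs, log_init, log_trans, log_emit):
    # log_init[s], log_trans[s, s'], log_emit[s, o]: HMM in log space.
    score = log_init + log_emit[:, obs[0]]
    for o in obs[1:]:
        # Best predecessor for each state, then add the emission score.
        score = (score[:, None] + log_trans).max(axis=0) + log_emit[:, o]
    return score.max()   # log-probability of the single best state path

logp = np.log(np.array([0.6, 0.4]))
logA = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
logB = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
print(viterbi_score([0, 1, 0], logp, logA, logB))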
EEG Dynamics Reflect the Distinct Cognitive Process of Optic Problem Solving
She, Hsiao-Ching; Jung, Tzyy-Ping; Chou, Wen-Chi; Huang, Li-Yu; Wang, Chia-Yu; Lin, Guan-Yu
2012-01-01
This study explores the changes in electroencephalographic (EEG) activity associated with the performance of solving an optics maze problem. College students (N = 37) were instructed to construct three solutions to the optical maze in a Web-based learning environment, which required some knowledge of physics. The subjects put forth their best effort to minimize the number of convex lenses and mirrors needed to guide the image of an object from the entrance to the exit of the maze. This study examines EEG changes in different frequency bands accompanying varying demands on the cognitive process of providing solutions. Results showed that the mean power of θ, α1, α2, and β1 significantly increased as the number of convex lenses and mirrors used by the students decreased from solution 1 to 3. Moreover, the mean power of θ and α1 significantly increased when the participants constructed their personal optimal solution (the least total number of mirrors and lenses used by the students) compared to their non-personal optimal solution. In conclusion, the spectral power of frontal, frontal midline and posterior theta, posterior alpha, and temporal beta increased predominantly as the task demands and task performance increased. PMID:22815800
Assessing performance in complex team environments.
Whitmore, Jeffrey N
2005-07-01
This paper provides a brief introduction to team performance assessment. It highlights some critical aspects leading to the successful measurement of team performance in realistic console operations; discusses the idea of process and outcome measures; presents two types of team data collection systems; and provides an example of team performance assessment. Team performance assessment is a complicated endeavor relative to assessing individual performance. Assessing team performance necessitates a clear understanding of each operator's task, both at the individual and team level, and requires planning for efficient data capture and analysis. Though team performance assessment requires considerable effort, the results can be very worthwhile. Most tasks performed in Command and Control environments are team tasks, and understanding this type of performance is becoming increasingly important to the evaluation of mission success and for overall system optimization.
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
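As a schematic illustration of the recasting step (a textbook construction in our own notation, not an excerpt from the paper), a saturable Michaelis-Menten-type rate is brought into power-law form by introducing an auxiliary variable for the denominator:

v = \frac{V_{\max} S}{K_M + S}, \qquad W := K_M + S \;\Rightarrow\; v = V_{\max}\, S\, W^{-1}, \qquad \dot{W} = \dot{S}.

Every rate in the recast system is then a product of power laws, matching the GMA form \dot{X}_i = \sum_k \gamma_{ik} \prod_j X_j^{f_{ijk}} on which the global optimization machinery operates.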
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support both in the processing of voluminous sensor data and sensor asset control can relieve the burden of human operators to support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker applied to multi-target search and track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented. The algorithm to jointly optimize sensor schedules against search, track, and classify is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and account for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.
SENSOR++: Simulation of Remote Sensing Systems from Visible to Thermal Infrared
NASA Astrophysics Data System (ADS)
Paproth, C.; Schlüßler, E.; Scherbaum, P.; Börner, A.
2012-07-01
During the development process of a remote sensing system, the optimization and the verification of the sensor system are important tasks. To support these tasks, the simulation of the sensor and its output is valuable. This enables the developers to test algorithms, estimate errors, and evaluate the capabilities of the whole sensor system before the final remote sensing system is available and produces real data. The presented simulation concept, SENSOR++, consists of three parts. The first part is the geometric simulation, which calculates where the sensor looks by using a ray tracing algorithm. This also determines whether the observed part of the scene is shadowed or not. The second part describes the radiometry and results in the spectral at-sensor radiance from the visible spectrum to the thermal infrared according to the simulated sensor type. In the case of earth remote sensing, it also includes a model of the radiative transfer through the atmosphere. The final part uses the at-sensor radiance to generate digital images by using an optical and an electronic sensor model. Using SENSOR++ for an optimization requires the additional application of task-specific data processing algorithms. The principle of the simulation approach is explained, all relevant concepts of SENSOR++ are discussed, and first examples of its use are given, for example a camera simulation for a moon lander. Finally, the verification of SENSOR++ is demonstrated.
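The third stage, turning at-sensor radiance into digital images, typically reduces to a radiometric camera equation; the sketch below is a generic sensor model under assumed parameters, not SENSOR++'s actual implementation:

import numpy as np

def radiance_to_dn(radiance, gain=0.8, offset=5.0, read_noise=2.0,
                   dn_max=4095, rng=np.random.default_rng(0)):
    # Generic electronic model: linear response, shot noise (Poisson),
    # read noise (Gaussian), then quantization to a 12-bit ADC range.
    signal = gain * radiance
    noisy = rng.poisson(np.maximum(signal, 0.0)) \
            + rng.normal(0.0, read_noise, np.shape(radiance))
    return np.clip(np.round(noisy + offset), 0, dn_max).astype(np.uint16)

print(radiance_to_dn(np.full((2, 2), 100.0)))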
Learning and inference using complex generative models in a spatial localization task.
Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N
2016-01-01
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
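The Bayes-optimal computation implied here, a Gaussian likelihood combined with a learned two-mode Gaussian-mixture prior, has a closed form; a hedged numerical sketch with illustrative numbers:

import numpy as np
from scipy.stats import norm

def posterior_mean(x_sensed, sigma_sense, modes):
    # modes: (weight, mu, sigma) triples for the bimodal location prior.
    post_w, post_mu = [], []
    for w, mu, sig in modes:
        var_post = 1.0 / (1.0 / sig**2 + 1.0 / sigma_sense**2)
        mu_post = var_post * (mu / sig**2 + x_sensed / sigma_sense**2)
        # Mixture responsibility: prior weight times the mode's evidence.
        evid = w * norm.pdf(x_sensed, mu, np.sqrt(sig**2 + sigma_sense**2))
        post_w.append(evid)
        post_mu.append(mu_post)
    post_w = np.array(post_w) / np.sum(post_w)
    return float(post_w @ np.array(post_mu))   # optimal location estimate

prior = [(0.5, -2.0, 0.5), (0.5, 2.0, 1.5)]   # two modes, unequal variance
print(posterior_mean(1.0, sigma_sense=1.0, modes=prior))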
Hybrid glowworm swarm optimization for task scheduling in the cloud environment
NASA Astrophysics Data System (ADS)
Zhou, Jing; Dong, Shoubin
2018-06-01
In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm based on glowworm swarm optimization (GSO), which combines evolutionary computation, a quantum-behaviour strategy based on the neighbourhood principle, offspring production, and random walk to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces the redundant computation and the dependence on the initialization of GSO, accelerates convergence, and escapes local optima more easily. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
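For orientation, one iteration of the underlying GSO scheme (without the quantum-behaviour, offspring and random-walk extensions that make HGSO hybrid) can be sketched roughly as follows; the constants and the fitness function are illustrative:

import numpy as np

rng = np.random.default_rng(1)

def gso_step(X, lucif, fitness, rho=0.4, gamma=0.6, step=0.03, radius=1.0):
    # Luciferin update: decay plus reinforcement by current fitness.
    lucif = (1.0 - rho) * lucif + gamma * fitness(X)
    X_new = X.copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.where((d < radius) & (lucif > lucif[i]))[0]
        if len(nbrs):   # move toward a randomly chosen brighter neighbour
            j = rng.choice(nbrs)
            X_new[i] += step * (X[j] - X[i]) / max(d[j], 1e-12)
    return X_new, lucif

fit = lambda X: -np.sum(X**2, axis=1)   # maximize => glowworms seek the origin
X, lucif = rng.uniform(-3.0, 3.0, (20, 2)), np.full(20, 5.0)
for _ in range(100):
    X, lucif = gso_step(X, lucif, fit)
print(X.mean(axis=0))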
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
Beads task vs. box task: The specificity of the jumping to conclusions bias.
Balzan, Ryan P; Ephraums, Rachel; Delfabbro, Paul; Andreou, Christina
2017-09-01
Previous research involving the probabilistic reasoning 'beads task' has consistently demonstrated a jumping-to-conclusions (JTC) bias, where individuals with delusions make decisions based on limited evidence. However, recent studies have suggested that miscomprehension may be confounding the beads task. The current study aimed to test the conventional beads task against a conceptually simpler probabilistic reasoning "box task". One hundred non-clinical participants completed both the beads task and the box task, as well as the Peters et al. Delusions Inventory (PDI) to assess delusion-proneness. The number of 'draws to decision' was assessed for both tasks. Additionally, the total amount of on-screen evidence was manipulated for the box task, and two new box task measures were assessed (i.e., 'proportion of evidence requested' and 'deviation from optimal solution'). Despite being conceptually similar, the two tasks did not correlate, and participants requested significantly less information on the beads task relative to the box task. High-delusion-prone participants did not demonstrate hastier decisions on either task; in fact, for the box task, this group was observed to be significantly more conservative than the low-delusion-prone group. Neither task was incentivized; results need replication with a clinical sample. Participants, and particularly those identified as high-delusion-prone, displayed a more conservative style of responding on the novel box task, relative to the beads task. The two tasks, whilst conceptually similar, appear to be tapping different cognitive processes. The implications of these results are discussed in relation to the JTC bias and the theoretical mechanisms thought to underlie it. Copyright © 2016 Elsevier Ltd. All rights reserved.
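For context, 'draws to decision' is interpretable against a normative benchmark because the posterior in a beads-type task is a one-line Bayesian update. The sketch below assumes the common 85:15 bead ratio, a convention rather than a detail taken from this paper:

def posterior_jar_a(draws, p_a=0.85, prior=0.5):
    # draws: sequence of 'a' (majority colour of jar A) or 'b' beads.
    odds = prior / (1.0 - prior)
    for bead in draws:
        lr = p_a / (1.0 - p_a) if bead == "a" else (1.0 - p_a) / p_a
        odds *= lr   # likelihood-ratio update of the odds favouring jar A
    return odds / (1.0 + odds)

print(posterior_jar_a("aab"))   # ~0.85 after two 'a' beads and one 'b'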
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dandina N. Rao; Subhash C. Ayirala; Madhav M. Kulkarni
This is the final report describing the evolution of the project ''Development and Optimization of Gas-Assisted Gravity Drainage (GAGD) Process for Improved Light Oil Recovery'' from its conceptual stage in 2002 to the field implementation of the developed technology in 2006. This comprehensive report includes all the experimental research, model developments, analyses of results, salient conclusions and the technology transfer efforts. As planned in the original proposal, the project has been conducted in three separate and concurrent tasks: Task 1 involved a physical model study of the new GAGD process, Task 2 was aimed at further developing the vanishing interfacial tension (VIT) technique for gas-oil miscibility determination, and Task 3 was directed at determining multiphase gas-oil drainage and displacement characteristics in reservoir rocks at realistic pressures and temperatures. The project started with the task of recruiting well-qualified graduate research assistants. After collecting and reviewing the literature on different aspects of the project such as gas injection EOR, gravity drainage, miscibility characterization, and gas-oil displacement characteristics in porous media, research plans were developed for the experimental work to be conducted under each of the three tasks. Based on the literature review and dimensional analysis, preliminary criteria were developed for the design of the partially-scaled physical model. Additionally, the need for a separate transparent model for visual observation and verification of the displacement and drainage behavior under gas-assisted gravity drainage was identified. Various materials and methods (ceramic porous material, Stucco, Portland cement, sintered glass beads) were attempted in order to fabricate a satisfactory visual model. In addition to proving the effectiveness of the GAGD process (through measured oil recoveries in the range of 65 to 87% IOIP), the visual models demonstrated three possible multiphase mechanisms at work, namely, Darcy-type displacement until gas breakthrough, gravity drainage after breakthrough and film-drainage in gas-invaded zones throughout the duration of the process. The partially-scaled physical model was used in a series of experiments to study the effects of wettability, gas-oil miscibility, secondary versus tertiary mode gas injection, and the presence of fractures on GAGD oil recovery. In addition to yielding recoveries of up to 80% IOIP, even in the immiscible gas injection mode, the partially-scaled physical model confirmed the positive influence of fractures and oil-wet characteristics in enhancing oil recoveries over those measured in the homogeneous (unfractured) water-wet models. An interesting observation was that a single logarithmic relationship between the oil recovery and the gravity number was obeyed by the physical model, the high-pressure corefloods and the field data.
Optimal allocation model of construction land based on two-level system optimization theory
NASA Astrophysics Data System (ADS)
Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong
2007-06-01
The allocation of construction land is an important task in land-use planning. Whether planning decisions are implemented successfully usually depends on a reasonable and scientific distribution method. Given the structure of the land-use planning system and planning process in China, the problem is in essence one of multi-level, multi-objective decision making. In particular, planning quantity decomposition is a two-level system optimization problem: an optimal resource allocation decision problem between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process in a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and the validity of our model, Baoan district of Shenzhen City has been taken as a test case. With the assistance of the allocation model, construction land is allocated to ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the comparison shows that our model is reasonable and usable. Finally, the paper points out the shortcomings of the model and directions for further research.
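Schematically (our notation, assumed for illustration rather than taken from the paper), a two-level linear allocation of a district quota Q to n townships puts the district planner at the upper level and lets each township i solve its own land-use linear program under the quota q_i it receives:

\max_{q_1,\dots,q_n} \; \sum_{i=1}^{n} c_i^{\top} x_i^{*}(q_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} q_i \le Q, \quad q_i \ge 0,

x_i^{*}(q_i) = \arg\max_{x_i} \left\{ d_i^{\top} x_i \;:\; A_i x_i \le b_i, \; \mathbf{1}^{\top} x_i \le q_i, \; x_i \ge 0 \right\}.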
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
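A rough sketch of the Cache Common Segments idea as we read it from the abstract (our reconstruction, not NIST's code): memoize the expensive per-segment signature checks, keyed by the signed bytes, so that segments shared across many updates are verified cryptographically only once:

import hashlib

class SegmentVerifier:
    def __init__(self, verify_fn, capacity=100_000):
        self.verify_fn = verify_fn    # the real crypto check, e.g. ECDSA P-256
        self.cache, self.capacity = {}, capacity

    def verify(self, signed_bytes, signature, key_id):
        # All three arguments are bytes; together they identify the segment.
        k = hashlib.sha256(signed_bytes + signature + key_id).digest()
        if k not in self.cache:       # pay for crypto only on a cache miss
            if len(self.cache) >= self.capacity:
                self.cache.pop(next(iter(self.cache)))   # crude FIFO eviction
            self.cache[k] = self.verify_fn(signed_bytes, signature, key_id)
        return self.cache[k]

def validate_update(segments, verifier):
    # A BGPSEC update validates only if every signature segment verifies.
    return all(verifier.verify(*seg) for seg in segments)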
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catalano, A.; Arya, R.; Carr, L.
1992-05-01
This report describes progress during the first year of a three-year research program to develop 12%-efficient CuInSe₂ (CIS) submodules with area greater than 900 cm². To meet this objective, the program was divided into five tasks: (1) windows, contacts, substrates; (2) absorber material; (3) device structure; (4) submodule design and encapsulation; and (5) process optimization. In the first year of the program, work was concentrated on the first three tasks with an objective to demonstrate a 9%-efficient CIS solar cell. 7 refs.
Efficient algorithms for a class of partitioning problems
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf; Bokhari, Shahid H.
1990-01-01
The problem of optimally partitioning the modules of chain- or tree-like tasks over chain-structured or host-satellite multiple computer systems is addressed. This important class of problems includes many signal processing and industrial control applications. Prior research has resulted in a succession of faster exact and approximate algorithms for these problems. Polynomial exact and approximate algorithms are described for this class that are better than any of the previously reported algorithms. The approach is based on a preprocessing step that condenses the given chain or tree structured task into a monotonic chain or tree. The partitioning of this monotonic task can then be carried out using fast search techniques.
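For the chain-on-chain case the usual objective is the bottleneck processor load; a compact dynamic-programming sketch over contiguous partitions (a generic formulation in the spirit of the problem, not the paper's faster condensation-based algorithm):

import functools

def min_bottleneck(weights, n_procs):
    # Split the module chain into n_procs contiguous blocks so that the
    # heaviest block (the bottleneck load) is as light as possible.
    @functools.lru_cache(maxsize=None)
    def best(i, k):   # modules i.. assigned to k processors
        if k == 1:
            return sum(weights[i:])
        return min(max(sum(weights[i:j]), best(j, k - 1))
                   for j in range(i + 1, len(weights) - k + 2))
    return best(0, n_procs)

print(min_bottleneck((4, 2, 7, 1, 3, 5), 3))   # 8, e.g. [4,2][7,1][3,5]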
Zeeb, Fiona D; Baarendse, P J J; Vanderschuren, L J M J; Winstanley, Catharine A
2015-12-01
Studies employing the Iowa Gambling Task (IGT) demonstrated that areas of the frontal cortex, including the ventromedial prefrontal cortex, orbitofrontal cortex (OFC), dorsolateral prefrontal cortex, and anterior cingulate cortex (ACC), are involved in the decision-making process. However, the precise role of these regions in maintaining optimal choice is not clear. We used the rat gambling task (rGT), a rodent analogue of the IGT, to determine whether inactivation of or altered dopamine signalling within discrete cortical sub-regions disrupts decision-making. Following training on the rGT, animals were implanted with guide cannulae aimed at the prelimbic (PrL) or infralimbic (IL) cortices, the OFC, or the ACC. Prior to testing, rats received an infusion of saline or a combination of baclofen and muscimol (0.125 μg of each/side) to inactivate the region and an infusion of a dopamine D2 receptor antagonist (0, 0.1, 0.3, and 1.0 μg/side). Rats tended to increase their choice of a disadvantageous option and decrease their choice of the optimal option following inactivation of either the IL or PrL cortex. In contrast, OFC or ACC inactivation did not affect decision-making. Infusion of a dopamine D2 receptor antagonist into any sub-region did not alter choice preference. Online activity of the IL or PrL cortex is important for maintaining an optimal decision-making strategy, but optimal performance on the rGT does not require frontal cortex dopamine D2 receptor activation. Additionally, these results demonstrate that the roles of different cortical regions in cost-benefit decision-making may be dissociated using the rGT.
Visual Perceptual Learning and Models.
Dosher, Barbara; Lu, Zhong-Lin
2017-09-15
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
Energetic arousal and language: predictions from the computational theory of quantifiers processing.
Zajenkowski, Marcin
2013-10-01
The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.
Design optimization for active twist rotor blades
NASA Astrophysics Data System (ADS)
Mok, Ji Won
This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This framework allows a rich and highly nonlinear design space to be explored in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. In the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with the locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and blade strength based on local three-dimensional strain fields under worst-case loading conditions. Through pre-processing, limitations of the proposed process were studied, and resolution strategies were proposed where limitations were detected. These include the handling of mesh overlapping, element distortion, trailing-edge tab modeling, electrode modeling, and foam implementation in the mesh generator, as well as the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and demonstrated a significant impact on vibration reduction, the proposed optimization process showed that it could still be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to explore the nonlinear design space of a complex planform. For this case in particular, a detailed design was carried out to make the actual blade manufacturable. The proposed optimization framework is shown to be an effective tool for designing high-authority active twist blades that reduce vibration in future helicopter rotors.
Cerebellar tDCS Modulates Neural Circuits during Semantic Prediction: A Combined tDCS-fMRI Study.
D'Mello, Anila M; Turkeltaub, Peter E; Stoodley, Catherine J
2017-02-08
It has been proposed that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. In language, semantic prediction speeds speech production and comprehension. Right cerebellar lobules VI and VII (including Crus I/II) are engaged during a variety of language processes and are functionally connected with cerebral cortical language networks. Further, right posterolateral cerebellar neuromodulation modifies behavior during predictive language processing. These data are consistent with a role for the cerebellum in semantic processing and semantic prediction. We combined transcranial direct current stimulation (tDCS) and fMRI to assess the behavioral and neural consequences of cerebellar tDCS during a sentence completion task. Task-based and resting-state fMRI data were acquired in healthy human adults (n = 32; mean age = 23.1 years) both before and after 20 min of 1.5 mA anodal (n = 18) or sham (n = 14) tDCS applied to the right posterolateral cerebellum. In the sentence completion task, the first four words of the sentence modulated the predictability of the final target word. In some sentences, the preceding context strongly predicted the target word, whereas other sentences were nonpredictive. Completion of predictive sentences increased activation in right Crus I/II of the cerebellum. Relative to sham tDCS, anodal tDCS increased activation in right Crus I/II during semantic prediction and enhanced resting-state functional connectivity between hubs of the reading/language networks. These results are consistent with a role for the right posterolateral cerebellum beyond motor aspects of language, and suggest that cerebellar internal models of linguistic stimuli support semantic prediction. SIGNIFICANCE STATEMENT Cerebellar involvement in language tasks and language networks is now well established, yet the specific cerebellar contribution to language processing remains unclear. It is thought that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. Here we combined neuroimaging and neuromodulation to provide evidence that the cerebellum is specifically involved in semantic prediction during sentence processing. We found that activation within right Crus I/II was enhanced when semantic predictions were made, and we show that modulation of this region with transcranial direct current stimulation alters both activation patterns and functional connectivity within whole-brain language networks. For the first time, these data show that cerebellar neuromodulation impacts activation patterns specifically during predictive language processing. Copyright © 2017 the authors.
An optimization tool for satellite equipment layout
NASA Astrophysics Data System (ADS)
Qin, Zheng; Liang, Yan-gang; Zhou, Jian-ping
2018-01-01
Selection of the satellite equipment layout under performance constraints is a complex task that can be viewed as a constrained multi-objective optimization and a multiple-criteria decision-making problem. The layout design of a satellite cabin involves locating the required equipment in a limited space while satisfying various behavioral constraints of the interior and exterior environments. The layout optimization considered in this paper includes the center-of-gravity (C.G.) offset, the moments of inertia, and the space-debris impact risk of the system; the impact risk index is developed to quantify the risk to a satellite cabin of coming into contact with space debris. An optimization tool that integrates CAD software with optimization algorithms is presented, developed to automatically find solutions for a three-dimensional layout of equipment in a satellite. The effectiveness of the tool is demonstrated by applying it to the layout optimization of a satellite platform.
Differences in Multitask Resource Reallocation After Change in Task Values.
Matton, Nadine; Paubel, Pierre; Cegarra, Julien; Raufaste, Eric
2016-12-01
The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values. When solving a resource conflict in multitasking, Salvucci and Taatgen predict that a globally rational strategy will be followed, one that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, with detrimental consequences for global performance. Moreover, the question remains open whether expertise affects the choice of strategy. We adopted a multitask environment used for pilot selection, with a change in emphasis on two of four subtasks while all subtasks had to be maintained above a minimum performance level. A laboratory eye-tracking study contrasted 20 recently selected pilot students, considered experienced with this task, and 15 university students, considered novices. When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks from falling below minimum performance. By contrast, experienced participants delayed the processing of one low-value subtask but managed to optimize global performance. In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy. During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment. © 2016, Human Factors and Ergonomics Society.
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint-torque level. Our main contribution is to apply the separation principle to the Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
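A minimal sketch of the constrained-optimization framing described above, in our own notation (C is a postural cost over joint configurations q, f the task map, J its Jacobian), shows why the multipliers behave like task-space force fields:

```latex
% Redundancy resolution as constrained optimization (sketch, our notation):
\begin{align}
  &\min_{q}\; C(q) \quad \text{s.t.} \quad f(q) = x_{\mathrm{task}},\\
  &\mathcal{L}(q,\lambda) = C(q) + \lambda^{\top}\bigl(x_{\mathrm{task}} - f(q)\bigr),\\
  &\nabla_{q}\, C(q) = J(q)^{\top}\lambda .
\end{align}
% The multipliers \lambda enter through J^T exactly like task-space forces,
% which is what lets the separation principle be applied to them.
```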
Feasibility of Active Machine Learning for Multiclass Compound Classification.
Lang, Tobias; Flachsenberg, Florian; von Luxburg, Ulrike; Rarey, Matthias
2016-01-25
A common task in the hit-to-lead process is classifying sets of compounds into multiple, usually structural classes, which build the groundwork for subsequent SAR studies. Machine learning techniques can be used to automate this process by learning classification models from training compounds of each class. Gathering class information for compounds can be cost-intensive as the required data needs to be provided by human experts or experiments. This paper studies whether active machine learning can be used to reduce the required number of training compounds. Active learning is a machine learning method which processes class label data in an iterative fashion. It has gained much attention in a broad range of application areas. In this paper, an active learning method for multiclass compound classification is proposed. This method selects informative training compounds so as to optimally support the learning progress. The combination with human feedback leads to a semiautomated interactive multiclass classification procedure. This method was investigated empirically on 15 compound classification tasks containing 86-2870 compounds in 3-38 classes. The empirical results show that active learning can solve these classification tasks using 10-80% of the data which would be necessary for standard learning techniques.
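A minimal sketch of an uncertainty-sampling active learning loop on synthetic stand-in data (the paper does not specify this exact selection criterion; the dataset, model, batch size, and round count here are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for compound fingerprints with 5 structural classes.
X, y = make_classification(n_samples=2000, n_features=64, n_classes=5,
                           n_informative=20, random_state=0)
labeled = list(range(25))                    # small initial training set
pool = [i for i in range(len(y)) if i not in labeled]

model = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Uncertainty sampling: query the compounds with the least confident prediction.
    uncertainty = 1.0 - proba.max(axis=1)
    query = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += query                         # "oracle" labels: expert or experiment
    pool = [i for i in pool if i not in query]

model.fit(X[labeled], y[labeled])
print("training set size:", len(labeled),
      "pool accuracy:", model.score(X[pool], y[pool]))
```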
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
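As a sketch of the forward-backward idea underlying these algorithmic structures, here is a plain single-node ISTA-style iteration for an ℓ1 prior (not the parallel primal-dual solvers of the paper; the measurement operator and problem sizes are toy assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: the "backward" step.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(y, Phi, lam, n_iter=200):
    """Minimize 0.5*||y - Phi x||^2 + lam*||x||_1 for a matrix operator Phi."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)              # forward (gradient) step on the data term
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step on the prior
    return x

rng = np.random.default_rng(1)
x_true = np.zeros(200); x_true[rng.choice(200, 10)] = rng.normal(0, 1, 10)
Phi = rng.normal(0, 1, (80, 200)) / np.sqrt(80)   # toy compressive measurement operator
y = Phi @ x_true + 0.01 * rng.normal(size=80)
x_hat = forward_backward(y, Phi, lam=0.01)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```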
Ji, Julie L; Holmes, Emily A; Blackwell, Simon E
2017-01-01
Optimism is associated with positive outcomes across many health domains, from cardiovascular disease to depression. However, we know little about cognitive processes underlying optimism in psychopathology. The present study tested whether the ability to vividly imagine positive events in one's future was associated with dispositional optimism in a sample of depressed adults. Cross-sectional and longitudinal analyses were conducted, using baseline (all participants, N=150) and follow-up data (participants in the control condition only, N=63) from a clinical trial (Blackwell et al., 2015). Vividness of positive prospective imagery, assessed on a laboratory-administered task at baseline, was significantly associated with both current optimism levels at baseline and future (seven months later) optimism levels, including when controlling for potential confounds. Even when depressed, those individuals able to envision a brighter future were more optimistic, and regained optimism more quickly over time, than those less able to do so at baseline. Strategies to increase the vividness of positive prospective imagery may aid development of mental health interventions to boost optimism. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Enhanced intelligence through optimized TCPED concepts for airborne ISR
NASA Astrophysics Data System (ADS)
Spitzer, M.; Kappes, E.; Böker, D.
2012-06-01
Current multinational operations show an increased demand for high-quality actionable intelligence for different operational levels and users. In order to achieve sufficient availability, quality, and reliability of information, various ISR assets are orchestrated within operational theatres. Airborne Intelligence, Surveillance and Reconnaissance (ISR) assets in particular provide significant intelligence coverage of areas of interest, owing to their endurance, non-intrusiveness, robustness, wide spectrum of sensors, and flexibility to mission changes. Efficient and balanced utilization of airborne ISR assets calls for advanced concepts spanning the entire ISR process framework, including Tasking, Collection, Processing, Exploitation and Dissemination (TCPED). Beyond this, employing current visualization concepts, shared information bases, and information customer profiles, together with an adequate combination of ISR sensors of different information age and dynamic (online) re-tasking process elements, enables optimization of the interlinked TCPED processes towards higher process robustness, shorter process duration, more flexibility between ISR missions and, finally, adequate "entry points" for information requirements from operational users and commands. In addition, relevant trade-offs of distributed and dynamic TCPED processes are examined and future trends are depicted.
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts composed of a limited number of phosphenes, elicited by an electrode array, with uncontrollable color and restricted grayscale. With this level of visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks such as face identification and object recognition remain extremely difficult. It is therefore necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest using simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework to automatically extract foreground objects. Building on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were utilized to enhance the extracted foreground objects. (i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. (ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and these image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
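A rough sketch of such a pipeline using OpenCV's grabCut with a rectangle initialization followed by coarse pixelization (the saliency-driven initialization, the exact enhancement strategies, the rectangle, and the file names here are our own simplifications, not the authors' implementation):

```python
import cv2
import numpy as np

def extract_and_pixelize(img, rect, grid=(32, 32)):
    """Sketch: grabCut foreground extraction, then a low-resolution
    'phosphene' rendering that suppresses background pixels."""
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) * fg     # keep foreground only
    # Direct pixelization of the foreground onto a coarse phosphene grid.
    small = cv2.resize(gray, grid, interpolation=cv2.INTER_AREA)
    return cv2.resize(small, img.shape[1::-1], interpolation=cv2.INTER_NEAREST)

img = cv2.imread("object.jpg")                             # hypothetical input image
out = extract_and_pixelize(img, rect=(50, 50, 200, 200))   # rect assumed inside the image
cv2.imwrite("prosthetic_view.png", out)
```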
Application of the Deming management method to equipment-inspection processes.
Campbell, C A
1996-01-01
The Biomedical Engineering staff at the Washington Hospital Center has designed an inspection process that optimizes timely completion of scheduled equipment inspections. The method used to revise the process was primarily Deming's, though it also incorporates the re-engineering concept of questioning the basic assumptions around which the original process was designed. The effort involved a review of the existing process in its entirety by task groups made up of representatives from all involved departments. Complete success in all areas has remained elusive. However, the lower variability of inspection completion ratios follows Deming's description of a successfully revised process. Further CQI efforts targeted at specific areas with low completion ratios will decrease this variability even further.
Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong
2015-01-01
Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem to minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value of d′_Rx for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination to achieve the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700) with a sampling interval of 50 mA at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d′_Rx = 16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. Both an in vivo animal and human subject study were used as demonstrations of how the developed framework can be applied to the clinical work flow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the use of d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR map were significantly different in MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems such as those using the MBIR method. As expected, the developed framework can be reduced to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
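Once Dose(kV, mAs) and d′(kV, mAs) have been measured on a grid, the constrained search itself is straightforward; a sketch with placeholder tables (the dose and d′ values below are synthetic, not the paper's measurements):

```python
import numpy as np

# Hypothetical measured tables: dose[i, j] and detectability d_prime[i, j]
# for tube potentials kv_levels[i] and tube-current levels ma_levels[j].
kv_levels = np.array([80, 100, 120, 140])
ma_levels = np.arange(50, 750, 50)
dose = np.random.default_rng(2).uniform(1, 70, (4, len(ma_levels)))   # placeholder data
d_prime = 2.5 * np.sqrt(dose) * (kv_levels[:, None] / 100.0)          # placeholder model

def optimal_technique(dose, d_prime, d_required=16.0):
    """Minimize dose subject to d' >= d_required over the measured (kV, mA) grid."""
    feasible = d_prime >= d_required
    if not feasible.any():
        raise ValueError("no (kV, mA) combination reaches the required detectability")
    masked = np.where(feasible, dose, np.inf)
    i, j = np.unravel_index(np.argmin(masked), masked.shape)
    return kv_levels[i], ma_levels[j], dose[i, j]

kv, ma, d = optimal_technique(dose, d_prime)
print(f"optimal technique: {kv} kV, {ma} mA at {d:.1f} mGy")
```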
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
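A minimal simulation sketch of the threshold/reward-rate tradeoff described above (drift, noise, and interval values are illustrative; the paper's analyses rest on the DDM's closed-form expressions rather than simulation):

```python
import numpy as np

def ddm_trial(drift, threshold, rng, dt=0.001, sigma=1.0):
    """Simulate one drift-diffusion trial; returns (correct, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

def reward_rate(threshold, drift=1.0, rsi=1.0, n=500, seed=0):
    """Proportion correct per unit time, with response-stimulus interval rsi."""
    rng = np.random.default_rng(seed)
    trials = [ddm_trial(drift, threshold, rng) for _ in range(n)]
    p_correct = np.mean([c for c, _ in trials])
    mean_dt = np.mean([t for _, t in trials])
    return p_correct / (mean_dt + rsi)

# Scanning the threshold shows reward rate peaking at an intermediate value:
# the optimal speed-accuracy tradeoff for these task parameters.
for z in [0.25, 0.5, 1.0, 1.5, 2.0]:
    print(f"threshold {z:.2f}: reward rate {reward_rate(z):.3f}")
```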
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI covering the recent Cerro Grande fire at Los Alamos, NM, USA.
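A runtime model of the kind described might take the familiar Amdahl-like form (our notation; the paper's fitted model may differ):

```latex
% Serial fraction, parallel fraction scaling with processor count p, and an
% optional communication overhead term c*p; speed-up follows as S(p).
\[
  T(p) \;=\; T_{\mathrm{serial}} + \frac{T_{\mathrm{parallel}}}{p} + c\,p,
  \qquad S(p) \;=\; \frac{T(1)}{T(p)}
\]
```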
Evaluation of an Integrated Multi-Task Machine Learning System with Humans in the Loop
2007-01-01
machine learning components natural language processing, and optimization...was examined with a test explicitly developed to measure the impact of integrated machine learning when used by a human user in a real world setting...study revealed that integrated machine learning does produce a positive impact on overall performance. This paper also discusses how specific machine learning components contributed to human-system
CFD Analysis of a Penta-hulled, Air-Entrapment, High-Speed Planing Vessel
2008-03-01
INTRODUCTION A. BACKGROUND The 2007 Total Ship Systems Engineering (TSSE) class was tasked with designing a new riverine craft or specialized...the concept of operations, for our defined system architecture (combined Specialized Command and Control Craft / Mobile Operating Base). This also...of an integration process that requires both systems and equipment optimization while meeting predetermined requirements set for by the Concept of
Embodied Interactions in Human-Machine Decision Making for Situation Awareness Enhancement Systems
2016-06-09
characterize differences in spatial navigation strategies in a complex task, the Traveling Salesman Problem (TSP). For the second year, we developed...visual processing, leading to better solutions for spatial optimization problems. I will develop a framework to determine which body expressions best...methods include systematic characterization of gestures during complex problem solving.
NASA Technical Reports Server (NTRS)
Hess, R. A.
1976-01-01
Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.
Noise properties and task-based evaluation of diffraction-enhanced imaging
Brankov, Jovan G.; Saiz-Herranz, Alejandro; Wernick, Miles N.
2014-01-01
Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. DEI has mainly been studied using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition lies in the relatively lower x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. Several issues that must be understood in order to design and optimize DEI imaging systems with respect to noise performance are addressed. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects. PMID:26158056
Culture Moderates Biases in Search Decisions.
Pattaratanakun, Jake A; Mak, Vincent
2015-08-01
Prior studies suggest that people often search insufficiently in sequential-search tasks compared with the predictions of benchmark optimal strategies that maximize expected payoff. However, those studies were mostly conducted in individualist Western cultures; Easterners from collectivist cultures, with their higher susceptibility to escalation of commitment induced by sunk search costs, could exhibit a reversal of this undersearch bias by searching more than optimally, but only when search costs are high. We tested our theory in four experiments. In our pilot experiment, participants generally undersearched when search cost was low, but only Eastern participants oversearched when search cost was high. In Experiments 1 and 2, we obtained evidence for our hypothesized effects via a cultural-priming manipulation on bicultural participants in which we manipulated the language used in the program interface. We obtained further process evidence for our theory in Experiment 3, in which we made sunk costs nonsalient in the search task; as expected, cross-cultural effects were largely mitigated. © The Author(s) 2015.
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real-time changes in the number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint failure adaptation, etc.).
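The abstract does not give the algorithm itself, but a standard velocity-level formulation of such a Task-Space-to-Joint-Space transformation for under-specified (redundant) systems is damped least squares with nullspace projection; a sketch with a toy Jacobian (all numbers illustrative):

```python
import numpy as np

def resolve_redundancy(J, dx, grad_cost, damping=0.05):
    """One velocity-level step for a redundant system: track the task-space
    increment dx while descending a secondary cost in the nullspace of the
    task Jacobian J (damped least squares)."""
    m, n = J.shape                                   # m task variables < n joints
    JJt = J @ J.T + (damping ** 2) * np.eye(m)
    J_pinv = J.T @ np.linalg.solve(JJt, np.eye(m))   # damped pseudoinverse J^T (JJ^T + l^2 I)^-1
    N = np.eye(n) - J_pinv @ J                       # nullspace projector
    return J_pinv @ dx - N @ grad_cost               # task step + constraint-compatible descent

# Toy example: 3 joints, 2 task coordinates, joint-limit-avoidance cost gradient.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])
dq = resolve_redundancy(J, dx=np.array([0.01, 0.0]),
                        grad_cost=np.array([0.0, 0.1, -0.05]))
print("joint increment:", dq)
```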
Exploring the quantum speed limit with computer games
NASA Astrophysics Data System (ADS)
Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.
2016-04-01
Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.
A Tool for Conditions Tag Management in ATLAS
NASA Astrophysics Data System (ADS)
Sharmazanashvili, A.; Batiashvili, G.; Gvaberidze, G.; Shekriladze, L.; Formica, A.; Atlas Collaboration
2014-06-01
ATLAS Conditions data include about 2 TB in a relational database and 400 GB of files referenced from the database. Conditions data are entered and retrieved using COOL, the API for accessing data in the LCG Conditions Database infrastructure, and are managed using an ATLAS-customized Python-based tool set. Conditions data are required for every reconstruction and simulation job, so access to them is crucial for all aspects of ATLAS data taking and analysis, as well as for preceding tasks that derive optimal corrections to reconstruction. Optimized sets of conditions for processing are accomplished using strict version control on those conditions: a process which assigns COOL Tags to sets of conditions, and then unifies those conditions over data-taking intervals into a COOL Global Tag. This Global Tag identifies the set of conditions used to process data so that the underlying conditions can be uniquely identified with 100% reproducibility should the processing be executed again. Understanding shifts in the underlying conditions from one tag to another and ensuring interval completeness for all detectors for a set of runs to be processed is a complex task, requiring tools beyond the above-mentioned Python utilities. Therefore, a JavaScript/PHP-based utility called the Conditions Tag Browser (CTB) has been developed. CTB gives detector and conditions experts the ability to navigate through the different databases and COOL folders; explore the content of given tags and the differences between them, as well as their extent in time; and visualize the content of channels associated with leaf tags. This report describes the structure and the PHP/JavaScript classes and functions of the CTB.
Dose-Related Effects of Alcohol on Cognitive Functioning
Dry, Matthew J.; Burns, Nicholas R.; Nettelbeck, Ted; Farquharson, Aaron L.; White, Jason M.
2012-01-01
We assessed the suitability of six applied tests of cognitive functioning to provide a single marker for dose-related alcohol intoxication. Numerous studies have demonstrated that alcohol has a deleterious effect on specific areas of cognitive processing, but few have compared the effects of alcohol across a wide range of different cognitive processes. Adult participants (N = 56; 32 males, 24 females; aged 18–45 years) were randomized to control or alcohol treatments within a mixed-design experiment involving multiple dosages at approximately one-hour intervals (attained mean blood alcohol concentrations (BACs) of 0.00, 0.048, 0.082 and 0.10%), employing a battery of six psychometric tests: the Useful Field of View test (UFOV; processing speed together with directed attention); the Self-Ordered Pointing Task (SOPT; working memory); Inspection Time (IT; speed of processing independent from motor responding); the Traveling Salesperson Problem (TSP; strategic optimization); the Sustained Attention to Response Task (SART; vigilance, response inhibition and psychomotor function); and the Trail-Making Test (TMT; cognitive flexibility and psychomotor function). Results demonstrated that impairment is not uniform across different domains of cognitive processing and that both the size of the alcohol effect and the magnitude of effect change across different dose levels are quantitatively different for different cognitive processes. Only IT met the criteria for a marker for widespread application: reliable dose-related decline in a basic process as a function of rising BAC level, and easy-to-use, non-invasive task properties. PMID:23209840
Optimal deployment of resources for maximizing impact in spreading processes
Lokhov, Andrey Y.; Saad, David
2017-09-12
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. Here, we show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.
Arctic cognition: a study of cognitive performance in summer and winter at 69 degrees N
NASA Technical Reports Server (NTRS)
Brennen, T.; Martinussen, M.; Hansen, B. O.; Hjemdal, O.
1999-01-01
Evidence has accumulated over the past 15 years that affect in humans is cyclical. In winter there is a tendency to depression, with remission in summer, and this effect is stronger at higher latitudes. In order to determine whether human cognition is similarly rhythmical, this study investigated the cognitive processes of 100 participants living at 69 degrees N. Participants were tested in summer and winter on a range of cognitive tasks, including verbal memory, attention and simple reaction time tasks. The seasonally counterbalanced design and the very northerly latitude of this study provide optimal conditions for detecting impaired cognitive performance in winter, and the conclusion is negative: of five tasks with seasonal effects, four had disadvantages in summer. Like the menstrual cycle, the circannual cycle appears to influence mood but not cognition.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity-integrating tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, one that is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.
NASA Technical Reports Server (NTRS)
Shields, W. E.; Smith, J. D.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)
1997-01-01
The authors asked whether animals, like humans, use an uncertain response adaptively to escape indeterminate stimulus relations. Humans and monkeys were placed in a same-different task, known to be challenging for animals. Its difficulty was increased further by reducing the size of the stimulus differences, thereby making many same and different trials difficult to tell apart. Monkeys do escape selectively from these threshold trials, even while coping with 7 absolute stimulus levels concurrently. Monkeys even adjust their response strategies on short time scales according to the local task conditions. Signal-detection and optimality analyses confirm the similarity of humans' and animals' performances. Whereas associative interpretations account poorly for these results, an intuitive uncertainty construct does so easily. The authors discuss the cognitive processes that allow uncertainty's adaptive use and recommend further comparative studies of metacognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d′, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d′ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d′ with an 8–48% improvement, consistent with the maxi-min objective. In addition, d′ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
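A toy version of the maxi-min formulation (with SciPy's differential evolution standing in for the CMA-ES optimizer used in the work, and placeholder fluence, dose, and d′ models):

```python
import numpy as np
from scipy.optimize import differential_evolution

n_bases, n_locations = 8, 5

def fluence_profile(coeffs, u):
    """Fluence across detector coordinate u as a sum of fixed Gaussian bases."""
    centers = np.linspace(0, 1, n_bases)
    return np.sum([c * np.exp(-((u - m) ** 2) / 0.02)
                   for c, m in zip(coeffs, centers)], axis=0)

def neg_min_detectability(coeffs):
    """Maxi-min objective: maximize the worst-case d' over locations, under a
    fixed total-fluence normalization (a stand-in for the total-dose constraint)."""
    u = np.linspace(0, 1, 200)
    f = np.clip(fluence_profile(coeffs, u), 1e-6, None)
    f = f / f.mean()                                   # total-dose constraint (placeholder)
    locs = np.linspace(0.1, 0.9, n_locations)
    d_prime = [np.sqrt(f[np.argmin(np.abs(u - L))]) for L in locs]  # placeholder d' model
    return -min(d_prime)                               # negate: optimizer minimizes

res = differential_evolution(neg_min_detectability,
                             bounds=[(0.0, 2.0)] * n_bases, seed=0, maxiter=50)
print("worst-case d' at optimum:", -res.fun)
```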
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.
Foo, Brian; van der Schaar, Mihaela
2010-11-01
In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
Optimization of a hardware implementation for pulse coupled neural networks for image applications
NASA Astrophysics Data System (ADS)
Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal
2010-04-01
Pulse-coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, a PCNN changes a given image input into a temporal representation that can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs to which it will be subjected are clearly discriminated, allowing easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed, and a similar circuital model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and the time constants for feed-in, threshold, and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy, and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
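For reference, a minimal discrete PCNN of the usual feeding/linking/dynamic-threshold form, whose per-step firing counts give the temporal signature mentioned above (parameter values are illustrative, not the optima derived in the paper):

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, n_steps=20, beta=0.2, vf=0.1, vl=0.2, vt=20.0,
         af=0.3, al=0.3, at=0.1):
    """Minimal discrete PCNN: returns the temporal firing signature
    (number of firing neurons per step) for an input image."""
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(stimulus); L = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus); Y = np.zeros_like(stimulus)
    signature = []
    for _ in range(n_steps):
        W = convolve(Y, kernel, mode="constant")   # pulses from neighbors
        F = np.exp(-af) * F + vf * W + stimulus    # feeding input
        L = np.exp(-al) * L + vl * W               # linking input
        U = F * (1.0 + beta * L)                   # internal activity
        Y = (U > theta).astype(float)              # pulse output
        theta = np.exp(-at) * theta + vt * Y       # dynamic threshold
        signature.append(Y.sum())
    return np.array(signature)

img = np.random.default_rng(3).uniform(0, 1, (64, 64))
print(pcnn(img))
```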
Piéron’s Law and Optimal Behavior in Perceptual Decision-Making
van Maanen, Leendert; Grasman, Raoul P. P. P.; Forstmann, Birte U.; Wagenmakers, Eric-Jan
2012-01-01
Piéron’s Law is a psychophysical regularity in signal detection tasks that states that mean response times decrease as a power function of stimulus intensity. In this article, we extend Piéron’s Law to perceptual two-choice decision-making tasks, and demonstrate that the law holds as the discriminability between two competing choices is manipulated, even though the stimulus intensity remains constant. This result is consistent with predictions from a Bayesian ideal observer model. The model assumes that in order to respond optimally in a two-choice decision-making task, participants continually update the posterior probability of each response alternative, until the probability of one alternative crosses a criterion value. In addition to predictions for two-choice decision-making tasks, we extend the ideal observer model to predict Piéron’s Law in signal detection tasks. We conclude that Piéron’s Law is a general phenomenon that may be caused by optimality constraints. PMID:22232572
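In its standard form (our notation), Piéron's Law reads:

```latex
% Mean response time falls as a power function of stimulus intensity I, with
% asymptotic residual time RT_0, scale k, and exponent beta > 0; the article
% extends I to the discriminability of the two choice alternatives.
\[
  \overline{RT}(I) \;=\; RT_{0} + k\,I^{-\beta}
\]
```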
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
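A compact sketch of such a Bayesian-optimization loop over the two pre-processing parameters, using a Gaussian-process surrogate with a lower-confidence-bound acquisition (the evaluation function below is a synthetic placeholder for the actual resample-smooth-train-validate pipeline, and all bounds are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_pipeline(params):
    """Placeholder for the expensive step: resample/smooth the scans with
    (voxel_size, fwhm), train the age model, return cross-validated error."""
    voxel, fwhm = params
    return (voxel - 3.7) ** 2 + 0.5 * (fwhm - 3.7) ** 2 + np.random.normal(0, 0.05)

bounds = np.array([[1.0, 12.0], [0.0, 12.0]])        # voxel size (mm), kernel FWHM (mm)
rng = np.random.default_rng(4)
X = rng.uniform(bounds[:, 0], bounds[:, 1], (5, 2))  # initial random evaluations
y = np.array([evaluate_pipeline(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], (500, 2))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 1.5 * sd)]          # lower-confidence-bound acquisition
    X = np.vstack([X, x_next]); y = np.append(y, evaluate_pipeline(x_next))

print("best parameters found:", X[np.argmin(y)], "error:", y.min())
```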
Martin, A K; Mowry, B; Reutens, D; Robinson, G A
2015-10-01
Patients with schizophrenia often display deficits on tasks thought to measure "executive" processes. Recently, it has been suggested that reductions in fluid intelligence test performance entirely explain deficits reported for patients with focal frontal lesions on classical executive tasks. For patients with schizophrenia, it is unclear whether deficits on executive tasks are entirely accounted for by fluid intelligence and representative of a common general process, or best accounted for by distinct contributions to the cognitive profile of schizophrenia. In the current study, 50 patients with schizophrenia and 50 age-, sex- and premorbid-intelligence-matched controls were assessed using a broad neuropsychological battery, including tasks considered sensitive to executive abilities, namely the Hayling Sentence Completion Test (HSCT), word fluency, Stroop test, digit-span backwards, and spatial working memory. Fluid intelligence was measured using both the Matrix Reasoning subtest from the Wechsler Abbreviated Scale of Intelligence (WASI) and a composite score derived from a number of cognitive tests. Patients with schizophrenia were impaired on all cognitive measures compared with controls, except smell identification and the optimal betting and risk-taking measures from the Cambridge Gambling Task. After introducing fluid intelligence as a covariate, significant differences remained for HSCT suppression errors and classical executive function tests such as the Stroop test and semantic/phonemic word fluency, regardless of which fluid intelligence measure was included. Fluid intelligence does not entirely explain impaired performance on all tests considered as reflecting "executive" processes. For schizophrenia, these measures should remain part of a comprehensive neuropsychological assessment alongside a measure of fluid intelligence. Copyright © 2015 Elsevier Inc. All rights reserved.
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although to a limited extent. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem is then formulated, and an efficient genetic algorithm with a refined crossover operator (GA-RX) is proposed to accomplish the architecting process iteratively. The feasibility and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328
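The paper's refined crossover operator (GA-RX) is not specified in this abstract, so the sketch below shows only a generic genetic algorithm with one-point crossover on a task-to-UAV assignment encoding; the load-balancing fitness function and population sizes are assumptions for illustration.

    import random

    N_TASKS, N_UAVS = 12, 4

    def fitness(assign):
        # Placeholder objective: balance task load across UAVs (higher is better).
        loads = [assign.count(u) for u in range(N_UAVS)]
        return -max(loads)

    def crossover(a, b):
        cut = random.randrange(1, N_TASKS)       # one-point crossover
        return a[:cut] + b[cut:]

    def mutate(ind, rate=0.1):
        return [random.randrange(N_UAVS) if random.random() < rate else g
                for g in ind]

    pop = [[random.randrange(N_UAVS) for _ in range(N_TASKS)] for _ in range(40)]
    for _ in range(100):                          # generations
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                          # keep the best architectures
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(30)]
    print("best assignment:", max(pop, key=fitness))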
Numerical algorithm for optimization of positive electrode in lead-acid batteries
NASA Astrophysics Data System (ADS)
Murariu, Ancuta Teodora; Buimaga-Iarinca, Luiza; Morari, Cristian
2017-12-01
The positive electrode in lead-acid batteries is one of the most sensitive parts of the whole battery, since it is affected by various aggressive chemical processes during its life. Therefore, an optimal design of the positive electrode may dramatically improve the properties of the battery, such as total capacity or endurance during its life. Our efforts dedicated to this goal cover a range of rather complex tasks, from design based on numerical analysis to statistical analysis. We present the structure of the software implementation and the results obtained for three types of positive electrodes.
Sanchez-Lopez, Javier; Fernandez, Thalia; Silva-Pereyra, Juan; Martinez Mesa, Juan A.; Di Russo, Francesco
2014-01-01
Cognitive and motor processes are essential for optimal athletic performance. Individuals trained in different skills and sports may have specialized cognitive abilities and motor strategies related to the characteristics of the activity and the effects of training and expertise. Most studies have investigated differences in motor-related cortical potential (MRCP) during self-paced tasks in athletes but not in stimulus-related tasks. The aim of the present study was to identify the differences in performance and MRCP between skilled and novice martial arts athletes during two different types of tasks: a sustained attention task and a transient attention task. Behavioral and electrophysiological data from twenty-two martial arts athletes were obtained while they performed a continuous performance task (CPT) to measure sustained attention and a cued continuous performance task (c-CPT) to measure transient attention. MRCP components were analyzed and compared between groups. Electrophysiological data in the CPT task indicated larger prefrontal positive activity and greater posterior negativity distribution prior to a motor response in the skilled athletes, while novices showed a significantly larger response-related P3 after a motor response in centro-parietal areas. A different effect occurred in the c-CPT task in which the novice athletes showed strong prefrontal positive activity before a motor response and a large response-related P3, while in skilled athletes, the prefrontal activity was absent. We propose that during the CPT, skilled athletes were able to allocate two different but related processes simultaneously according to CPT demand, which requires controlled attention and controlled motor responses. On the other hand, in the c-CPT, skilled athletes showed better cue facilitation, which permitted a major economy of resources and “automatic” or less controlled responses to relevant stimuli. In conclusion, the present data suggest that motor expertise enhances neural flexibility and allows better adaptation of cognitive control to the requested task. PMID:24621480
Gas leak detection in infrared video with background modeling
NASA Astrophysics Data System (ADS)
Zeng, Xiaoxia; Huang, Likun
2018-03-01
Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm is a widely used background modeling algorithm in recent years. However, the processing speed of the VIBE algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
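For readers unfamiliar with ViBe-style background modeling, here is a compact sketch of the general idea (per-pixel sample sets plus connected-domain filtering). The sample count and matching radius follow commonly cited ViBe defaults; the paper's specific speed-ups and its nine-spaces step are not reproduced.

    import numpy as np
    from scipy import ndimage

    N_SAMPLES, RADIUS, MIN_MATCHES = 20, 20, 2

    def init_model(frame):
        # fill each pixel's sample set with copies of the first frame
        return np.repeat(frame[None].astype(np.int16), N_SAMPLES, axis=0)

    def segment(model, frame, rng, min_area=15):
        diff = np.abs(model - frame.astype(np.int16))
        matches = (diff < RADIUS).sum(axis=0)
        fg = matches < MIN_MATCHES                 # foreground (gas plume) mask
        # conservative update: overwrite one random sample at background pixels
        idx = rng.integers(0, N_SAMPLES)
        model[idx][~fg] = frame[~fg]
        # connected-domain filtering: drop blobs smaller than min_area pixels
        labels, n = ndimage.label(fg)
        sizes = ndimage.sum(fg, labels, range(1, n + 1))
        return np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))

    # usage (frames assumed 8-bit grayscale infrared images):
    #   rng = np.random.default_rng()
    #   model = init_model(first_frame); mask = segment(model, next_frame, rng)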
Low Temperature Performance of High-Speed Neural Network Circuits
NASA Technical Reports Server (NTRS)
Duong, T.; Tran, M.; Daud, T.; Thakoor, A.
1995-01-01
Artificial neural networks, derived from their biological counterparts, offer a new and enabling computing paradigm especially suitable for such tasks as image and signal processing with feature classification/object recognition, global optimization, and adaptive control. When implemented in fully parallel electronic hardware, it offers an orders-of-magnitude speed advantage. The basic building blocks of the new architecture are processing elements called neurons, implemented as nonlinear operational amplifiers with sigmoidal transfer functions, interconnected through weighted connections called synapses, implemented using circuitry for weight storage and multiplication in an analog, digital, or hybrid scheme.
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that, by applying the AHP method, a group of weight ratios can be obtained that improves the performance of multiple objectives, and that the ANN method is effective for adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
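The decision factors and judgments used in the paper are not given in the abstract; the sketch below shows only the standard AHP step of deriving weight ratios as the principal eigenvector of a pairwise-comparison matrix, with an illustrative three-factor example (bandwidth, delay, cost).

    import numpy as np

    # illustrative pairwise comparisons: bandwidth vs delay vs cost
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalized weight ratios

    # consistency index: lambda_max close to n indicates consistent judgments
    ci = (vals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", np.round(w, 3), "CI:", round(ci, 3))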
Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan
2016-01-01
Fixture design plays an important part in constraining excessive deformation of sheet metal parts at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing the sheet metal fixture locating layout remains a difficult and nontrivial task, because there is generally no direct and explicit expression relating the fixture locating layout to the resulting deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499
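A minimal sketch of an RBF prediction model of this general kind, assuming Gaussian basis functions centred on a subset of uniformly sampled layouts; the four-locator encoding and the synthetic stand-in for the finite element response are illustrative, not the paper's data.

    import numpy as np

    def rbf_design(X, centers, sigma):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))       # Gaussian basis functions

    # synthetic stand-in: 4 locator coordinates -> peak deformation
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(80, 4))             # uniform sampling of layouts
    y = np.sin(X.sum(1)) + 0.01 * rng.standard_normal(80)  # mock FE response

    centers, sigma = X[::4], 0.5                    # every 4th sample as a center
    Phi = rbf_design(X, centers, sigma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights

    def predict(Xnew):
        return rbf_design(np.atleast_2d(Xnew), centers, sigma) @ w

    print("predicted deformation:", predict([0.2, 0.5, 0.7, 0.3]))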
The integrated manual and automatic control of complex flight systems
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1991-01-01
Research dealt with the general area of optimal flight control synthesis for manned flight vehicles. The work was generic; no specific vehicle was the focus of study. However, the class of vehicles generally considered were those for which high-authority, multivariable control systems might be considered, for the purpose of stabilization and the achievement of optimal handling characteristics. Within this scope, the topics of study included several optimal control synthesis techniques, control-theoretic modeling of the human operator in flight control tasks, and the development of possible handling qualities metrics and/or measures of merit. Basic contributions were made in all these topics, including human operator (pilot) models for multi-loop tasks, optimal output feedback flight control synthesis techniques, experimental validations of the methods developed, and fundamental modeling studies of the air-to-air tracking and flared landing tasks.
Gray, Rob
2013-08-01
Performance of a skill that involves acting on a goal object (e.g., a ball to be hit) can influence one's judgment of the size and speed of that object. The present study examined how these action-specific effects are affected when the goal of the actor is varied and they are free to choose between alternative actions. In Experiment 1, expert baseball players were asked to perform three different directional hitting tasks in a batting simulation and make interleaved perceptual judgments about three ball parameters (speed, plate crossing location, and size). Perceived ball size was largest (and perceived speed was slowest) when the ball crossing location was optimal for the particular hitting task the batter was performing (e.g., an "outside" pitch for opposite-field hitting). The magnitude of processing dependency between variables (speed vs. location and size vs. location) was positively correlated with batting performance. In Experiment 2, the action-specific effects observed in Experiment 1 were mimicked by systematically changing the ball diameter in the simulation as a function of plate crossing location. The number of swing initiations was greater when ball size was larger, and batters were more successful in the hitting task for which the larger pitches were optimal (e.g., greater number of pull hits than opposite-field hits when "inside" pitches were larger). These findings suggest attentional accentuation of goal-relevant targets underlies action-related changes in perception and are consistent with an action selection role for these effects. 2013 APA, all rights reserved
Preliminary Work for Examining the Scalability of Reinforcement Learning
NASA Technical Reports Server (NTRS)
Clouse, Jeff
1998-01-01
Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsson, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state-spaces. There has been a little theoretical work that proves that convergence to optimal solutions can be obtained when using generalization structures, but the structures are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally, we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
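The table-lookup representation discussed above can be made concrete with a few lines of Q-learning on a toy multiple-step task; the chain task, rewards, and learning parameters below are illustrative, not drawn from the OTHELLO study.

    import random

    N_STATES, GOAL = 8, 7           # chain task: reach state 7 from state 0
    ACTIONS = (-1, +1)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    alpha, gamma, eps = 0.1, 0.95, 0.1
    for _ in range(2000):                      # trial-and-error episodes
        s = 0
        while s != GOAL:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else -0.01   # fewer steps -> higher return
            target = r + (0.0 if s2 == GOAL else
                          gamma * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2

    # the learned policy: for this tabular case it provably converges to optimal
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})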
Dispositional optimism, self-framing and medical decision-making.
Zhao, Xu; Huang, Chunlei; Li, Xuesong; Zhao, Xin; Peng, Jiaxi
2015-03-01
Self-framing is an important but underinvestigated area in risk communication and behavioural decision-making, especially in medical settings. The present study aimed to investigate the relationship among dispositional optimism, self-frame and decision-making. Participants (N = 500) responded to the Life Orientation Test-Revised and self-framing test of medical decision-making problem. The participants whose scores were higher than the middle value were regarded as highly optimistic individuals. The rest were regarded as low optimistic individuals. The results showed that compared to the high dispositional optimism group, participants from the low dispositional optimism group showed a greater tendency to use negative vocabulary to construct their self-frame, and tended to choose the radiation therapy with high treatment survival rate, but low 5-year survival rate. Based on the current findings, it can be concluded that self-framing effect still exists in medical situation and individual differences in dispositional optimism can influence the processing of information in a framed decision task, as well as risky decision-making. © 2014 International Union of Psychological Science.
Method of determining the necessary number of observations for video stream documents recognition
NASA Astrophysics Data System (ADS)
Arlazarov, Vladimir V.; Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Janiszewski, Igor
2018-04-01
This paper discusses the task of document recognition on a sequence of video frames. In order to optimize processing speed, the stability of recognition results obtained from several video frames is estimated. Considering identity document (Russian internal passport) recognition on a mobile device, it is shown that a significant decrease is possible in the number of observations necessary for obtaining a precise recognition result.
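The abstract does not spell out the stability estimate, so the following sketch illustrates one plausible stopping rule under that reading: combine per-frame recognition results by majority vote and stop once the leading result has been stable for k consecutive frames. recognize() is a hypothetical placeholder for the per-frame recognizer.

    from collections import Counter

    def recognize_stream(frames, recognize, k=3, max_frames=30):
        # Stop observing once the majority result is unchanged for k frames.
        votes, streak, leader = Counter(), 0, None
        used = 0
        for frame in frames[:max_frames]:
            used += 1
            votes[recognize(frame)] += 1
            top = votes.most_common(1)[0][0]
            streak = streak + 1 if top == leader else 1
            leader = top
            if streak >= k:
                break                         # result considered stable
        return leader, used                   # result and observations used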
Integrated structure/control design - Present methodology and future opportunities
NASA Technical Reports Server (NTRS)
Weisshaar, T. A.; Newsom, J. R.; Zeiler, T. A.; Gilbert, M. G.
1986-01-01
Attention is given to current methodology applied to the integration of the optimal design process for structures and controls. Multilevel linear decomposition techniques proved to be most effective in organizing the computational efforts necessary for ISCD (integrated structures and control design) tasks. With the development of large orbiting space structures and actively controlled, high performance aircraft, there will be more situations in which this concept can be applied.
Effect of Processing Parameters on Reliability of VARTM/SCRIMP Composite Panels - Phase 1
2007-07-01
used in this program (Hess and Beach, 2000). The structural risks associated with new FRP composite ship structures can be mitigated by...reliability calibration of new designs. This program focuses on addressing the first and second tasks outlined above. 1.2. Phase I - Objectives The...Accomplishments: Tension Testing An optimized geometry for tension coupon testing was developed for marine grade composites. The new geometry reduces
Dynamic Decision Making in Complex Task Environments: Principles and Neural Mechanisms
2013-03-01
Dynamical models of cognition. Mathematical models of mental processes. Human performance optimization. ...we have continued to develop a neurodynamic theory of decision making, using a combination of computational and experimental approaches, to address...a long history in the field of human cognitive psychology. The theoretical foundations of this research can be traced back to signal detection
Routing UAVs to Co-Optimize Mission Effectiveness and Network Performance with Dynamic Programming
2011-03-01
Heuristics on Hexagonal Connected Dominating Sets to Model Routing Dissemination," in Communication Theory, Reliability, and Quality of Service (CTRQ...24] Matthew Capt. USAF Compton, Improving the Quality of Service and Security of Military Networks with a Network Tasking Order Process, 2010. [25...Wesley, 2006. [32] James Haught, "Adaptive Quality of Service Engine with Dynamic Queue Control," Air Force Institute of Technology, Wright
About some types of constraints in problems of routing
NASA Astrophysics Data System (ADS)
Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.
2016-12-01
Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements related to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. The corresponding algorithm with its programmatic implementation is considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.
Simulating optoelectronic systems for remote sensing with SENSOR
NASA Astrophysics Data System (ADS)
Boerner, Anko
2003-04-01
The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.
A decision modeling for phasor measurement unit location selection in smart grid systems
NASA Astrophysics Data System (ADS)
Lee, Seung Yup
As a key technology for enhancing the smart grid system, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents of a wide-area electric power grid. Among the various benefits of its application, one of the critical issues in utilizing PMUs is the optimal site selection of units. The main aim of this research is to develop a decision support system that can be used in resource allocation tasks for smart grid system analysis. In an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
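The integer-programming approach suggests the standard observability formulation: minimize the number of PMUs subject to every bus being adjacent to at least one PMU (A x >= 1 over binary placement variables x). The sketch below solves a small illustrative case by exhaustive search; the seven-bus adjacency data are assumptions, not from the paper.

    from itertools import combinations

    # illustrative 7-bus adjacency (a PMU observes its own bus and neighbours)
    adj = {1: {1, 2, 3}, 2: {1, 2, 4, 5}, 3: {1, 3, 6}, 4: {2, 4, 7},
           5: {2, 5}, 6: {3, 6, 7}, 7: {4, 6, 7}}
    buses = set(adj)

    def observable(placement):
        covered = set().union(*(adj[b] for b in placement))
        return covered == buses              # the A x >= 1 constraint

    for k in range(1, len(buses) + 1):       # smallest feasible placement first
        sols = [p for p in combinations(sorted(buses), k) if observable(p)]
        if sols:
            print(f"minimum PMUs: {k}, e.g. at buses {sols[0]}")
            break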
Tack, Lois C; Thomas, Michelle; Reich, Karl
2007-03-01
Forensic labs globally face the same problem: a growing need to process a greater number and wider variety of samples for DNA analysis. The same forensic lab can be tasked all at once with processing mixed casework samples from crime scenes, convicted offender samples for database entry, and tissue from tsunami victims for identification. Besides flexibility in the robotic system chosen for forensic automation, there is a need, for each sample type, to develop new methodology that is not only faster but also more reliable than past procedures. FTA is a chemical treatment of paper, unique to Whatman Bioscience, used for the stabilization and storage of biological samples. Here, the authors describe optimization of the Whatman FTA Purification Kit protocol for use with the AmpFlSTR Identifiler PCR Amplification Kit.
Jain, Jinender; Singh, Bijender
2017-04-01
Development of an ideal process for the reduction of food phytates using microbial phytases is a demanding task for food and feed industries all over the world. Phytase production by Bacillus subtilis subsp. subtilis JJBS250, isolated from a soil sample, was optimized in submerged fermentation using statistical tools. Among all the culture variables tested, sucrose, sodium phytate and Tween-80 were identified as the most significant variables using the Plackett-Burman design. Further optimization of these variables resulted in a 6.79-fold improvement in phytase production (7170 U/L) as compared to the unoptimized medium. Supplementation of microbial phytases (fungal and bacterial) resulted in improved bioavailability of nutritional components with the concomitant liberation of inorganic phosphorus, reducing sugar, soluble protein and amino acids, thus mitigating the anti-nutritional properties of phytic acid.
Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters
NASA Astrophysics Data System (ADS)
Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.
2016-06-01
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution time to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information obtained about a disaster situation changes dynamically, which leads to dynamic imaging requirements from users. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency response requirements by producing an imaging plan that acquires rapid and effective information about the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations based on the scenario of the Wenchuan earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
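The segmentation and heuristic details are specific to the paper; the sketch below shows only a greedy skeleton consistent with the description: assign pending observation strips in order of reward to any satellite window that can accommodate them. All task and satellite data are illustrative.

    # tasks: (strip_id, reward, start, end) after area segmentation (assumed)
    tasks = [("s1", 9, 0, 3), ("s2", 7, 1, 4), ("s3", 8, 3, 6), ("s4", 4, 5, 8)]
    # each satellite keeps a list of already-booked (start, end) intervals
    sats = {"sat1": [], "sat2": []}

    def free(windows, start, end):
        return all(end <= s or start >= e for s, e in windows)

    plan = []
    for sid, reward, start, end in sorted(tasks, key=lambda t: -t[1]):
        for sat, booked in sats.items():      # greedy: highest reward first
            if free(booked, start, end):
                booked.append((start, end))
                plan.append((sid, sat))
                break
    print(plan)   # e.g. [('s1', 'sat1'), ('s3', 'sat1'), ('s2', 'sat2'), ...]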
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
Optimal Planning and Problem-Solving
NASA Technical Reports Server (NTRS)
Clement, Bradley; Schaffer, Steven; Rabideau, Gregg
2008-01-01
CTAEMS MDP Optimal Planner is problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It may also be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons around uncertainty according to specified probabilities of outcomes, using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, to reduce the processing time and memory used by an order of magnitude. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.
Conceptual design of an aircraft automated coating removal system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.; Draper, J.V.; Pin, F.G.
1996-05-01
Paint stripping of the U.S. Air Force's large transport aircraft is currently a labor-intensive, manual process. Significant reductions in costs, personnel and turnaround time can be accomplished by the judicious use of automation in some process tasks. This paper presents the conceptual design of a coating removal system for the tail surfaces of the C-5 plane. Emphasis is placed on technology selection to optimize human-automation synergy with respect to overall costs, throughput, quality, safety, and reliability. Trade-offs between field-proven and research-requiring technologies, and between expected gain and cost and complexity, have led to a conceptual design which is semi-autonomous (relying on the human for task specification and disturbance handling) yet incorporates sensor-based automation (for sweep path generation and tracking, surface following, stripping quality control and tape/breach handling).
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics system, guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
High-Frequency Binaural Beats Increase Cognitive Flexibility: Evidence from Dual-Task Crosstalk
Hommel, Bernhard; Sellaro, Roberta; Fischer, Rico; Borg, Saskia; Colzato, Lorenza S.
2016-01-01
Increasing evidence suggests that cognitive-control processes can be configured to optimize either persistence of information processing (by amplifying competition between decision-making alternatives and top-down biasing of this competition) or flexibility (by dampening competition and biasing). We investigated whether high-frequency binaural beats, an auditory illusion suspected to act as a cognitive enhancer, have an impact on cognitive-control configuration. We hypothesized that binaural beats in the gamma range bias the cognitive-control style toward flexibility, which in turn should increase the crosstalk between tasks in a dual-task paradigm. We replicated earlier findings that the reaction time in the first-performed task is sensitive to the compatibility between the responses in the first and the second task—an indication of crosstalk. As predicted, exposing participants to binaural beats in the gamma range increased this effect as compared to a control condition in which participants were exposed to a continuous tone of 340 Hz. These findings provide converging evidence that the cognitive-control style can be systematically biased by inducing particular internal states; that high-frequency binaural beats bias the control style toward more flexibility; and that different styles are implemented by changing the strength of local competition and top-down bias. PMID:27605922
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain the optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. The participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG and ECG related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. Then the predicted CTL is used to adaptively allocate the number of process control tasks between operator and computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the adaptive automation strategy proposed.
Cortical membrane potential signature of optimal states for sensory signal detection
McGinley, Matthew J.; David, Stephen V.; McCormick, David A.
2015-01-01
The neural correlates of optimal states for signal detection task performance are largely unknown. One hypothesis holds that optimal states exhibit tonically depolarized cortical neurons with enhanced spiking activity, such as occur during movement. We recorded membrane potentials of auditory cortical neurons in mice trained on a challenging tone-in-noise detection task while assessing arousal with simultaneous pupillometry and hippocampal recordings. Arousal measures accurately predicted multiple modes of membrane potential activity, including: rhythmic slow oscillations at low arousal, stable hyperpolarization at intermediate arousal, and depolarization during phasic or tonic periods of hyper-arousal. Walking always occurred during hyper-arousal. Optimal signal detection behavior and sound-evoked responses, at both sub-threshold and spiking levels, occurred at intermediate arousal when pre-decision membrane potentials were stably hyperpolarized. These results reveal a cortical physiological signature of the classically-observed inverted-U relationship between task performance and arousal, and that optimal detection exhibits enhanced sensory-evoked responses and reduced background synaptic activity. PMID:26074005
Computer Simulation of a Multiaxis Air-to-Air Tracking Task Using the Optimal Pilot Control Model.
1982-12-01
v ABSTRACT ........ ............................. .. vi CHAPTER 1 - INTRODUCTION ....... ..................... 1 1.1 Motivation... Introduction ......... . 4 2.2 Optimal Pilot Control Model and Control Synthesis 4 2.3 Pitch Tracking Task ...... ................... 6 2.4 Multiaxis...CHAPTER 3 - SIMULATION SYSTEM ...... .................. 33 3.1 Introduction ........ ....................... 33 3.2 System Hardware
A novel task-oriented optimal design for P300-based brain-computer interfaces.
Zhou, Zongtan; Yin, Erwei; Liu, Yang; Jiang, Jun; Hu, Dewen
2014-10-01
Objective. The number of items of a P300-based brain-computer interface (BCI) should be adjustable in accordance with the requirements of the specific tasks. To address this issue, we propose a novel task-oriented optimal approach aimed at increasing the performance of general P300 BCIs with different numbers of items. Approach. First, we proposed a stimulus presentation with variable dimensions (VD) paradigm as a generalization of the conventional single-character (SC) and row-column (RC) stimulus paradigms. Furthermore, an embedding design approach was employed for any given number of items. Finally, based on the score-P model of each subject, the VD flash pattern was selected by a linear interpolation approach for a certain task. Main results. The results indicate that the optimal BCI design consistently outperforms the conventional approaches, i.e., the SC and RC paradigms. Specifically, there is significant improvement in the practical information transfer rate for a large number of items. Significance. The results suggest that the proposed optimal approach would provide useful guidance in the practical design of general P300-based BCIs.
Cassini-Huygens maneuver automation for navigation
NASA Technical Reports Server (NTRS)
Goodson, Troy; Attiyah, Amy; Buffington, Brent; Hahn, Yungsun; Pojman, Joan; Stavert, Bob; Strange, Nathan; Stumpf, Paul; Wagner, Sean; Wolff, Peter;
2006-01-01
Many times during the Cassini-Huygens mission to Saturn, propulsive maneuvers must be spaced so closely together that there isn't enough time or workforce to execute the maneuver-related software manually, one subsystem at a time. Automation is required. Automating the maneuver design process has involved close cooperation between teams. We present the contribution from the Navigation system. In scope, this includes trajectory propagation and search, generation of ephemerides, general tasks such as email notification and file transfer, and presentation materials. The software has been used to help understand maneuver optimization results, Huygens probe delivery statistics, and Saturn ring-plane crossing geometry. The Maneuver Automation Software (MAS), developed for the Cassini-Huygens program, enables frequent maneuvers by handling mundane tasks such as creation of deliverable files, file delivery, generation and transmission of email announcements, and generation of presentation material and other supporting documentation. By hand, these tasks took up hours, if not days, of work for each maneuver. Automated, these tasks may be completed in under an hour. During the cruise trajectory, the spacing of maneuvers was such that development of a maneuver design could span about a month, involving several other processes in addition to that described above. Often, about the last five days of this process covered the generation of a final design using an updated orbit-determination estimate. To support the tour trajectory, the orbit determination data cut-off of five days before the maneuver needed to be reduced to approximately one day, and the whole maneuver development process needed to be reduced to less than a week.
When more of the same is better
NASA Astrophysics Data System (ADS)
Fontanari, José F.
2016-01-01
Problem solving (e.g., drug design, traffic engineering, software development) by task forces represents a substantial portion of the economy of developed countries. Here we use an agent-based model of cooperative problem-solving systems to study the influence of diversity on the performance of a task force. We assume that agents cooperate by exchanging information on their partial success and use that information to imitate the most successful agent in the system, the model. The agents differ only in their propensities to copy the model. We find that, for easy tasks, the optimal organization is a homogeneous system composed of agents with the highest possible copy propensities. For difficult tasks, we find that diversity can prevent the system from being trapped in sub-optimal solutions. However, when the system size is adjusted to maximize performance, the homogeneous systems outperform the heterogeneous systems; i.e., for optimal performance, sameness should be preferred to diversity.
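A condensed sketch of an imitation dynamic of this general kind (not the author's exact model): each agent either copies one bit from the current best agent, with an individual propensity p, or flips a bit of its own; the bit-matching task, population size, and propensity values are illustrative.

    import random

    L = 20                                   # task: match a hidden bit string
    target = [random.randrange(2) for _ in range(L)]

    def fit(s):
        return sum(a == b for a, b in zip(s, target))

    def solve(props, max_steps=5000):
        agents = [[random.randrange(2) for _ in range(L)] for _ in props]
        for step in range(max_steps):
            model = max(agents, key=fit)     # most successful agent so far
            if fit(model) == L:
                return step                  # task solved
            for ag, p in zip(agents, props):
                i = random.randrange(L)
                # copy the model's bit with propensity p, else explore by flipping
                ag[i] = model[i] if random.random() < p else 1 - ag[i]
        return max_steps

    homogeneous = [0.9] * 10                 # all agents copy eagerly
    diverse = [0.1 + 0.8 * k / 9 for k in range(10)]
    print("homogeneous:", solve(homogeneous), "diverse:", solve(diverse))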
Matrix model of the grinding process of cement clinker in the ball mill
NASA Astrophysics Data System (ADS)
Sharapov, Rashid R.
2018-02-01
In the article, attention is paid to improving the efficiency of production of fine powders, in particular Portland cement clinker. The grinding of Portland cement clinker in closed-circuit ball mills is considered. It is noted that the main task of modeling the grinding process is predicting the granulometric composition of the finished product, taking into account the constructive and technological parameters of the ball mill and separator used. It is shown that the most complete and informative characterization of the grinding process in a ball mill is a grinding matrix that takes into account the transformation of grain composition inside the mill drum: the matrix specifies how the relative mass fraction of each size class of the crushed material passes into the corresponding product fractions. Reconstruction of the grinding matrix from experimental data obtained on real operating installations is noted as a topical task. On the basis of experimental data obtained on industrial installations, the matrix method is used to determine the kinetics of the grinding process in closed-circuit ball mills. A calculation method for the conversion of the grain composition of the crushed material along the mill drum is developed. Taking into account the proposed approach, processing methods can be optimized to improve the manufacturing process of Portland cement clinker.
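The grinding-matrix idea reduces to a linear map from the feed size distribution to the product size distribution. In the sketch below, the lower-triangular matrix G (material can only stay in its size class or break into finer ones) is illustrative rather than taken from the article.

    import numpy as np

    # mass fractions of the feed in 4 size classes, coarse -> fine
    p_in = np.array([0.40, 0.30, 0.20, 0.10])

    # grinding matrix: column j gives where class-j mass ends up (sums to 1)
    G = np.array([[0.50, 0.00, 0.00, 0.00],
                  [0.25, 0.60, 0.00, 0.00],
                  [0.15, 0.25, 0.70, 0.00],
                  [0.10, 0.15, 0.30, 1.00]])

    # composition after one mill section; chain n sections as G^n applied to p_in
    p_out = G @ p_in
    print(p_out, p_out.sum())   # mass is conserved: fractions still sum to 1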
Park, Chanhun; Nam, Hee-Geun; Lee, Ki Bong; Mun, Sungyong
2014-10-24
The economically efficient separation of formic acid from acetic acid and succinic acid has been a key issue in the production of formic acid by fermentation with Actinobacillus bacteria. To address this issue, an optimal three-zone simulated moving bed (SMB) chromatography process for continuous separation of formic acid from acetic acid and succinic acid was developed in this study. As a first step, the adsorption isotherm and mass-transfer parameters of each organic acid on the qualified adsorbent (Amberchrom-CG300C) were determined through a series of multiple frontal experiments. The determined parameters were then used in optimizing the SMB process for the considered separation. During this optimization, an additional investigation was carried out to select, between two possible configurations, the SMB port configuration more advantageous for attaining better process performance. It was found that if the properly selected port configuration was adopted in the SMB of interest, the throughput and the formic-acid product concentration could be increased by 82% and 181%, respectively. Finally, the optimized SMB process based on the properly selected port configuration was tested experimentally using a self-assembled SMB unit with three zones. The SMB experimental results and the relevant computer simulation verified that the developed process was successful in the continuous recovery of formic acid from the ternary organic-acid mixture of interest with high throughput, high purity, high yield, and high product concentration. Copyright © 2014 Elsevier B.V. All rights reserved.
Quantitative analysis of task selection for brain-computer interfaces
NASA Astrophysics Data System (ADS)
Llera, Alberto; Gómez, Vicenç; Kappen, Hilbert J.
2014-10-01
Objective. To assess quantitatively the impact of task selection in the performance of brain-computer interfaces (BCI). Approach. We consider the task-pairs derived from multi-class BCI imagery movement tasks in three different datasets. We analyze for the first time the benefits of task selection on a large-scale basis (109 users) and evaluate the possibility of transferring task-pair information across days for a given subject. Main results. Selecting the subject-dependent optimal task-pair among three different imagery movement tasks results in approximately 20% potential increase in the number of users that can be expected to control a binary BCI. The improvement is observed with respect to the best task-pair fixed across subjects. The best task-pair selected for each subject individually during a first day of recordings is generally a good task-pair in subsequent days. In general, task learning from the user side has a positive influence in the generalization of the optimal task-pair, but special attention should be given to inexperienced subjects. Significance. These results add significant evidence to existing literature that advocates task selection as a necessary step towards usable BCIs. This contribution motivates further research focused on deriving adaptive methods for task selection on larger sets of mental tasks in practical online scenarios.
Support of surgical process modeling by using adaptable software user interfaces
NASA Astrophysics Data System (ADS)
Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.
2010-03-01
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical process models are used in a variety of use cases, including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling work by using an ontological knowledge base, which drives the graphical user interface so as to restrict the search space of terminology depending on the current situation. The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces: 54 SPM observation protocols were analyzed using the NASA Task Load Index, and the adaptive user interface significantly disburdened the observer on the workload criteria of effort, mental demand and temporal demand, helping the observer concentrate on the essential task of modeling the surgical process.
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
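The outer loop described above is ultimately an LQR problem. As a point of reference only (the paper's contribution is precisely to solve it by integral reinforcement learning without a human model), here is how such an LQR gain would be computed when the error dynamics of the prescribed impedance model are known; the matrices below are illustrative, not the paper's.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # illustrative double-integrator error dynamics for the impedance model
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])     # penalize tracking error and its rate
    R = np.array([[0.1]])        # penalize human/robot effort

    P = solve_continuous_are(A, B, Q, R)     # algebraic Riccati equation
    K = np.linalg.inv(R) @ B.T @ P           # optimal feedback gain, u = -K x
    print("LQR gain:", K)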
Optimal control solutions to sodic soil reclamation
NASA Astrophysics Data System (ADS)
Mau, Yair; Porporato, Amilcare
2016-05-01
We study the reclamation process of a sodic soil by irrigation with water amended with calcium cations. In order to explore the entire range of time-dependent strategies, this task is framed as an optimal control problem, where the amendment rate is the control and the total rehabilitation time is the quantity to be minimized. We use a minimalist model of vertically averaged soil salinity and sodicity, in which the main feedback controlling the dynamics is the nonlinear coupling of soil water and exchange complex, given by the Gapon equation. We show that the optimal solution is a bang-bang control strategy, where the amendment rate is discontinuously switched along the process from a maximum value to zero. The solution enables a reduction in remediation time of about 50%, compared with the continuous use of good-quality irrigation water. Because of its general structure, the bang-bang solution is also shown to work for the reclamation of other soil conditions, such as saline-sodic soils. The novelty in our modeling approach is the capability of searching the entire "strategy space" for optimal time-dependent protocols. The optimal solutions found for the minimalist model can be then fine-tuned by experiments and numerical simulations, applicable to realistic conditions that include spatial variability and heterogeneities.
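A toy simulation of a max-then-zero (bang-bang) amendment schedule of the kind described. The single-state exchange dynamics, rate constants, and threshold below are stand-ins for the paper's coupled salinity-sodicity model, and serve only to expose the trade-off between switch-off time, total reclamation time, and amendment used.

    import numpy as np

    def reclaim(switch_time, u_max=1.0, dt=0.01, target=0.1, t_max=50.0):
        # integrate a toy exchangeable-sodium fraction E under the bang-bang
        # schedule u(t) = u_max for t < switch_time, 0 afterwards
        E, t = 0.6, 0.0                        # initial sodic condition
        while E > target and t < t_max:
            u = u_max if t < switch_time else 0.0
            E += -(0.05 + 0.4 * u) * E * (1.0 - E) * dt   # calcium-driven exchange
            t += dt
        return t

    # earlier switch-off uses less amendment but lengthens reclamation
    for s in (1.0, 2.0, 4.0, 8.0):
        t_total = reclaim(s)
        print(f"switch at {s:4.1f}: time {t_total:5.2f}, "
              f"amendment used {min(s, t_total):.2f}")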
Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne
2017-09-01
Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sethi, Arjun; Voon, Valerie; Critchley, Hugo D; Cercignani, Mara; Harrison, Neil A
2018-05-01
Computational models of reinforcement learning have helped dissect discrete components of reward-related function and characterize neurocognitive deficits in psychiatric illnesses. Stimulus novelty biases decision-making, even when unrelated to choice outcome, acting as if it possessed intrinsic reward value and guiding decisions toward uncertain options. Heightened novelty seeking is characteristic of attention deficit hyperactivity disorder, yet how this influence on reward-related decision-making is computationally encoded, or altered by stimulant medication, is currently uncertain. Here we used an established reinforcement-learning task to model effects of novelty on reward-related behaviour during functional MRI in 30 adults with attention deficit hyperactivity disorder and 30 age-, sex- and IQ-matched control subjects. Each participant was tested on two separate occasions, once ON and once OFF stimulant medication. OFF medication, patients with attention deficit hyperactivity disorder showed significantly impaired task performance (P = 0.027) and greater selection of novel options (P = 0.004). Moreover, persistence in selecting novel options predicted impaired task performance (P = 0.025). These behavioural deficits were accompanied by a significantly lower learning rate (P = 0.011) and heightened novelty signalling within the substantia nigra/ventral tegmental area (family-wise error corrected P < 0.05). Compared to effects in controls, stimulant medication improved attention deficit hyperactivity disorder participants' overall task performance (P = 0.011), increased reward-learning rates (P = 0.046) and enhanced their ability to differentiate optimal from non-optimal novel choices (P = 0.032). It also reduced substantia nigra/ventral tegmental area responses to novelty. Preliminary cross-sectional evidence additionally suggested an association between long-term stimulant treatment and a reduction in the rewarding value of novelty. These data suggest that aberrant substantia nigra/ventral tegmental area novelty processing plays an important role in the suboptimal reward-related decision-making characteristic of attention deficit hyperactivity disorder. Compared to effects in controls, abnormalities in novelty processing and reward-related learning were improved by stimulant medication, suggesting that they may be disorder-specific targets for the pharmacological management of attention deficit hyperactivity disorder symptoms.
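The kind of model described, a delta-rule reinforcement learner with an intrinsic value bonus for novel options, can be sketched as follows; the learning rate, novelty bonus, and softmax temperature are illustrative placeholders, not the study's fitted parameters.

```python
# Illustrative delta-rule learner with a novelty bonus (parameters invented).
import numpy as np

def choose_and_learn(values, novelty, rewards, alpha=0.1, bonus=0.3, beta=3.0,
                     rng=np.random.default_rng()):
    biased = values + bonus * novelty              # novel options look richer
    p = np.exp(beta * biased)
    p /= p.sum()                                   # softmax choice probabilities
    a = rng.choice(len(values), p=p)
    values[a] += alpha * (rewards[a] - values[a])  # delta-rule value update
    return a, values

vals = np.zeros(3)
a, vals = choose_and_learn(vals, novelty=np.array([0.0, 0.0, 1.0]),
                           rewards=np.array([0.0, 1.0, 0.5]))
print("chose option", a, "updated values", vals)
```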
The impact of crosstalk on three-dimensional laparoscopic performance and workload.
Sakata, Shinichiro; Grove, Philip M; Watson, Marcus O; Stevenson, Andrew R L
2017-10-01
This is the first study to explore the effects of crosstalk from 3D laparoscopic displays on technical performance and workload. We studied crosstalk at magnitudes that may be tolerated during laparoscopic surgery. Participants were 36 volunteer doctors. To minimize floor effects, all participants had completed their surgery rotations and a laparoscopic suturing course for surgical trainees. We used a counterbalanced, within-subjects design in which participants were randomly assigned to complete laparoscopic tasks in one of six unique testing sequences. In a simulation laboratory, participants were randomly assigned to complete laparoscopic 'navigation in space' and suturing tasks in three viewing conditions: 2D, 3D without ghosting, and 3D with ghosting. Participants calibrated their exposure to crosstalk as the maximum level of ghosting that they could tolerate without discomfort. The Randot® Stereotest was used to verify stereoacuity. The study performance metric was time to completion. The NASA TLX was used to measure workload. Normal threshold stereoacuity (40-20 seconds of arc) was verified in all participants. Comparing optimal 3D with 2D viewing conditions, mean performance times were 2.8 and 1.6 times faster in the laparoscopic navigation-in-space and suturing tasks respectively (p < .001). Comparing optimal 3D with suboptimal 3D viewing conditions, mean performance times were 2.9 times faster in both tasks (p < .001). Mean workload in 2D was 1.5 and 1.3 times greater than in optimal 3D viewing, for the navigation-in-space and suturing tasks respectively (p < .001). Mean workload associated with suboptimal 3D was 1.3 times greater than with optimal 3D in both laparoscopic tasks (p < .001). There was no significant relationship between the magnitude of the ghosting score and either laparoscopic performance or workload. Our findings highlight the advantages of 3D displays when used optimally, and their shortcomings when used sub-optimally, on both laparoscopic performance and workload.
Engineering and Physics Optimization of Breed and Burn Fast Reactor Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael J. Driscoll; Pavel Hejzlar; Peter Yarsky
2005-12-09
This project is organized under four major tasks (each of which has two or more subtasks) with contributions from the three collaborating organizations (MIT, INEEL, and ANL-West): Task A: Core Physics and Fuel Cycle; Task B: Core Thermal Hydraulics; Task C: Plant Design; and Task D: Fuel Design.
NASA Astrophysics Data System (ADS)
Eyono Obono, S. D.; Basak, Sujit Kumar
2011-12-01
The general formulation of the assignment problem consists in the optimal allocation of a given set of tasks to a workforce. This problem is covered by existing literature for different domains such as distributed databases, distributed systems, transportation, packet radio networks, IT outsourcing, and teaching allocation. This paper presents a new version of the assignment problem for the allocation of academic tasks to staff members in departments with long leave opportunities. It describes a workload allocation scheme and its algorithm for the equitable allocation of tasks in academic departments where long leaves are necessary.
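The underlying combinatorial core can be sketched with an off-the-shelf solver; the cost matrix below is invented, and the paper's scheme layers leave constraints and equity considerations on top of this basic formulation.

```python
# Illustrative sketch (not the paper's algorithm): the classical assignment
# problem solved with the Hungarian method from SciPy. cost[i, j] is the
# (made-up) cost of assigning staff member i to task j.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```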
Task-Driven Orbit Design and Implementation on a Robotic C-Arm System for Cone-Beam CT.
Ouadah, S; Jacobson, M; Stayman, J W; Ehtiati, T; Weiss, C; Siewerdsen, J H
2017-03-01
This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT
NASA Astrophysics Data System (ADS)
Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.
2017-03-01
Purpose: This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. Methods: We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. Results: For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. Conclusions: This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
CAMS as a tool for human factors research in spaceflight
NASA Astrophysics Data System (ADS)
Sauer, Juergen
2004-01-01
The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.
Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1995-01-01
A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
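A minimal sketch of the flavor of analysis involved (not the paper's graph-search tool): a list scheduler that respects dataflow precedence on identical processors using a longest-task-first priority. The DAG and task times are made up.

```python
# Made-up dataflow DAG: edges point from producer to consumer.
from collections import defaultdict

succ = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}
cost = {"a": 2, "b": 3, "c": 1, "d": 2}

def list_schedule(succ, cost, n_procs=2):
    pred = defaultdict(list)
    for u, vs in succ.items():
        for v in vs:
            pred[v].append(u)
    finish, proc_free, order = {}, [0.0] * n_procs, []
    ready = [u for u in succ if not pred[u]]
    while ready:
        u = max(ready, key=cost.get)                    # longest-task-first
        ready.remove(u)
        p = min(range(n_procs), key=proc_free.__getitem__)
        start = max([proc_free[p]] + [finish[w] for w in pred[u]])
        finish[u] = start + cost[u]
        proc_free[p] = finish[u]
        order.append((u, p, start))
        ready += [v for v in succ[u] if all(w in finish for w in pred[v])]
    return order, max(finish.values())

schedule, makespan = list_schedule(succ, cost)
print(schedule, "makespan:", makespan)
```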
Nygren, T E
1997-09-01
It is well documented that the way a static choice task is "framed" can dramatically alter choice behavior, often leading to observable preference reversals. This framing effect appears to result from perceived changes in the nature or location of a person's initial reference point, but it is not clear how framing effects might generalize to performance on dynamic decision making tasks that are characterized by high workload, time constraints, risk, or stress. A study was conducted to examine the hypothesis that framing can introduce affective components to the decision making process and can influence, either favorably (positive frame) or adversely (negative frame), the implementation and use of decision making strategies in dynamic high-workload environments. Results indicated that negative frame participants were significantly impaired in developing and employing a simple optimal decision strategy relative to a positive frame group. Discussion focuses on implications of these results for models of dynamic decision making.
Fault tolerance of artificial neural networks with applications in critical systems
NASA Technical Reports Server (NTRS)
Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.
1992-01-01
This paper investigates the fault tolerance characteristics of time-continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principle of operation and the performance of these networks are first illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.
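A toy fault-injection experiment in the same spirit (not the paper's recurrent network or fault model) might look like this:

```python
# Illustrative sketch: force a random subset of neuron outputs to stuck-at-0
# or stuck-at-1 and observe the effect on a winner-take-all assignment readout.
import numpy as np

rng = np.random.default_rng(0)
outputs = rng.random((30, 30))            # hypothetical neuron activations

def inject_stuck_at(v, n_faults=13):
    faulty = v.copy()
    idx = rng.choice(v.size, size=n_faults, replace=False)
    faulty.flat[idx] = rng.integers(0, 2, size=n_faults)  # stuck at 0 or 1
    return faulty

clean = outputs.argmax(axis=1)            # row-wise assignment readout
fault = inject_stuck_at(outputs).argmax(axis=1)
print("rows whose assignment changed:", int((clean != fault).sum()))
```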
History matching through dynamic decision-making
Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson
2017-01-01
History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential the dynamic decision-making optimization framework has for improving the quality of history matching solutions using a substantial smaller number of simulations when compared with a previous work on the same benchmark. PMID:28582413
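The 'learning-from-data' loop can be caricatured as below, under invented data rather than the UNISIM-I-H benchmark: cluster evaluated models by their uncertain attributes, find the cluster with the best average mismatch, and propose new candidates there.

```python
# Hedged sketch of the idea, not the authors' framework: unsupervised
# clustering steers sampling toward promising regions of attribute space.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 5))         # attribute vectors of past runs
mismatch = ((X - 0.3) ** 2).sum(axis=1)      # toy history-match objective

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
best = min(range(5), key=lambda k: mismatch[km.labels_ == k].mean())
center = km.cluster_centers_[best]

# Propose new candidate models near the promising region.
new_candidates = center + 0.05 * rng.standard_normal((20, 5))
print("best cluster center:", center.round(2))
```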
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is because processing power evolves faster than memory access speed, a gap bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, efficient handling of the memory hierarchy on CPUs improves rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for rendering techniques other than MIPs, and their use for more general image processing tasks could be investigated in the future.
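As a minimal illustration of the core operation (not the paper's optimized renderer), a MIP is a max-reduction along the viewing axis, and looping over the slowest-varying axis keeps slice accesses contiguous:

```python
# Toy volume; a MIP along z is a single max-reduction.
import numpy as np

volume = np.random.rand(128, 128, 128).astype(np.float32)

mip = volume.max(axis=0)                 # one-shot projection along z

# Equivalent explicit loop: slices are visited in memory order, so each
# pass streams through contiguous data (a cache-friendly access pattern).
acc = np.zeros_like(volume[0])
for z in range(volume.shape[0]):
    np.maximum(acc, volume[z], out=acc)
assert np.allclose(acc, mip)
```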
Jones, A Kyle; Heintz, Philip; Geiser, William; Goldman, Lee; Jerjian, Khachig; Martin, Melissa; Peck, Donald; Pfeiffer, Douglas; Ranger, Nicole; Yorkston, John
2015-11-01
Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Geiser, William; Heintz, Philip
Quality control (QC) in medical imaging is an ongoing process and not just a series of infrequent evaluations of medical imaging equipment. The QC process involves designing and implementing a QC program, collecting and analyzing data, investigating results that are outside the acceptance levels for the QC program, and taking corrective action to bring these results back to an acceptable level. The QC process involves key personnel in the imaging department, including the radiologist, radiologic technologist, and the qualified medical physicist (QMP). The QMP performs detailed equipment evaluations and helps with oversight of the QC program, while the radiologic technologist is responsible for the day-to-day operation of the QC program. The continued need for ongoing QC in digital radiography has been highlighted in the scientific literature. The charge of this task group was to recommend consistency tests designed to be performed by a medical physicist or a radiologic technologist under the direction of a medical physicist to identify problems with an imaging system that need further evaluation by a medical physicist, including a fault tree to define actions that need to be taken when certain fault conditions are identified. The focus of this final report is the ongoing QC process, including rejected image analysis, exposure analysis, and artifact identification. These QC tasks are vital for the optimal operation of a department performing digital radiography.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
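The operator-splitting idea can be sketched in scalar form; the local kinetics below are a placeholder rather than a SANC model, and on a GPU the per-cell reaction step would map to one thread per cell.

```python
# Placeholder kinetics, not a SANC model; invented constants.
import numpy as np

rng = np.random.default_rng(0)
v = -60.0 + 5.0 * rng.random(530)   # 500 SAN cells + 30 atrial cells
D, dt, dx = 0.1, 0.01, 1.0

def reaction(v):
    u = v + 60.0
    return 0.5 * u * (1.0 - u**2 / 400.0)   # toy local excitability term

for _ in range(1000):
    v += dt * reaction(v)                   # step 1: independent per cell
    lap = np.empty_like(v)                  # step 2: diffusion coupling
    lap[1:-1] = v[:-2] - 2.0 * v[1:-1] + v[2:]
    lap[0], lap[-1] = v[1] - v[0], v[-2] - v[-1]
    v += dt * D * lap / dx**2
print("mean potential after 10 time units:", v.mean())
```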
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
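A generic extremal-optimization flavor on a toy task-to-crane assignment (not the article's modified GEO with interference constraints and unidirectional decoding) might look like this:

```python
# Hedged sketch: each task's 'fitness' is the load of the crane it sits on;
# poorly fitted tasks are reassigned with probability ~ rank^(-tau).
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_qcs, tau = 20, 3, 1.5
dur = rng.integers(1, 10, n_tasks)
assign = rng.integers(0, n_qcs, n_tasks)

def makespan(a):
    return max(dur[a == q].sum() for q in range(n_qcs))

best, best_val = assign.copy(), makespan(assign)
for _ in range(2000):
    load = np.array([dur[assign == q].sum() for q in range(n_qcs)])
    fit = load[assign]                              # higher load = worse fit
    ranks = np.argsort(np.argsort(-fit)) + 1        # worst task gets rank 1
    p = ranks ** -tau
    k = rng.choice(n_tasks, p=p / p.sum())          # pick a badly fitted task
    assign[k] = rng.integers(0, n_qcs)              # reassign it randomly
    val = makespan(assign)
    if val < best_val:
        best, best_val = assign.copy(), val
print("best makespan:", best_val)
```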
NASA Technical Reports Server (NTRS)
Friedmann, P. P.; Venkatesan, C.; Yuan, K.
1992-01-01
This paper describes the development of a new structural optimization capability aimed at the aeroelastic tailoring of composite rotor blades with straight and swept tips. The primary objective is to reduce vibration levels in forward flight without diminishing the aeroelastic stability margins of the blade. In the course of this research activity a number of complicated tasks have been addressed: (1) development of a new aeroelastic stability and response analysis; (2) formulation of a new comprehensive sensitivity analysis, which facilitates the generation of the appropriate approximations for the objective and the constraints; (3) physical understanding of the new model and, in particular, determination of its potential for aeroelastic tailoring; and (4) combination of the newly developed analysis capability, the sensitivity derivatives, and the optimizer into a comprehensive optimization capability. The first three tasks have been completed and the fourth task is in progress.
Enhancement of human cognitive performance using transcranial magnetic stimulation (TMS)
Luber, Bruce; Lisanby, Sarah H.
2014-01-01
Here we review the usefulness of transcranial magnetic stimulation (TMS) in modulating cortical networks in ways that might produce performance enhancements in healthy human subjects. To date over sixty studies have reported significant improvements in speed and accuracy in a variety of tasks involving perceptual, motor, and executive processing. Two basic categories of enhancement mechanisms are suggested by this literature: direct modulation of a cortical region or network that leads to more efficient processing, and addition-by-subtraction, the disruption of processing that competes with or distracts from task performance. Potential applications of TMS cognitive enhancement, including research into cortical function, rehabilitation therapy in neurological and psychiatric illness, and accelerated skill acquisition in healthy individuals, are discussed, as are methods of optimizing the magnitude and duration of TMS-induced performance enhancement, such as improvement of targeting through further integration of brain imaging with TMS. One technique, combining multiple sessions of TMS with concurrent TMS/task performance to induce Hebbian-like learning, appears to be promising for prolonging enhancement effects. While further refinements in the application of TMS to cognitive enhancement can still be made, and questions remain regarding the mechanisms underlying the observed effects, this appears to be a fruitful area of investigation that may shed light on the basic mechanisms of cognitive function and their therapeutic modulation. PMID:23770409
Solving the optimal attention allocation problem in manual control
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1976-01-01
Within the context of the optimal control model of human response, analytic expressions for the gradients of closed-loop performance metrics with respect to human operator attention allocation are derived. These derivatives serve as the basis for a gradient algorithm that determines the optimal attention that a human should allocate among several display indicators in a steady-state manual control task. Application of the human modeling techniques is made to study the hover control task for a CH-46 VTOL flight-tested by NASA.
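A sketch in the spirit of such a gradient algorithm (the cost model and step size are invented; Kleinman's derivation uses the optimal control model's closed-loop covariance): take gradient steps and project back onto the simplex of attention fractions.

```python
# Invented cost model; attention fractions live on the probability simplex.
import numpy as np

def project_simplex(x):
    # Euclidean projection onto {f >= 0, sum(f) = 1}.
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(x.size) + 1) > 0)[0][-1]
    return np.maximum(x - css[rho] / (rho + 1), 0.0)

w = np.array([3.0, 1.0, 0.5])       # display importance weights (invented)
f = np.ones(3) / 3                  # start with uniform attention
for _ in range(500):
    grad = -w / f**2                # gradient of cost(f) = sum(w_i / f_i)
    f = np.maximum(project_simplex(f - 0.002 * grad), 1e-6)
print("attention allocation:", f.round(3))   # ~ proportional to sqrt(w)
```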
1989-12-01
…to construct because the mechanism is a dispatching procedure. Since all nonpreemptive schedules are contained in the set of all preemptive schedules, the optimal value of T in the preemptive case is at least a lower bound on the optimal T for the nonpreemptive schedules. This principle is the… adapt to changes in the environment. In hard real-time systems, tasks are also distinguished as preemptable and nonpreemptable. A task is preemptable…
NASA Astrophysics Data System (ADS)
Platisa, Ljiljana; Vansteenkiste, Ewout; Goossens, Bart; Marchessoux, Cédric; Kimpe, Tom; Philips, Wilfried
2009-02-01
Medical-imaging systems are designed to aid medical specialists in a specific task. Therefore, the physical parameters of a system need to be optimized for the task performance of a human observer, which requires measurements of human performance in a given task during system optimization. Typically, psychophysical studies are conducted for this purpose. Numerical observer models have been successfully used to predict human performance in several detection tasks. In particular, the task of signal detection using a channelized Hotelling observer (CHO) in simulated images has been widely explored. However, few studies have been done for clinically acquired images that also contain anatomic noise. In this paper, we investigate the performance of a CHO in the task of detecting lung nodules in real radiographic images of the chest. To evaluate the variability introduced by the limited available data, we employ a commonly used multi-reader multi-case (MRMC) study design, which accounts for both case and reader variability. Finally, we use the "one-shot" method to estimate the MRMC variance of the area under the ROC curve (AUC). The obtained AUC compares well to those reported for human observer studies on a similar data set. Furthermore, the "one-shot" analysis implies a fairly consistent performance of the CHO, with the variance of the AUC below 0.002. This indicates promising potential for numerical observers in the optimization of medical imaging displays and encourages further investigation of the subject.
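For reference, the core CHO computation can be sketched as follows; the difference-of-Gaussian channel bank and toy nodule profile are illustrative stand-ins, not the paper's channels or clinical data.

```python
# Hedged sketch of a generic channelized Hotelling observer computation.
import numpy as np

rng = np.random.default_rng(0)
n, n_ch = 32, 4

# Toy channel bank: radially symmetric difference-of-Gaussian profiles.
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
r2 = x**2 + y**2
channels = np.stack([np.exp(-r2 / (2 * s**2)) - np.exp(-r2 / (2 * (2*s)**2))
                     for s in (1, 2, 4, 8)]).reshape(n_ch, -1)

signal = np.exp(-r2 / 8.0).ravel()              # toy nodule profile
noise = rng.standard_normal((500, n * n))       # background-only samples

v = channels @ noise.T                          # channel outputs (n_ch x 500)
K = np.cov(v)                                   # channel noise covariance
s_ch = channels @ signal                        # channel response to signal
w = np.linalg.solve(K, s_ch)                    # Hotelling template
d_prime = np.sqrt(s_ch @ w)                     # detectability index
print("CHO d':", d_prime)
```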
Lara, Tania; Madrid, Juan Antonio; Correa, Ángel
2014-01-01
Time of day modulates our cognitive functions, especially those related to executive control, such as the ability to inhibit inappropriate responses. However, the impact of individual differences in time of day preferences (i.e. morning vs. evening chronotype) had not been considered by most studies. It was also unclear whether the vigilance decrement (impaired performance with time on task) depends on both time of day and chronotype. In this study, morning-type and evening-type participants performed a task measuring vigilance and response inhibition (the Sustained Attention to Response Task, SART) in morning and evening sessions. The results showed that the vigilance decrement in inhibitory performance was accentuated at non-optimal as compared to optimal times of day. In the morning-type group, inhibition performance decreased linearly with time on task only in the evening session, whereas in the morning session it remained more accurate and stable over time. In contrast, inhibition performance in the evening-type group showed a linear vigilance decrement in the morning session, whereas in the evening session the vigilance decrement was attenuated, following a quadratic trend. Our findings imply that the negative effects of time on task in executive control can be prevented by scheduling cognitive tasks at the optimal time of day according to specific circadian profiles of individuals. Therefore, time of day and chronotype influences should be considered in research and clinical studies as well as real-word situations demanding executive control for response inhibition. PMID:24586404
Visual anticipation biases conscious decision making but not bottom-up visual processing.
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J
2014-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view of the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether predictions or prediction errors dominate the formation of conscious experience. Yet the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the use of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades, and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself.
Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In the paper, the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP, we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and on dynamic programming.
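A toy version of the path-ordering subproblem (a plain nearest-neighbor heuristic, not the Chentsov megalopolis/dynamic-programming model) is sketched below.

```python
# Hypothetical pierce points on a sheet; order them to cut rapid-traverse
# (airtime) distance with a nearest-neighbor tour.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (15, 2))

def nearest_neighbor_tour(pts, start=0):
    left = set(range(len(pts))) - {start}
    tour = [start]
    while left:
        cur = tour[-1]
        nxt = min(left, key=lambda j: np.hypot(*(pts[j] - pts[cur])))
        tour.append(nxt)
        left.remove(nxt)
    return tour

tour = nearest_neighbor_tour(pts)
length = sum(np.hypot(*(pts[tour[i+1]] - pts[tour[i]])) for i in range(len(tour) - 1))
print(tour, "traverse length:", round(length, 1))
```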
Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode
NASA Astrophysics Data System (ADS)
Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.
2012-12-01
Nowadays, using satellites in space to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on it, owing to the benefits of satellites' wide coverage, timeliness, and independence of area and country. At the same time, because of the wide use of all kinds of satellites, sensors, repeater satellites and ground receiving stations, ground control systems now face great challenges. Therefore, how to extract the best value from satellite resources and make full use of them has become an important problem for ground control systems. Satellite scheduling distributes resources to tasks without conflict, so as to complete as many tasks as possible and meet user requirements under the constraints of the satellites, sensors and ground receiving stations. By the size of the target, tasks can be divided into point tasks and area tasks; this paper considers only point targets. A description of the satellite scheduling problem and a brief introduction to the theory of satellite scheduling are given first. We also analyze the resource and task constraints in satellite scheduling, and briefly describe the input and output flow of the scheduling process. On the basis of these analyses, we put forward a scheduling model, named the multi-variable optimization model, for multi-satellite, point-target tasks in swinging mode. In this model, the scheduling problem is transformed into a parametric optimization problem; the parameters to be optimized are the swinging angles of the time windows. With a view to efficiency and accuracy, several important problems related to satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of an activity sequence map, by which the choice of activity and the start time of the activity can be separated. We also introduce three neighborhood operators to search the solution space; the front-movement remaining time and the back-movement remaining time are used to analyze the feasibility of generating solutions from these operators. Finally, an algorithm based on a genetic algorithm is put forward to solve the problem and model; population initialization, a crossover operator, a mutation operator, individual evaluation, a collision-decrease operator, a selection operator and a collision-elimination operator are designed in the paper. The scheduling result and a simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.
Colombo, Roberto; Sterpi, Irma; Mazzone, Alessandra; Delconte, Carmen; Pisano, Fabrizio
2012-05-01
In robot-assisted neurorehabilitation, matching the task difficulty level to the patient's needs and abilities, both initially and as the relearning process progresses, can enhance the effectiveness of training and improve patients' motivation and outcome. This study presents a Progressive Task Regulation algorithm implemented in a robot for upper limb rehabilitation. It evaluates the patient's performance during training through the computation of robot-measured parameters, and automatically changes the features of the reaching movements, adapting the difficulty level of the motor task to the patient's abilities. In particular, it can select different types of assistance (time-triggered, activity-triggered, and negative assistance) and implement varied therapy practice to promote generalization processes. The algorithm was tuned by assessing the performance data obtained in 22 chronic stroke patients who underwent robotic rehabilitation, in which the difficulty level of the task was manually adjusted by the therapist. Thus, we could verify the patients' recovery strategies and implement task transition rules to match both the patient's and the therapist's behavior. In addition, the algorithm was tested in a sample of five chronic stroke patients. The findings show good agreement with the therapist's decisions, indicating that the algorithm could be useful for implementing training protocols that allow individualized and gradual treatment of upper limb disabilities in patients after stroke. The application of this algorithm during robot-assisted therapy should allow easier management of the different motor tasks administered during training, thereby facilitating the therapist's activity in treating different pathologic conditions of the neuromuscular system.
Liu, Y; Wickens, C D
1994-11-01
The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.
Artificial intelligence for the CTA Observatory scheduler
NASA Astrophysics Data System (ADS)
Colomé, Josep; Colomer, Pau; Campreciós, Jordi; Coiffard, Thierry; de Oña, Emma; Pedaletti, Giovanna; Torres, Diego F.; Garcia-Piquer, Alvaro
2014-08-01
The Cherenkov Telescope Array (CTA) project will be the next generation ground-based very high energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure, which has been included in the roadmap of the ESFRI projects since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory, with two sites foreseen in the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with a highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real time, the selected task to the varying execution constraints (i.e., targets of opportunity, health or status of the system components, environment conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints on CTA task scheduling that helped classify it as a Flexible Job-Shop Problem case and find its optimal solution based on Artificial Intelligence techniques. We describe the scheduler prototype, which uses a Guarded Discrete Stochastic Neural Network (GDSN) for an easy representation of the possible long- and short-term planning solutions, together with Constraint Propagation techniques. A simulation platform, an analysis tool and different test case scenarios for CTA were developed to test the performance of the scheduler and are also described.
Composite fuselage crown panel manufacturing technology
NASA Technical Reports Server (NTRS)
Willden, Kurtis; Metschan, S.; Grant, C.; Brown, T.
1992-01-01
Commercial fuselage structures contain significant challenges in attempting to save manufacturing costs with advanced composite technology. Assembly issues, materials costs, and fabrication of elements with complex geometry are each expected to drive the cost of composite fuselage structure. Key technologies, such as large crown panel fabrication, were pursued for low cost. An intricate bond panel design and manufacturing concept were selected based on the efforts of the Design Build Team. The manufacturing processes selected for the intricate bond design include multiple large panel fabrication with Advanced Tow Placement (ATP) process, innovative cure tooling concepts, resin transfer molding of long fuselage frames, and use of low cost materials forms. The process optimization for final design/manufacturing configuration included factory simulations and hardware demonstrations. These efforts and other optimization tasks were instrumental in reducing costs by 18 pct. and weight by 45 pct. relative to an aluminum baseline. The qualitative and quantitative results of the manufacturing demonstrations were used to assess manufacturing risks and technology readiness.
Composite fuselage crown panel manufacturing technology
NASA Technical Reports Server (NTRS)
Willden, Kurtis; Metschan, S.; Grant, C.; Brown, T.
1992-01-01
Commercial fuselage structures contain significant challenges in attempting to save manufacturing costs with advanced composite technology. Assembly issues, material costs, and fabrication of elements with complex geometry are each expected to drive the cost of composite fuselage structures. Boeing's efforts under the NASA ACT program have pursued key technologies for low-cost, large crown panel fabrication. An intricate bond panel design and manufacturing concepts were selected based on the efforts of the Design Build Team (DBT). The manufacturing processes selected for the intricate bond design include multiple large panel fabrication with the Advanced Tow Placement (ATP) process, innovative cure tooling concepts, resin transfer molding of long fuselage frames, and utilization of low-cost material forms. The process optimization for final design/manufacturing configuration included factory simulations and hardware demonstrations. These efforts and other optimization tasks were instrumental in reducing cost by 18 percent and weight by 45 percent relative to an aluminum baseline. The qualitative and quantitative results of the manufacturing demonstrations were used to assess manufacturing risks and technology readiness.
Modelling and Simulation in the Design Process of Armored Vehicles
2003-03-01
…trackway conditions is a demanding optimization task. Basically, a high level of ride comfort requires soft suspension tuning, whereas driving safety relies… The maximum off-road speed is generally limited by traction, input torque, driving safety and ride comfort. When obstacles are to be negotiated, the… wheel travel was defined during the mobility simulation runs. [Figure 14: Ramp 1.5 m at 40 kph; virtual and physical prototype] Driving safety and ride…
Non-traditional Sensor Tasking for SSA: A Case Study
NASA Astrophysics Data System (ADS)
Herz, A.; Herz, E.; Center, K.; Martinez, I.; Favero, N.; Clark, C.; Therien, W.; Jeffries, M.
Industry has recognized that maintaining SSA of the orbital environment going forward is too challenging for the government alone. Consequently, a significant number of commercial activities in various stages of development are standing up novel sensors and sensor networks to assist in SSA gathering and dissemination. Use of these systems will allow government and military operators to focus on the most sensitive space control issues while allocating routine or lower-priority data gathering responsibility to the commercial side. The fact that there will be multiple (perhaps many) commercial sensor capabilities available in this new operational model begets a common access solution. Absent a central access point to assert data needs, optimized use of all commercial sensor resources is not possible and the opportunity for coordinated collections satisfying overarching SSA-elevating objectives is lost. Orbit Logic is maturing its Heimdall Web system, an architecture facilitating “data requestor” perspectives (allowing government operations centers to assert SSA data gathering objectives) and “sensor operator” perspectives (through which multiple sensors of varying phenomenology and capability are integrated via machine-to-machine interfaces). When requestors submit their needs, Heimdall’s planning engine determines tasking schedules across all sensors, optimizing their use via an SSA-specific figure of merit. ExoAnalytic was a key partner in refining the sensor operator interfaces, working with Orbit Logic through specific details of sensor tasking schedule delivery and the return of observation data. Scant preparation on both sides preceded several integration exercises (walk-then-run style), which culminated in a successful demonstration of the ability to supply optimized schedules for routine public catalog data collection, and then to adapt sensor tasking schedules in real time upon receipt of urgent data collection requests. This paper will provide a narrative of the joint integration process, detailing decision points, compromises, and results obtained on the road toward a set of interoperability standards for commercial sensor accommodation.
Boselie, J J L M; Vancleef, L M G; Peters, M L
2018-03-24
Chronic pain is associated with emotional problems as well as difficulties in cognitive functioning. Prior experimental studies have shown that optimism, the tendency to expect that good things happen in the future, and positive emotions can counteract pain-induced task performance deficits in healthy participants. More specifically, induced optimism was found to buffer against the negative effects of experimental pain on executive functioning. This clinical experiment examined whether this beneficial effect can be extended to a chronic pain population. Patients (N = 122) were randomized to a positive psychology Internet-based intervention (PPI; n = 74) or a waiting list control condition (WLC; n = 48). The PPI consisted of positive psychology exercises that particularly target optimism, positive emotions and self-compassion. Results demonstrated that patients in the PPI condition scored higher on happiness, optimism, positive future expectancies, positive affect, self-compassion and ability to live a desired life despite pain, and scored lower on pain catastrophizing, depression and anxiety compared to patients in the WLC condition. However, executive task performance did not improve following completion of the PPI, compared to the WLC condition. Despite the lack of evidence that positive emotions and optimism can improve executive task performance in chronic pain patients, this study did convincingly demonstrate that it is possible to increase positive emotions and optimism in chronic pain patients with an online positive psychology intervention. It is imperative to further explore amendable psychological factors that may reduce the negative impact of pain on executive functioning. We demonstrated that an Internet-based positive psychology intervention strengthens optimism and positive emotions in chronic pain patients. These emotional improvements are not associated with improved executive task performance. As pain itself often cannot be relieved, it is imperative to have techniques to reduce the burden of living with chronic pain.
Heimdall System for MSSS Sensor Tasking
NASA Astrophysics Data System (ADS)
Herz, A.; Jones, B.; Herz, E.; George, D.; Axelrad, P.; Gehly, S.
In Norse Mythology, Heimdall uses his foreknowledge and keen eyesight to keep watch for disaster from his home near the Rainbow Bridge. Orbit Logic and the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado (CU) have developed the Heimdall System to schedule observations of known and uncharacterized objects and search for new objects from the Maui Space Surveillance Site. Heimdall addresses the current need for automated and optimized SSA sensor tasking driven by factors associated with improved space object catalog maintenance. Orbit Logic and CU developed an initial baseline prototype SSA sensor tasking capability for select sensors at the Maui Space Surveillance Site (MSSS) using STK and STK Scheduler, and then added a new Track Prioritization Component for FiSST-inspired computations for predicted Information Gain and Probability of Detection, and a new SSA-specific Figure-of-Merit (FOM) for optimized SSA sensor tasking. While the baseline prototype addresses automation and some of the multi-sensor tasking optimization, the SSA-improved prototype addresses all of the key elements required for improved tasking leading to enhanced object catalog maintenance. The Heimdall proof-of-concept was demonstrated for MSSS SSA sensor tasking for a 24 hour period to attempt observations of all operational satellites in the unclassified NORAD catalog, observe a small set of high priority GEO targets every 30 minutes, make a sky survey of the GEO belt region accessible to MSSS sensors, and observe particular GEO regions that have a high probability of finding new objects with any excess sensor time. This Heimdall prototype software paves the way for further R&D that will integrate this technology into the MSSS systems for operational scheduling, improve the software's scalability, and further tune and enhance schedule optimization. The Heimdall software for SSA sensor tasking provides greatly improved performance over manual tasking, improved coordinated sensor usage, and tasking schedules driven by catalog improvement goals (reduced overall covariance, etc.). The improved performance also enables more responsive sensor tasking to address external events, newly detected objects, newly detected object activity, and sensor anomalies. Instead of having to wait until the next day's scheduling phase, events can be addressed with new tasking schedules immediately (within seconds or minutes). Perhaps the most important benefit is improved SSA based on an overall improvement to the quality of the space catalog. By driving sensor tasking and scheduling based on predicted Information Gain and other relevant factors, better decisions are made in the application of available sensor resources, leading to an improved catalog and better information about the objects of most interest. The Heimdall software solution provides a configurable, automated system to improve sensor tasking efficiency and responsiveness for SSA applications. The FISST algorithms for Track Prioritization, SSA specific task and resource attributes, Scheduler algorithms, and configurable SSA-specific Figure-of-Merit together provide optimized and tunable scheduling for the Maui Space Surveillance Site and possibly other sites and organizations across the U.S. military and for allies around the world.
Ma, Wei Ji; Shen, Shan; Dziugaite, Gintare; van den Berg, Ronald
2015-01-01
In tasks such as visual search and change detection, a key question is how observers integrate noisy measurements from multiple locations to make a decision. Decision rules proposed to model this process have fallen into two categories: Bayes-optimal (ideal observer) rules and ad-hoc rules. Among the latter, the maximum-of-outputs (max) rule has been most prominent. Reviewing recent work and performing new model comparisons across a range of paradigms, we find that in all cases except one, the optimal rule describes human data as well as or better than every max rule either previously proposed or newly introduced here. This casts doubt on the utility of the max rule for understanding perceptual decision-making. PMID:25584425
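The contrast between the two rule families can be sketched on a toy detection task; the signal strength, prior, and criterion below are illustrative.

```python
# Hedged sketch: optimal (averaged likelihood ratio) vs. max-of-outputs rule
# on a yes/no detection task with one possible target among N locations.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_loc, d = 20000, 4, 1.0
present = rng.random(n_trials) < 0.5
x = rng.standard_normal((n_trials, n_loc))
tgt = rng.integers(0, n_loc, n_trials)
x[np.arange(n_trials), tgt] += d * present    # add signal when present

llr = d * x - d**2 / 2                        # per-location log-likelihood ratio
opt_stat = np.log(np.mean(np.exp(llr), axis=1))   # Bayes-optimal integration
max_stat = x.max(axis=1)                          # max rule

def accuracy(stat):
    thr = np.median(stat)                     # crude criterion for the sketch
    return np.mean((stat > thr) == present)

print("optimal:", accuracy(opt_stat), "max rule:", accuracy(max_stat))
```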
Use of EPANET solver to manage water distribution in Smart City
NASA Astrophysics Data System (ADS)
Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.
2018-02-01
This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network built in the EPANET environment. The application can perform both single and cyclic simulations with a specified step for changing the values of selected process variables. The paper presents the architecture of the application. The application supports the selection of the best device control algorithm using optimization methods; optimization is possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
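A rough sketch of this kind of simulation-in-the-loop optimization is given below. It assumes the open-source wntr Python package as the EPANET interface; the network file name, pump name, target pressure, and pressure-deficit objective are all hypothetical placeholders, not details from the paper.

```python
# Sketch: optimizing a pump speed setting against an EPANET model.
# Assumes the `wntr` package and a network file 'network.inp'; the pump
# name 'PUMP1' and the pressure-deficit objective are illustrative.
import wntr
from scipy.optimize import minimize

wn = wntr.network.WaterNetworkModel('network.inp')

def pressure_deficit(x):
    # x[0]: relative speed of pump 'PUMP1' (hypothetical element name)
    wn.get_link('PUMP1').base_speed = float(x[0])
    results = wntr.sim.EpanetSimulator(wn).run_sim()
    pressure = results.node['pressure']
    target = 40.0  # desired service pressure, metres head (assumed)
    return ((target - pressure).clip(lower=0.0) ** 2).values.sum()

# SLSQP is one of the methods named in the paper; method='Powell' is another.
best = minimize(pressure_deficit, x0=[1.0], bounds=[(0.5, 1.5)], method='SLSQP')
print(best.x)
```

A cyclic study, as described in the paper, would wrap this call in a loop that steps the selected process variable between simulations.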
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process used to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter, and focus point position, as well as by the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure, and focal point position. The aim of this work is to relate the operating cost to these process parameters. CO2 laser cutting of stainless steel of medical grade AISI316L was investigated, with the main goal of analyzing the operating cost as a function of laser power, cutting speed, air pressure, focal point position, and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented, and the performance of the proposed estimator was confirmed with simulation results. The SVR results were then compared with artificial neural networks and genetic programming. According to the results, greater estimation accuracy can be achieved with SVR than with the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
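A minimal SVR sketch in this spirit, using scikit-learn; the feature values and costs below are made-up placeholders rather than data from the study.

```python
# Sketch: relating laser-cutting operating cost to process parameters with
# support vector regression. Feature columns follow the paper's factors;
# the numbers are illustrative, not measurements from the study.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: laser power, cutting speed, air pressure, focal position, thickness
X = np.array([[2000, 2.0, 0.9, -1.0, 3.0],
              [2500, 1.5, 1.1, -2.0, 5.0],
              [3000, 1.0, 1.3, -2.5, 8.0]])
y = np.array([0.8, 1.1, 1.6])  # operating cost per metre (made-up values)

model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=0.01))
model.fit(X, y)
print(model.predict([[2200, 1.8, 1.0, -1.5, 4.0]]))
```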
Modeling human decision making behavior in supervisory control
NASA Technical Reports Server (NTRS)
Tulga, M. K.; Sheridan, T. B.
1977-01-01
An optimal decision control model was developed, based primarily on a dynamic programming algorithm that considers all available task possibilities, charts an optimal trajectory, commits to the first step (i.e., follows the optimal trajectory during the next time period), and then iterates the calculation. A Bayesian estimator was included that estimates the tasks which might occur in the immediate future and provides this information to the dynamic programming routine. Preliminary trials comparing the human subject's performance to that of the optimal model show great similarity, but indicate that the human skips certain movements which require a quick change in strategy.
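The plan-commit-replan loop can be sketched as below; brute-force enumeration over a short horizon stands in here for the paper's dynamic programming algorithm, and the Bayesian task-arrival estimator is omitted. Task durations, deadlines, and rewards are toy values.

```python
# Sketch: receding-horizon task selection — plan over the known tasks,
# execute only the first step, then replan. Enumeration replaces the
# paper's dynamic program; all task values are illustrative.
import itertools

def plan(tasks, horizon=3):
    """Return the first task of the best ordering (by total reward)."""
    best, best_val = None, float('-inf')
    for perm in itertools.permutations(tasks, min(horizon, len(tasks))):
        t, val = 0, 0.0
        for dur, deadline, reward in perm:
            t += dur
            if t <= deadline:               # reward only if done by deadline
                val += reward
        if val > best_val:
            best, best_val = perm, val
    return best[0]                          # commit only to the first step

tasks = [(2, 5, 3.0), (1, 2, 2.0), (3, 9, 4.0)]   # (duration, deadline, reward)
while tasks:
    first = plan(tasks)
    tasks.remove(first)                     # "execute" it, then replan
print("done")
```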
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation where the underlying computer models are extremely expensive; in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
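A minimal sketch of the surrogate idea, assuming scikit-learn: a Gaussian process is fit to a cheap stand-in limit-state function g (failure when g < 0), and the failure probability is then estimated by Monte Carlo on the surrogate. The sequential design loop that selects new sampling points, the core contribution of the paper, is omitted.

```python
# Sketch: failure-probability estimation with a GP surrogate of a limit
# state. The true g here is a cheap toy function standing in for an
# expensive computer model; the design-point selection loop is omitted.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
g = lambda x: 2.5 - x[:, 0]**2 - 0.5 * x[:, 1]     # toy limit-state function

X_train = rng.uniform(-3, 3, (40, 2))              # pretend these are the
gp = GaussianProcessRegressor(RBF(1.0)).fit(X_train, g(X_train))  # expensive runs

X_mc = rng.normal(0.0, 1.0, (50000, 2))            # cheap Monte Carlo on surrogate
p_fail = (gp.predict(X_mc) < 0).mean()
print(p_fail)
```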
Optimally designing games for behavioural research
Rafferty, Anna N.; Zaharia, Matei; Griffiths, Thomas L.
2014-01-01
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision. PMID:25002821
NASA Technical Reports Server (NTRS)
Patten, William Neff
1989-01-01
There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving-grid, time-infinite-element-based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The resulting algorithm is being implemented in a microprocessor environment. Critical computational tasks are accomplished using low-cost, on-board multiprocessors (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.
An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI
Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.
2015-01-01
BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the “pipeline”) significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard “fixed” preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667
Spatio-temporal Hotelling observer for signal detection from image sequences
Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.
2010-01-01
Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494
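A minimal numerical sketch of the Hotelling template, using a synthetic one-dimensional stand-in for the spatio-temporal data described above: the covariance K is estimated from signal-absent training samples, and the observer applies the prewhitened template K^{-1} s to a test image.

```python
# Sketch: Hotelling observer for a signal-known-exactly detection task.
# The signal profile, noise model, and 1-D data are synthetic stand-ins
# for the paper's spatio-temporal quantities.
import numpy as np

rng = np.random.default_rng(1)
n_pix = 64
s = np.zeros(n_pix)
s[28:36] = 1.0                                 # known signal profile (assumed)

# Estimate the data covariance K from signal-absent training samples;
# the shared term makes the noise correlated, so prewhitening matters.
train = rng.normal(0.0, 1.0, (500, n_pix)) + 0.5 * rng.normal(size=(500, 1))
K = np.cov(train, rowvar=False) + 1e-6 * np.eye(n_pix)

w = np.linalg.solve(K, s)                      # Hotelling template: K^{-1} s
g = rng.normal(0.0, 1.0, n_pix) + s            # one signal-present test image
t = w @ g                                      # test statistic vs. a threshold
print(t)
```

With white noise K is proportional to the identity and the template reduces to the familiar matched filter, which is the generalization the abstract refers to.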
Spatio-temporal Hotelling observer for signal detection from image sequences.
Caucci, Luca; Barrett, Harrison H; Rodriguez, Jeffrey J
2009-06-22
Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved as follows. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction; hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification, which are directly transplanted from the face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms state-of-the-art subspace techniques for face verification.
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single-resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics which do not directly attempt to optimize the world utility. In this paper, we propose an agent-based control system using the theory of collectives. We configure the servers of our network with agents who make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
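A sketch of the general scheme on the least squares loss: gradient steps on the decomposition W = P + Q, with soft-thresholding standing in for the sparse (cardinality-relaxed) part and singular-value thresholding for the low-rank part. This substitutes standard proximal operators for the paper's exact Euclidean projection; the step size and regularization weights are illustrative.

```python
# Sketch: projected/proximal gradient for a sparse-plus-low-rank multi-task
# model W = P + Q with the least squares loss. Standard soft/singular-value
# thresholding replaces the paper's exact projection; values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 200, 30, 5                      # samples, features, tasks
X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, k))
P, Q = np.zeros((d, k)), np.zeros((d, k))
eta, lam_s, lam_r = 1e-3, 0.1, 0.5        # step size, sparse and rank weights

for _ in range(200):
    G = X.T @ (X @ (P + Q) - Y) / n       # gradient of 0.5/n * ||X W - Y||^2
    # Sparse part: entrywise soft-thresholding (L1 proximal step)
    P = np.sign(P - eta * G) * np.maximum(np.abs(P - eta * G) - eta * lam_s, 0)
    # Low-rank part: singular-value thresholding (nuclear-norm proximal step)
    U, sv, Vt = np.linalg.svd(Q - eta * G, full_matrices=False)
    Q = U @ np.diag(np.maximum(sv - eta * lam_r, 0)) @ Vt

print(np.count_nonzero(P), np.linalg.matrix_rank(Q))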
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.
Chen, Jianhui; Liu, Ji; Ye, Jieping
2012-02-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
In Search of the Optimal Path: How Learners at Task Use an Online Dictionary
ERIC Educational Resources Information Center
Hamel, Marie-Josee
2012-01-01
We have analyzed circa 180 navigation paths followed by six learners while they performed three language encoding tasks at the computer using an online dictionary prototype. Our hypothesis was that learners who follow an "optimal path" while navigating within the dictionary, using its search and look-up functions, would have a high chance of…
Multiphase porous media modelling: A novel approach to predicting food processing performance.
Khan, Md Imran H; Joardder, M U H; Kumar, Chandan; Karim, M A
2018-03-04
The development of a physics-based model of food processing is essential to improve the quality of processed food and optimize energy consumption. Food materials, particularly plant-based food materials, are complex in nature as they are porous and have hygroscopic properties. A multiphase porous media model for simultaneous heat and mass transfer can provide a realistic understanding of transport processes and thus can help to optimize energy consumption and improve food quality. Although the development of a multiphase porous media model for food processing is a challenging task because of its complexity, many researchers have attempted it. The primary aim of this paper is to present a comprehensive review of the multiphase models available in the literature for different methods of food processing, such as drying, frying, cooking, baking, heating, and roasting. A critical review of the parameters that should be considered for multiphase modelling is presented, covering input parameters, material properties, simulation techniques, and the underlying hypotheses. A discussion on the general trends in outcomes, such as moisture saturation, temperature profile, pressure variation, and evaporation patterns, is also presented. The paper concludes by considering key issues in the existing multiphase models and future directions for the development of multiphase models.
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.
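Since mPMHE drops the process-model error term, each local window estimate reduces to a constrained least-squares fit of the horizon's measurements. Below is a toy single-area sketch with an assumed linearized measurement matrix and state bounds; it is not the paper's power-system model.

```python
# Sketch: one moving-horizon estimate as constrained least squares over the
# last T measurements. Measurement model and bounds are toy stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T, n = 10, 3
H = rng.normal(size=(T, n))               # linearized measurement matrix (assumed)
x_true = np.array([1.0, 0.5, -0.2])
z = H @ x_true + 0.05 * rng.normal(size=T)

def cost(x):
    r = z - H @ x
    return r @ r                          # residuals would be weighted in practice

# State bounds, per the mPMHE idea of constraining states to resist outliers
bounds = [(-2.0, 2.0)] * n
est = minimize(cost, x0=np.zeros(n), bounds=bounds, method='SLSQP')
print(est.x)
```

In the distributed scheme, each area would solve one such problem per window, exchanging its latest estimates with neighboring areas between solves.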
Analysis of methods of processing of expert information by optimization of administrative decisions
NASA Astrophysics Data System (ADS)
Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.
2018-03-01
This paper presents a methodology for defining measures in the expert estimation of the quality and reliability of application software products. Methods for aggregating expert estimates are described using the example of a collective choice among candidate instrumental-control projects in the development of special-purpose software for institutions. Results from the operation of a dialogue-based decision-support system are given, together with an algorithm for solving the choice problem based on the analytic hierarchy process. The developed algorithm can be applied in expert systems to solve a wide class of problems that involve a multicriteria choice.
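The analytic-hierarchy-process step referred to above can be sketched as follows: priority weights are the principal eigenvector of a pairwise-comparison matrix, with a consistency index to flag incoherent judgments. The comparison values below are illustrative.

```python
# Sketch: the analytic hierarchy process (AHP) step — priority weights from
# a pairwise-comparison matrix via its principal eigenvector. The expert
# judgments in A are illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # expert pairwise judgments (assumed)

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                            # priority vector over alternatives
print(w)

# Consistency index (lambda_max - n) / (n - 1): small values indicate
# coherent expert judgments.
n = A.shape[0]
print((np.max(np.real(vals)) - n) / (n - 1))
```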
Optimal Measurement Tasks and Their Physical Realizations
NASA Astrophysics Data System (ADS)
Yerokhin, Vadim
This thesis reflects works previously published by the author and materials hitherto unpublished on the subject of quantum information theory. In particular, results on optimal discrimination, cloning, and separation of quantum states, and their relationships, are discussed. Our interest lies in the scenario where we are given one of two quantum states prepared with a known a priori probability. We are given full information about the states and are assigned the task of performing an optimal measurement on the incoming state. Given that none of these tasks is in general possible to perform perfectly, we must choose a figure of merit to optimize, and as we shall see there is always a trade-off between competing figures of merit, such as the likelihood of getting the desired result versus the quality of the result. For state discrimination the competing figures of merit are the success rate of the measurement, the errors involved, and the inconclusiveness. Similarly, increasing the separation between states comes at the cost of less frequent successful applications of the separation protocol. For cloning, aside from successfully producing clones, we are also interested in the fidelity of the clones compared to the original state, which is a measure of the quality of the clones. Because all quantum operations obey the same set of conditions for evolution, one may expect similar restrictions on disparate measurement strategies, and our work shows a deep connection between all three branches, with cloning and separation asymptotically converging to state discrimination. Via Neumark's theorem, our description of these unitary processes can be implemented using single-photon interferometry with linear optical devices. Remarkably, any quantum mechanical evolution may be decomposed into an experiment involving only lasers, beamsplitters, phase-shifters, and mirrors. Such readily available tools allow for verification of the aforementioned protocols, and we build upon existing results to derive explicit setups that the experimentalist may build.
Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco
2011-04-26
Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
The effect of haptic guidance and visual feedback on learning a complex tennis task.
Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert
2013-11-01
While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (a discrete timing task) or learning a velocity profile (a time-critical tracking task). The aim of the present study is to evaluate which feedback condition, visual or haptic guidance, optimizes learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task, learning to start a tennis stroke, and (2) a tracking task, learning to follow a velocity profile. The effect that task difficulty and a subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more-skilled subjects. Additionally, haptic guidance seemed to promote learning in the time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on motor learning of time-critical tasks.
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies have revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we recorded event-related potentials (ERPs) to task-irrelevant line-drawn objects while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLMs) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with the highest evidence and determine the within-trial time course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distributions, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for the optimization of perceptual inference and the associated changes in prediction errors (PEs) and predictions, respectively, over stimulus repetitions during automatic object processing.
Measuring perceived mental workload in children.
Laurie-Rose, Cynthia; Frey, Meredith; Ennis, Aristi; Zamary, Amanda
2014-01-01
Little is known about the mental workload, or psychological costs, associated with information processing tasks in children. We adapted the highly regarded NASA Task Load Index (NASA-TLX) multidimensional workload scale (Hart & Staveland, 1988) to test its efficacy for use with elementary school children. We developed 2 types of tasks, each with 2 levels of demand, to draw differentially on resources from the separate subscales of workload. In Experiment 1, our participants were both typical and school-labeled gifted children recruited from 4th and 5th grades. Results revealed that task type elicited different workload profiles, and task demand directly affected the children's experience of workload. In general, gifted children experienced less workload than typical children. Objective response time and accuracy measures provide evidence for the criterion validity of the workload ratings. In Experiment 2, we applied the same method with 1st- and 2nd-grade children. Findings from Experiment 2 paralleled those of Experiment 1 and support the use of NASA-TLX with even the youngest elementary school children. These findings contribute to the fledgling field of educational ergonomics and attest to the innovative application of workload research. Such research may optimize instructional techniques and identify children at risk for experiencing overload.
Left Posterior Parietal Cortex Participates in Both Task Preparation and Episodic Retrieval
Phillips, Jeffrey S.; Velanova, Katerina; Wolk, David A.; Wheeler, Mark E.
2012-01-01
Optimal memory retrieval depends not only on the fidelity of stored information, but also on the attentional state of the subject. Factors such as mental preparedness to engage in stimulus processing can facilitate or hinder memory retrieval. The current study used functional magnetic resonance imaging (fMRI) to distinguish preparatory brain activity before episodic and semantic retrieval tasks from activity associated with retrieval itself. A catch-trial imaging paradigm permitted separation of neural responses to preparatory task cues and memory probes. Episodic and semantic task preparation engaged a common set of brain regions, including the bilateral intraparietal sulcus (IPS), left fusiform gyrus (FG), and the pre-supplementary motor area (pre-SMA). In the subsequent retrieval phase, the left IPS was among a set of frontoparietal regions that responded differently to old and new stimuli. In contrast, the right IPS responded to preparatory cues with little modulation during memory retrieval. The findings support a strong left-lateralization of retrieval success effects in left parietal cortex, and further indicate that left IPS performs operations that are common to both task preparation and memory retrieval. Such operations may be related to attentional control, monitoring of stimulus relevance, or retrieval. PMID:19285142
Working-memory load and temporal myopia in dynamic decision making.
Worthy, Darrell A; Otto, A Ross; Maddox, W Todd
2012-11-01
We examined the role of working memory (WM) in dynamic decision making by having participants perform decision-making tasks under single-task or dual-task conditions. In 2 experiments participants performed dynamic decision-making tasks in which they chose 1 of 2 options on each trial. The decreasing option always gave a larger immediate reward but caused future rewards for both options to decrease. The increasing option always gave a smaller immediate reward but caused future rewards for both options to increase. In each experiment we manipulated the reward structure such that the decreasing option was the optimal choice in 1 condition and the increasing option was the optimal choice in the other condition. Behavioral results indicated that dual-task participants selected the immediately rewarding decreasing option more often, and single-task participants selected the increasing option more often, regardless of which option was optimal. Thus, dual-task participants performed worse on 1 type of task but better on the other type. Modeling results showed that single-task participants' data were most often best fit by a win-stay, lose-shift (WSLS) rule-based model that tracked differences across trials, and dual-task participants' data were most often best fit by a Softmax reinforcement learning model that tracked recency-weighted average rewards for each option. This suggests that manipulating WM load affects the degree to which participants focus on the immediate versus delayed consequences of their actions and whether they employ a rule-based WSLS strategy, but it does not necessarily affect how well people weigh the immediate versus delayed benefits when determining the long-term utility of each option.
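A sketch of the second model class described above, a Softmax reinforcement learner tracking recency-weighted average rewards; the learning rate, inverse temperature, and reward structure below are illustrative stand-ins for the fitted models in the paper.

```python
# Sketch: Softmax reinforcement learning with recency-weighted average
# rewards, as fit to dual-task participants. Parameters and the toy reward
# structure are illustrative.
import numpy as np

alpha, beta = 0.3, 2.0                    # learning rate, inverse temperature
Q = np.array([0.0, 0.0])                  # expected reward: [decreasing, increasing]

def choose(Q, rng):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice rule
    return rng.choice(2, p=p)

rng = np.random.default_rng(4)
for t in range(100):
    a = choose(Q, rng)
    # Toy dynamic rewards: option 0 pays more now, option 1 grows over time
    reward = (10 - 0.05 * t) if a == 0 else (5 + 0.05 * t)
    Q[a] += alpha * (reward - Q[a])       # recency-weighted average update
print(Q)
```

A win-stay, lose-shift rule, the class that best fit single-task participants, would instead repeat the last choice whenever its reward exceeded the previous trial's and switch otherwise.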
Multisensory perceptual learning is dependent upon task difficulty.
De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T
2016-11-01
There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
TBDQ: A Pragmatic Task-Based Method to Data Quality Assessment and Improvement
Vaziri, Reza; Mohsenzadeh, Mehran; Habibi, Jafar
2016-01-01
Organizations are increasingly accepting data quality (DQ) as a major key to their success. In order to assess and improve DQ, methods have been devised. Many of these methods attempt to raise DQ by directly manipulating low quality data. Such methods operate reactively and are suitable for organizations with highly developed integrated systems. However, there is a lack of a proactive DQ method for businesses with weak IT infrastructure where data quality is largely affected by tasks that are performed by human agents. This study aims to develop and evaluate a new method for structured data, which is simple and practical so that it can easily be applied to real world situations. The new method detects the potentially risky tasks within a process, and adds new improving tasks to counter them. To achieve continuous improvement, an award system is also developed to help with the better selection of the proposed improving tasks. The task-based DQ method (TBDQ) is most appropriate for small and medium organizations, and simplicity in implementation is one of its most prominent features. TBDQ is case studied in an international trade company. The case study shows that TBDQ is effective in selecting optimal activities for DQ improvement in terms of cost and improvement. PMID:27192547
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to limitations such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA) with a well-designed particle position code and fitness function is proposed. A mutation operator which can effectively improve the algorithm’s ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971
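The core particle-swarm update behind DPSO-DA is shown below in its standard continuous form on a toy objective; the paper's discrete position coding, dynamic-alliance fitness function, and mutation operator are omitted.

```python
# Sketch: the standard PSO velocity/position update underlying DPSO-DA,
# minimizing a toy objective. The discrete coding and mutation operator
# from the paper are omitted; parameters are typical textbook values.
import numpy as np

rng = np.random.default_rng(6)
f = lambda x: np.sum(x**2, axis=1)        # stand-in objective (e.g., energy cost)

n_p, dim, w, c1, c2 = 20, 5, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, (n_p, dim))
v = np.zeros((n_p, dim))
pbest, pbest_f = x.copy(), f(x)

for _ in range(100):
    gbest = pbest[np.argmin(pbest_f)]     # swarm-best position
    r1, r2 = rng.random((2, n_p, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    fx = f(x)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]

print(pbest_f.min())
```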
Dynamic Task Performance, Cohesion, and Communications in Human Groups.
Giraldo, Luis Felipe; Passino, Kevin M
2016-10-01
In the study of the behavior of human groups, it has been observed that there is a strong interaction between the cohesiveness of the group, its performance when the group has to solve a task, and the patterns of communication between the members of the group. Developing mathematical and computational tools for the analysis and design of task-solving groups that are not only cohesive but also perform well is of importance in social sciences, organizational management, and engineering. In this paper, we model a human group as a dynamical system whose behavior is driven by a task optimization process and the interaction between subsystems that represent the members of the group interconnected according to a given communication network. These interactions are described as attractions and repulsions among members. We show that the dynamics characterized by the proposed mathematical model are qualitatively consistent with those observed in real-human groups, where the key aspect is that the attraction patterns in the group and the commitment to solve the task are not static but change over time. Through a theoretical analysis of the system we provide conditions on the parameters that allow the group to have cohesive behaviors, and Monte Carlo simulations are used to study group dynamics for different sets of parameters, communication topologies, and tasks to solve.
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since crowdsourcing services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely (i) non-convexity and (ii) task homogeneity. Most of the existing methods consider true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and that of an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures in multiple classifiers.
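One way to realize a convex personal-classifier formulation in this spirit, though not necessarily the paper's exact model: each worker's classifier is a shared weight vector plus a small per-worker deviation, fit jointly by L2-regularized logistic regression on the crowd labels, with no latent true labels.

```python
# Sketch: convex "personal classifier" learning from crowd labels.
# Each worker j effectively gets w_j = w0 + v_j via feature augmentation;
# the L2 penalty keeps the deviations v_j small. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, d, J = 300, 4, 5                        # labeled items, features, workers
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -1.0, 0.5, 0.0])
workers = rng.integers(0, J, n)            # which worker labeled each item
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)  # noisy crowd labels

# Augment features: shared block followed by one block per worker
Z = np.zeros((n, d * (J + 1)))
Z[:, :d] = X
for i in range(n):
    Z[i, d * (1 + workers[i]): d * (2 + workers[i])] = X[i]

clf = LogisticRegression(C=1.0).fit(Z, y)  # convex; one global optimum
w0 = clf.coef_[0, :d]                      # shared model used at prediction time
print(w0)
```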
What aspects of vision facilitate haptic processing?
Millar, Susanna; Al-Attar, Zainab
2005-12-01
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.
Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846
Hazelwood, R Jordan; Armeson, Kent E; Hill, Elizabeth G; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-07-12
The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS.
NASA Astrophysics Data System (ADS)
Mezentsev, Yu A.; Baranova, N. V.
2018-05-01
A universal economic and mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The declared universality makes it possible to take into account, at the system level, both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows. The presented model and the generated control problems are developed within the framework of a unified approach that allows one to implement logical conditions of any complexity and to define the corresponding formal optimization tasks. The conceptual meaning of the criteria and limitations used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity. Results of testing the algorithm on tasks over a wide range of dimensions are presented.
The integrated manual and automatic control of complex flight systems
NASA Technical Reports Server (NTRS)
Schmidt, D. K.
1985-01-01
Pilot/vehicle analysis techniques for optimizing aircraft handling qualities are presented. The analysis approach considered is based on optimal control frequency-domain techniques. These techniques stem from an optimal control approach of a Neal-Smith-like analysis of aircraft attitude dynamics, extended to analyze the flared landing task. Some modifications to the technique are suggested and discussed. An in-depth analysis of the effect of the experimental variables, such as the prefilter, is conducted to gain further insight into the flared landing task for this class of vehicle dynamics.
DQE and system optimization for indirect-detection flat-panel imagers in diagnostic radiology
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey H.; Antonuk, Larry E.
1998-07-01
The performance of indirect-detection flat-panel imagers incorporating CsI:Tl x-ray converters is examined through calculation of the detective quantum efficiency (DQE) under conditions of chest radiography, fluoroscopy, and mammography. Calculations are based upon a cascaded systems model which has demonstrated excellent agreement with empirical signal, noise-power spectra, and DQE results. For each application, the DQE is calculated as a function of spatial frequency and CsI:Tl thickness. A preliminary investigation into the optimization of flat-panel imaging systems is described, wherein the x-ray converter thickness which provides optimal DQE for a given imaging task is estimated. For each application, a number of example tasks involving detection of an object of variable size and contrast against a noisy background are considered. The method described is fairly general and can be extended to account for a variety of imaging tasks. For the specific examples considered, the preliminary results estimate optimal CsI:Tl thicknesses of approximately 450 micrometers (approximately 200 mg/cm2), approximately 320 micrometers (approximately 140 mg/cm2), and approximately 200 micrometers (approximately 90 mg/cm2) for chest radiography, fluoroscopy, and mammography, respectively. These results are expected to depend upon the imaging task as well as upon the quality of available CsI:Tl, and future improvements in scintillator fabrication could result in increased optimal thickness and DQE.
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Links, Jonathan M.; Frey, Eric C.
2016-03-01
The collimator is the primary factor that determines the spatial resolution and noise tradeoff in myocardial perfusion SPECT images. In this paper, the goal was to find the collimator that optimizes the image quality in terms of a perfusion defect detection task. Since the optimal collimator could depend on the level of approximation of the collimator-detector response (CDR) compensation modeled in reconstruction, we performed this optimization for the cases of modeling the full CDR (including geometric, septal penetration and septal scatter responses), the geometric CDR, or no model of the CDR. We evaluated the performance on the detection task using three model observers. Two observers operated on data in the projection domain: the Ideal Observer (IO) and IO with Model-Mismatch (IO-MM). The third observer was an anthropomorphic Channelized Hotelling Observer (CHO), which operated on reconstructed images. The projection-domain observers have the advantage that they are computationally less intensive. The IO has perfect knowledge of the image formation process, i.e. it has a perfect model of the CDR. The IO-MM takes into account the mismatch between the true (complete and accurate) model and an approximate model, e.g. one that might be used in reconstruction. We evaluated the utility of these projection domain observers in optimizing instrumentation parameters. We investigated a family of 8 parallel-hole collimators, spanning a wide range of resolution and sensitivity tradeoffs, using a population of simulated projection (for the IO and IO-MM) and reconstructed (for the CHO) images that included background variability. We simulated anterolateral and inferior perfusion defects with variable extents and severities. The area under the ROC curve was estimated from the IO, IO-MM, and CHO test statistics and served as the figure-of-merit. The optimal collimator for the IO had a resolution of 9-11 mm FWHM at 10 cm, which is poorer resolution than typical collimators used for MPS. When the IO-MM and CHO used a geometric or no model of the CDR, the optimal collimator shifted toward higher resolution than that obtained using the IO and the CHO with full CDR modeling. With the optimal collimator, the IO-MM and CHO using geometric modeling gave similar performance to full CDR modeling. Collimators with poorer resolution were optimal when CDR modeling was used. The agreement of rankings between the IO-MM and CHO confirmed that the IO-MM is useful for optimization tasks when model mismatch is present due to its substantially reduced computational burden compared to the CHO.
NASA Astrophysics Data System (ADS)
Kastens, K. A.; Malyn-Smith, J.; Ippolito, J.; Krumhansl, R.
2014-12-01
In August of 2014, the Oceans of Data Institute at Education Development Center, Inc. (EDC) is convening an expert panel to begin the process of developing an occupational skills profile for the "big-data-enabled professional." We define such a professional as an "individual who works with large complex data sets on a regular basis, asking and answering questions, analyzing trends, and finding meaningful patterns, in order to increase the efficiency of processes, make decisions and predictions, solve problems, generate hypotheses, and/or develop new understandings." The expert panel includes several geophysicists, as well as data professionals from engineering, higher education, analytical journalism, forensics, bioinformatics, and telecommunications. Working with experienced facilitators, the expert panel will create a detailed synopsis of the tasks and responsibilities characteristic of their profession, as well as the skills, knowledge, and behaviors that enable them to succeed in the workplace. After the panel finishes its work, the task matrix and associated narrative will be vetted and validated by a larger group of additional professionals, and then disseminated for use by educators and employers. The process we are using is called DACUM (Developing a Curriculum), adapted by EDC and optimized for emergent professions such as the "big-data-enabled professional." DACUM is a well-established method for analyzing jobs and occupations, commonly used in technical fields to develop curriculum and training programs that reflect authentic work tasks found in scientific and technical workplaces. The premises behind the DACUM approach are that expert workers are better able to describe their own occupation than anyone else; that any job can be described in terms of the tasks that successful workers in the occupation perform; and that all tasks have direct implications for the knowledge, skills, understandings, and attitudes that must be taught and learned in preparation for the targeted career. At AGU, we will describe the process and present the finalized occupational profile.
Turbine Performance Optimization Task Status
NASA Technical Reports Server (NTRS)
Griffin, Lisa W.; Turner, James E. (Technical Monitor)
2001-01-01
The capability to optimize turbine performance and accurately predict unsteady loads will allow for increased reliability, specific impulse (Isp), and thrust-to-weight ratio. The development of a fast, accurate aerodynamic design, analysis, and optimization system is required.
Janssen, Christian P; Brumby, Duncan P; Dowell, John; Chater, Nick; Howes, Andrew
2011-01-01
We report the results of a dual-task study in which participants performed a tracking and typing task under various experimental conditions. An objective payoff function was used to provide explicit feedback on how participants should trade off performance between the tasks. Results show that participants' dual-task interleaving strategy was sensitive to changes in the difficulty of the tracking task and resulted in differences in overall task performance. To test the hypothesis that people select strategies that maximize payoff, a Cognitively Bounded Rational Analysis model was developed. This analysis evaluated a variety of dual-task interleaving strategies to identify the optimal strategy for maximizing payoff in each condition. The model predicts that the region of optimum performance is different between experimental conditions. The correspondence between human data and the prediction of the optimal strategy is found to be remarkably high across a number of performance measures. This suggests that participants were honing their behavior to maximize payoff. Limitations are discussed.
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) method intended for the optimization of engineering systems conducted by distributed specialty groups working concurrently in a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by response surfaces to be accessed by the system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
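The bi-level pattern is easy to illustrate. In the toy sketch below (not the authors' BLISS implementation), a subsystem eliminates its local variable for sampled values of a shared system-level variable z, a quadratic response surface is fit to the subsystem optima, and the system level then optimizes the cheap surrogate alone; the system-level term 0.5(z-1)^2 is an invented stand-in objective.

```python
# Toy illustration of the bi-level pattern (not the authors' BLISS code).
import numpy as np
from scipy.optimize import minimize_scalar

def subsystem(z):
    """Subtask: minimize over the local variable x for a fixed shared z."""
    return minimize_scalar(lambda x: (x - z) ** 2 + 0.1 * x ** 2).fun

zs = np.linspace(-2.0, 2.0, 9)                # sampled shared variable
fs = np.array([subsystem(z) for z in zs])     # subsystem optima
coeffs = np.polyfit(zs, fs, 2)                # quadratic response surface

# System level: optimize the surrogate plus its own objective, never
# re-entering the subsystems.
res = minimize_scalar(lambda z: np.polyval(coeffs, z) + 0.5 * (z - 1.0) ** 2,
                      bounds=(-2.0, 2.0), method="bounded")
print("system-level optimum z ≈", round(float(res.x), 3))
```

Because the subsystem runs are independent per sampled z, they can be farmed out to separate processors, which is the concurrency the method is designed to exploit.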
Carmena, Jose M.
2016-01-01
Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820
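As a rough illustration of spike-event-based adaptation (a schematic stand-in, not the authors' point-process decoder), the sketch below updates a Bernoulli-bin spiking model's decoder weights once per time bin as spikes arrive, rather than refitting on stored batches. The tuning vector and velocity signal are toy assumptions.

```python
# Schematic sketch of event-based decoder adaptation.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -0.8])     # assumed "true" tuning of the unit (toy)
w = np.zeros(2)                    # decoder parameters being adapted
lr = 0.05

for t in range(20000):
    v = rng.normal(size=2)                        # intended velocity (stand-in
                                                  # for the OFC-inferred intent)
    p_spike = 1.0 / (1.0 + np.exp(-(w_true @ v))) # spike probability this bin
    spike = rng.random() < p_spike                # spike event occurred?
    p_hat = 1.0 / (1.0 + np.exp(-(w @ v)))
    w += lr * (spike - p_hat) * v                 # one small update per event

print("adapted weights:", np.round(w, 2), "target:", w_true)
```

The point of the event-based form is visible in the loop structure: the parameters move a little at every bin or spike, so adaptation runs at the signal's own time-scale instead of waiting for a batch boundary.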
The Mass-Longevity Triangle: Pareto Optimality and the Geometry of Life-History Trait Space
Szekely, Pablo; Korem, Yael; Moran, Uri; Mayo, Avi; Alon, Uri
2015-01-01
When organisms need to perform multiple tasks they face a fundamental tradeoff: no phenotype can be optimal at all tasks. This situation was recently analyzed using Pareto optimality, showing that tradeoffs between tasks lead to phenotypes distributed on low dimensional polygons in trait space. The vertices of these polygons are archetypes—phenotypes optimal at a single task. This theory was applied to examples from animal morphology and gene expression. Here we ask whether Pareto optimality theory can apply to life history traits, which include longevity, fecundity and mass. To comprehensively explore the geometry of life history trait space, we analyze a dataset of life history traits of 2105 endothermic species. We find that, to a first approximation, life history traits fall on a triangle in log-mass log-longevity space. The vertices of the triangle suggest three archetypal strategies, exemplified by bats, shrews and whales, with specialists near the vertices and generalists in the middle of the triangle. To a second approximation, the data lies in a tetrahedron, whose extra vertex above the mass-longevity triangle suggests a fourth strategy related to carnivory. Each animal species can thus be placed in a coordinate system according to its distance from the archetypes, which may be useful for genome-scale comparative studies of mammalian aging and other biological aspects. We further demonstrate that Pareto optimality can explain a range of previous studies which found animal and plant phenotypes which lie in triangles in trait space. This study demonstrates the applicability of multi-objective optimization principles to understand life history traits and to infer archetypal strategies that suggest why some mammalian species live much longer than others of similar mass. PMID:26465336
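The archetype coordinate system lends itself to a small computation: barycentric weights of a species relative to the three vertices of the triangle in (log mass, log longevity) space. The vertex positions below are illustrative placeholders, not values from the paper's dataset.

```python
# Place a species in the archetype coordinate system via barycentric weights.
import numpy as np

A = np.array([1.0, 1.6])   # "bat-like" vertex: low mass, long life (assumed)
B = np.array([0.5, 0.2])   # "shrew-like" vertex: low mass, short life (assumed)
C = np.array([7.0, 2.0])   # "whale-like" vertex: huge mass, long life (assumed)

def barycentric(p, a, b, c):
    """Weights (wa, wb, wc) with wa+wb+wc = 1 expressing p in the triangle."""
    T = np.column_stack([a - c, b - c])
    w = np.linalg.solve(T, p - c)
    return np.array([w[0], w[1], 1.0 - w.sum()])

species = np.array([2.0, 1.0])   # hypothetical animal: (log10 g, log10 yr)
w = barycentric(species, A, B, C)
print("archetype weights (bat, shrew, whale):", np.round(w, 2))
# all weights in [0, 1]  <=>  the species lies inside the triangle
```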
De Groote, Friedl; Jonkers, Ilse; Duysens, Jacques
2014-01-01
Finding the muscle activity that generates a given motion is a redundant problem, since there are many more muscles than degrees of freedom. The control strategies determining muscle recruitment from a redundant set are still poorly understood. One theory of motor control suggests that motion is produced by activating a small number of muscle synergies, i.e., muscle groups that are activated in a fixed ratio by a single input signal. Because of the reduced number of input signals, synergy-based control is low dimensional. However, a major criticism of the theory of synergy-based control is that muscle synergies might reflect task constraints rather than a neural control strategy. Another theory of motor control suggests that muscles are recruited by optimizing performance. Optimization of performance has been widely used to calculate the muscle recruitment underlying a given motion while assuming independent recruitment of muscles. If synergies indeed determine muscle recruitment, optimization approaches that do not model synergy-based control could result in muscle activations that do not show the synergistic muscle action observed through electromyography (EMG). If, however, synergistic muscle action results from performance optimization and task constraints (joint kinematics and external forces), such optimization approaches are expected to produce low-dimensional synergistic muscle activations similar to EMG-based synergies. We calculated the muscle recruitment underlying experimentally measured gait patterns by optimizing performance while assuming independent recruitment of muscles. We found that the muscle activations calculated without any reference to synergies can be accurately explained by, on average, four synergies. These synergies are similar to EMG-based synergies. We therefore conclude that task constraints and performance optimization explain synergistic muscle recruitment from a redundant set of muscles.
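The two-step logic (optimize recruitment with no synergy assumption, then test how low-dimensional the result is) can be sketched as follows; the moment arms, moment profile, and activation cost are synthetic stand-ins for the study's musculoskeletal model.

```python
# Step 1: per-frame static optimization, independent muscle recruitment.
# Step 2: non-negative matrix factorization to count explanatory synergies.
import numpy as np
from scipy.optimize import minimize
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
R = rng.uniform(0.5, 1.5, size=(2, 8))     # moment arms: 2 DoF, 8 muscles
moments = np.abs(np.sin(np.linspace(0, 2 * np.pi, 50)))[:, None] * np.array([1.0, 0.5])

acts = []
for m in moments:
    res = minimize(lambda a: np.sum(a ** 2),        # effort-like cost
                   x0=np.full(8, 0.1),
                   bounds=[(0.0, 1.0)] * 8,
                   constraints={"type": "eq", "fun": lambda a, m=m: R @ a - m})
    acts.append(res.x)
A = np.clip(np.array(acts), 0.0, None)     # time x muscle activation matrix

for k in (2, 3, 4):
    nmf = NMF(n_components=k, max_iter=2000, init="nndsvda").fit(A)
    recon = nmf.inverse_transform(nmf.transform(A))
    vaf = 1.0 - np.var(A - recon) / np.var(A)
    print(f"{k} synergies account for {vaf:.3f} of the variance")
```

If a handful of components reconstructs the optimal activations well, low dimensionality has emerged from the task constraints and the cost function alone, which is the paper's central observation.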
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition across two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques for control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequence, understanding and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
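A minimal sketch of the PMP idea for a planar two-link arm: the goal induces an attractive force field on the end effector, the transposed Jacobian maps that field into joint space, and the arm relaxes toward the goal with no inverse kinematics and no cost function. Link lengths, stiffness, and the unit admittance are illustrative values.

```python
# Passive motion paradigm sketch for a planar 2-link arm.
import numpy as np

L1, L2, K, dt = 1.0, 1.0, 2.0, 0.01

def fkin(q):
    """Forward kinematics: joint angles -> end-effector position."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q = np.array([0.3, 0.5])
goal = np.array([0.5, 1.2])
for _ in range(2000):
    F = K * (goal - fkin(q))     # attractive force field induced by the goal
    tau = jacobian(q).T @ F      # field projected into joint space (J^T, not J^-1)
    q = q + dt * tau             # "animate" the body schema, unit admittance

print("end effector reaches", np.round(fkin(q), 3), "goal", goal)
```

Note the absence of any matrix inversion: redundancy is resolved implicitly by the relaxation dynamics, which is exactly the contrast with OCT that the abstract draws.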
Automated Cloud Observation for Ground Telescope Optimization
NASA Astrophysics Data System (ADS)
Lane, B.; Jeffries, M. W., Jr.; Therien, W.; Nguyen, H.
As the number of man-made objects placed in space each year increases with commercial, academic, and industrial advancements, the number of objects that must be detected, tracked, and characterized continues to grow at an exponential rate. Commercial companies, such as ExoAnalytic Solutions, have deployed ground-based sensors to maintain track custody of these objects. The ExoAnalytic Global Telescope Network (EGTN) collects over 10 million unique observations per month (as of September 2017). Currently, the EGTN does not collect data optimally on nights with significant cloud cover. However, most of these nights prove to be only partially cloudy, leaving clear portions of the sky for EGTN sensors to observe, and it is useful for a telescope to exploit these clear areas to continue resident space object (RSO) observation. By dynamically updating the tasking as cloud positions vary, the number of observations could increase dramatically through improved persistence, cadence, and revisit. This paper discusses the algorithms recently implemented within the EGTN, including their motivation, need, and general design. Section 2 discusses how automated image processing and various edge detection methods, including Canny, Sobel, and Marching Squares, applied to real-time large-FOV images of the sky, enhance the tasking and scheduling of a ground-based telescope, with implementations on a single telescope and extensions to multiple telescopes. Section 3 presents results of applying these algorithms to the EGTN in real time, compared against non-optimized EGTN tasking. Finally, Section 4 explores future work applying these methods throughout the EGTN and to other optical telescopes.
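The edge-detection-based tasking idea of Section 2 can be sketched roughly as follows (an illustration, not ExoAnalytic's production code): detect cloud boundaries in a wide-field frame, dilate them into an avoidance mask, and filter the task list to targets in clear sky. The frame below is synthetic; a real system would grab a camera image and would also fill enclosed cloud interiors rather than masking boundaries alone.

```python
# Cloud-aware target filtering sketch using Canny edge detection.
import cv2
import numpy as np

sky = np.zeros((480, 640), np.uint8)          # synthetic all-sky frame
cv2.circle(sky, (200, 240), 90, 180, -1)      # bright blob standing in for a cloud

edges = cv2.Canny(sky, 50, 150)               # cloud boundary pixels
cloud_mask = cv2.dilate(edges, np.ones((25, 25), np.uint8))

def observable(px, py):
    """True if a tasked RSO's pixel position lies outside the masked zone."""
    return cloud_mask[py, px] == 0

tasks = [(285, 240), (500, 100)]              # hypothetical RSO pixel positions
clear = [t for t in tasks if observable(*t)]
print(len(clear), "of", len(tasks), "targets currently tasked as unobscured")
```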
NASA Astrophysics Data System (ADS)
Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.
2018-01-01
A set of mathematical models was developed for calculating the reliability indexes (RI) of structurally complex multifunctional combined installations in heat and power supply systems. Reliability of energy supply is considered a necessary condition for the creation and operation of such systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for creating and operating an emergency capacity reserve. Rationing of the RI of industrial heat supply is based on the concept of a technological safety margin for the production processes involved. Rationed RI values for the heat supply of communal consumers are defined from the air temperature level inside the heated premises. The complex allows a number of practical tasks of ensuring reliable heat supply to consumers to be solved. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat and power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks in optimizing the schemes and parameters of combined heat and power plants and systems, as well as in determining the efficiency of various redundancy methods to ensure a specified reliability of power supply.
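The economic trade-off behind the optimal coefficient F can be sketched in a few lines: choose the reserve that minimizes the cost of holding emergency capacity plus the expected consumer loss from unserved energy. The exponential outage model and all cost figures below are assumptions for illustration only, not values from the paper.

```python
# Minimal reserve-sizing sketch under an assumed exponential outage model.
import numpy as np

demand = 100.0        # MW nominal load (assumed)
c_reserve = 5.0       # $ per MW·h of held emergency reserve (assumed)
c_loss = 2000.0       # $ per MWh of unserved energy (assumed)
mean_outage = 15.0    # MW, mean exponential outage magnitude (assumed)

def expected_unserved(r):
    """E[(outage - r)+] in MW for an exponentially distributed outage."""
    return mean_outage * np.exp(-r / mean_outage)

reserves = np.linspace(0.0, 120.0, 241)
hourly_cost = reserves * c_reserve + expected_unserved(reserves) * c_loss
r_opt = reserves[np.argmin(hourly_cost)]
print(f"optimal reserve ≈ {r_opt:.1f} MW; coefficient F ≈ {(demand + r_opt) / demand:.2f}")
```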
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cortright, Randy; Rozmiarek, Robert; Dally, Brice
2017-08-31
The objective of this project was to develop an improved multistage process for the hydrothermal liquefaction (HTL) of biomass to serve as a new front-end deconstruction process ideally suited to feed Virent's well-proven catalytic technology, which is already being scaled up. This process produced water-soluble, partially de-oxygenated intermediates that are ideally suited for catalytic finishing to fungible distillate hydrocarbons. Through this project, Virent, with its partners, demonstrated the conversion of pine wood chips to drop-in hydrocarbon distillate fuels using a multi-stage fractional conversion system integrated with Virent's BioForming® process. The majority of the work was in the liquefaction task and included temperature scoping, solvent optimization, and separations.
Array automated assembly task, phase 2. Low cost silicon solar array project
NASA Technical Reports Server (NTRS)
Rhee, S. S.; Jones, G. T.; Allison, K. T.
1978-01-01
Several modifications instituted in the wafer surface preparation process served to significantly reduce the process cost to 1.55 cents per peak watt (in 1975 cents). Performance verification tests of a laser scanning system showed a limited capability to detect hidden cracks or defects, but with potential equipment modifications this cost-effective system could be rendered suitable for such applications. Installation of an electroless nickel plating system was completed, along with an optimization of the wafer plating process. The solder coating and flux removal process verification test was completed. An optimum temperature range of 500-550 C was found to produce uniform solder coating, provided that a modified dipping procedure is utilized. Finally, the construction of the spray-on dopant equipment was completed.
2018-05-01
Reports an error in "Objectifying the subjective: Building blocks of metacognitive experiences in conflict tasks" by Laurence Questienne, Anne Atas, Boris Burle and Wim Gevers (Journal of Experimental Psychology: General, 2018[Jan], Vol 147[1], 125-131). In this article, the second sentence of the second paragraph of the Data Processing section is incorrect due to a production error. The second sentence should read as follows: RTs slower/shorter than Median ± 3 Median Absolute Deviations computed by participant were removed. (The following abstract of the original article appeared in record 2017-52065-001.) Metacognitive appraisals are essential for optimizing our information processing. In conflict tasks, metacognitive appraisals can result from different interrelated features (e.g., motor activity, visual awareness, response speed). Thanks to an original approach combining behavioral and electromyographic measures, the current study objectified the contribution of three features (reaction time [RT], motor hesitation with and without response competition, and visual congruency) to the subjective experience of urge-to-err in a priming conflict task. Both RT and motor hesitation with response competition were major determinants of metacognitive appraisals. Importantly, motor hesitation in the absence of response competition and visual congruency had limited effect. Because science aims to rely on objectivity, subjective experiences are often discarded from scientific inquiry. The current study shows that subjectivity can be objectified. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
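For concreteness, the corrected exclusion rule reads naturally as a median ± 3 MAD filter applied per participant; a minimal sketch follows (the "±" threshold form is our reading of the corrected sentence, and the data are invented):

```python
# Per-participant reaction-time outlier exclusion by median absolute deviation.
import numpy as np

def mad_filter(rts, k=3.0):
    """Keep RTs within k median absolute deviations of the median."""
    rts = np.asarray(rts, dtype=float)
    med = np.median(rts)
    mad = np.median(np.abs(rts - med))
    return rts[np.abs(rts - med) <= k * mad]

print(mad_filter([412, 455, 430, 1530, 442, 120, 465]))  # drops 1530 and 120
```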
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and the edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography, such as source mask optimization (SMO) to find the optimal off-axis illumination, and optical proximity correction (OPC) combined with model-based SRAF placement, are applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, which allows only two spectral beams to interfere. We show how to analytically derive the only valid, geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch with the highest susceptibility to defocus. These challenges turn layout correction into a complex optimization task, demanding an optimization that finds a solution with optimal process stability while taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF), and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally because negative resist profile taper angles perturb CD-at-bottom characterization by scanning electron microscope (SEM) measurements. The high resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset; we use rigorous simulation models to provide that predictive performance. We discuss the need for a rigorous mask optimization process for DRAM contact cell layout that yields mask layouts optimal in process performance, mask manufacturability, and accuracy. In this paper, we show the step-by-step process from analytical illumination source derivation, through an NTD- and application-tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work is verified with simulation and experimental results on wafer.
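As a back-of-envelope check of the two-beam regime quoted above: for dense patterns, half-pitch = k1·λ/NA. Assuming an ArF immersion scanner (λ = 193 nm, NA = 1.35; our assumption, not stated in the abstract), k1 = 0.28 corresponds to roughly 40 nm half-pitch, deep in the regime where only two diffraction orders fit through the pupil.

```python
# Half-pitch implied by the quoted k1, under assumed scanner parameters.
lam, na, k1 = 193.0, 1.35, 0.28   # nm, numerical aperture, process factor
half_pitch = k1 * lam / na
print(f"half-pitch ≈ {half_pitch:.1f} nm")   # ≈ 40.0 nm
```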
Review of Findings for Human Performance Contribution to Risk in Operating Events
2002-03-01
and loss of DC power. Key to this event was failure to control setpoints on safety-related equipment and failure to maintain the load tap changer... Therefore, "to optimize task execution at the job site, it is important to align organizational processes and values." Effective team skills are an... reactor was blocked and the water level rapidly dropped to the automatic low-level scram setpoint. Human Performance Issues: Control rods were fully
1980-09-01
morphology appears to be effective on an unstructured problem and provides a useful vehicle for clearly defining the functions and tasks that meet the needs... approach used is a structured decision process which was successfully demonstrated in FY 78 on relatively simple mechanical equipment and has now been... including achievement of practical conclusions from the large-scale optimization procedures. This design morphology provided a useful vehicle for
Flexible modulation of risk attitude during decision-making under quota.
Fujimoto, Atsushi; Takahashi, Hidehiko
2016-10-01
Risk attitude is often regarded as an intrinsic parameter of individual personality. However, ethological studies have reported state-dependent strategy optimization irrespective of individual preference. To synthesize the two contrasting literatures, we developed a novel gambling task that dynamically manipulated the quota severity (the outcome required to clear the task) over a course of choice trials, and conducted a task-fMRI study in human participants. The participants showed their individual risk preference when they had no quota constraint ('individual-preference mode'), while they adopted a state-dependent optimal strategy when they needed to achieve a quota ('strategy-optimization mode'). fMRI analyses illustrated that the interplay among prefrontal areas and salience-network areas reflected the quota severity and the utilization of the optimal strategy, shedding light on the neural substrates of the quota-dependent risk attitude. Our results demonstrate the complex nature of risk-sensitive decision-making and may provide a new perspective for understanding problematic risky behaviors in humans. Copyright © 2016 Elsevier Inc. All rights reserved.
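The 'strategy-optimization mode' has a natural normative counterpart that can be written as a small dynamic program (an illustration, not the authors' task code): with n trials left and g points still needed, choose whichever option maximizes the probability of clearing the quota. The safe and risky payoff structure below is assumed.

```python
# Quota-dependent optimal choice via dynamic programming.
# Safe option: +1 point surely. Risky option: +3 with p = 0.4, else 0 (assumed).
from functools import lru_cache

@lru_cache(maxsize=None)
def p_clear(n, g):
    """Probability of clearing a remaining quota g in n trials, playing optimally."""
    if g <= 0:
        return 1.0
    if n == 0:
        return 0.0
    safe = p_clear(n - 1, g - 1)
    risky = 0.4 * p_clear(n - 1, g - 3) + 0.6 * p_clear(n - 1, g)
    return max(safe, risky)

def best_choice(n, g):
    safe = p_clear(n - 1, g - 1)
    risky = 0.4 * p_clear(n - 1, g - 3) + 0.6 * p_clear(n - 1, g)
    return "risky" if risky > safe else "safe"

# A mild quota favors the safe option; a severe one flips the policy:
print(best_choice(5, 4), best_choice(5, 12))   # -> safe risky
```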
NASA Astrophysics Data System (ADS)
Azarpour, Masoumeh; Enzner, Gerald
2017-12-01
Binaural noise reduction, with applications in hearing aids for instance, has been a very significant challenge. The task requires optimal utilization of the available microphone signals, both for estimating the ambient noise characteristics and for the optimal filtering that separates the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
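The cue-preservation argument for common-gain filtering is easy to demonstrate: estimate a single real-valued gain per frequency bin and apply it identically to both ears, so interaural level and phase differences pass through unchanged. The sketch below uses a Wiener-style gain as a stand-in for the paper's MMSE estimator; the signals and noise PSD are synthetic.

```python
# Common-gain enhancement sketch: one gain per bin, applied to both channels.
import numpy as np

def common_gain_enhance(L, R, noise_psd, floor=0.1):
    """L, R: complex STFT frames (one per ear); noise_psd: per-bin estimate."""
    sig_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)    # binaural average
    snr = np.maximum(sig_psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    gain = np.maximum(snr / (1.0 + snr), floor)          # real gain per bin
    return gain * L, gain * R                            # same gain: cues kept

# Toy usage on one frame with an artificial interaural phase cue:
rng = np.random.default_rng(3)
L = rng.normal(size=257) + 1j * rng.normal(size=257)
R = L * np.exp(1j * 0.2)                                 # ITD-like phase offset
Lh, Rh = common_gain_enhance(L, R, noise_psd=np.full(257, 0.5))
print(np.allclose(np.angle(Rh / Lh), 0.2))               # interaural phase intact
```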
Radeva, Tsvetomira; Dornhaus, Anna; Lynch, Nancy; Nagpal, Radhika; Su, Hsin-Hao
2017-12-01
Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a 'surplus' set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically.
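The third result, that surplus workers speed up allocation, can be reproduced qualitatively with a toy agent-based sketch (our simplification, not the paper's formal model): idle workers join the task with the largest visible deficit under noisy sensing, and allocation time is measured until all demands are met.

```python
# Toy task-allocation simulation: effect of work-force surplus on speed.
import random

def allocation_time(n_workers, demands, rng=random.Random(4)):
    assigned = [0] * len(demands)
    workers = [None] * n_workers                 # None = currently inactive
    for step in range(1, 10_000):
        for i, task in enumerate(workers):
            if task is None:
                deficits = [d - a for d, a in zip(demands, assigned)]
                j = max(range(len(demands)), key=lambda k: deficits[k])
                if deficits[j] > 0 and rng.random() < 0.5:   # noisy sensing
                    workers[i], assigned[j] = j, assigned[j] + 1
        if all(a >= d for a, d in zip(assigned, demands)):
            return step
    return None

demands = [20, 30, 10]
for n in (60, 90, 180):               # exact-fit versus surplus work force
    print(n, "workers ->", allocation_time(n, demands), "steps")
```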
Is self-generated thought a means of social problem solving?
Ruby, Florence J. M.; Smallwood, Jonathan; Sackur, Jerome; Singer, Tania
2013-01-01
Appropriate social problem solving constitutes a critical skill for individuals and may rely on processes important for self-generated thought (SGT). The aim of the current study was to investigate the link between SGT and social problem solving. Using the Means-End Problem Solving task (MEPS), we assessed participants' abilities to resolve daily social problems in terms of overall efficiency and the number of relevant means they provided to reach the given solution. Participants also performed a non-demanding choice reaction time task (CRT) and a moderately demanding working memory task (WM) as contexts in which to measure their SGT (assessed via thought sampling). We found that although overall SGT was associated with lower MEPS efficiency, it was also associated with a higher number of relevant means, perhaps because both depend on the capacity to generate cognition that is independent of the here and now. The specific content of SGT did not differentially predict individual differences in social problem solving, suggesting that the relationship may depend on SGT regardless of its content. In addition, we found that performance on the WM task, but not the CRT, was linked to better overall MEPS performance, suggesting that individuals good at social processing are also distinguished by their capacity to constrain attention to an external task. Our results provide novel evidence that the capacity for SGT is implicated in the process by which solutions to social problems are generated, although optimal problem solving may be achieved by individuals who display a suitable balance between SGT and cognition derived from perceptual input. PMID:24391621
Cerami, Chiara; Dodich, Alessandra; Iannaccone, Sandro; Marcone, Alessandra; Lettieri, Giada; Crespi, Chiara; Gianolli, Luigi; Cappa, Stefano F.; Perani, Daniela
2015-01-01
The behavioural variant of frontotemporal dementia (bvFTD) is a rare disease mainly affecting the social brain. FDG-PET fronto-temporal hypometabolism is a supportive feature for the diagnosis. It may also provide specific functional metabolic signatures for altered socio-emotional processing. In this study, we evaluated emotion recognition and attribution deficits and FDG-PET cerebral metabolic patterns at the group and individual levels in a sample of sporadic bvFTD patients, exploring the cognitive-functional correlations. Seventeen probable mild bvFTD patients (10 male and 7 female; age 67.8±9.9) were administered standardized and validated versions of social cognition tasks assessing the recognition of basic emotions and the attribution of emotions and intentions (i.e., the Ekman 60-Faces test, Ek60F, and the Story-based Empathy Task, SET). FDG-PET was analysed using an optimized voxel-based SPM method at the single-subject and group levels. Severe deficits of emotion recognition and processing characterized the bvFTD condition. At the group level, metabolic dysfunction in the right amygdala, temporal pole, and middle cingulate cortex was highly correlated with emotion recognition and attribution performance. At the single-subject level, however, heterogeneous impairments on the social cognition tasks emerged, and different metabolic patterns, involving limbic structures and prefrontal cortices, were also observed. The derangement of a right limbic network is associated with altered socio-emotional processing in bvFTD patients, but different hypometabolic FDG-PET patterns and heterogeneous performances on social tasks exist at the individual level. PMID:26513651
Research of grasping algorithm based on scara industrial robot
NASA Astrophysics Data System (ADS)
Peng, Tao; Zuo, Ping; Yang, Hai
2018-04-01
As the tobacco industry grows and faces competition from the international tobacco giants, efficient logistics service is one of the key factors. Completing tobacco sorting tasks efficiently and economically is the goal of tobacco sorting optimization research. Whereas current cigarette distribution systems use a single line to sort a single brand, this article adopts a single line to realize the sorting of cigarettes of different brands. Using a SCARA robot with a dedicated algorithm for sorting and packaging, the optimized scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and markedly improving production efficiency.
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making.
Prezenski, Sabine; Brechmann, André; Wolff, Susann; Russwinkel, Nele
2017-01-01
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model such that it can be transferred to other domains of comparable dynamic decision tasks.
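The model's control structure, trying simple one-feature rules first and escalating to conjunctions on repeated negative feedback, can be sketched schematically (this is an illustration of the described strategy-switching logic, not the authors' ACT-R code; all parameters are assumptions):

```python
# Schematic rule-switching learner for a conjunctive category with reversal.
import random

def make_hypotheses():
    one_feature = [lambda s, i=i, v=v: s[i] == v
                   for i in (0, 1) for v in (0, 1)]
    two_feature = [lambda s, a=a, b=b: s == (a, b)
                   for a in (0, 1) for b in (0, 1)]
    return one_feature + two_feature        # simple rules are tried first

def run(trials=300, reverse_at=150, rng=random.Random(5)):
    hyps, h, streak, hits = make_hypotheses(), 0, 0, 0
    for t in range(trials):
        stim = (rng.randint(0, 1), rng.randint(0, 1))
        in_category = stim == (0, 1)                     # target conjunction
        feedback_truth = in_category != (t >= reverse_at)  # reversal phase
        if hyps[h](stim) == feedback_truth:
            hits, streak = hits + 1, 0
        else:
            streak += 1
            if streak >= 3:                  # repeated negative feedback
                h, streak = (h + 1) % len(hyps), 0       # switch hypothesis
    return hits / trials

print("overall accuracy:", round(run(), 2))
```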