Research on distributed task planning model for multi-autonomous underwater vehicle (MAUV)
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to solve for the optimal multi-AUV task planning scheme. In an uncertain marine environment, the rolling time domain control technique is used to perform numerical optimization over a narrowed time range. Rolling time domain control is among the better task planning techniques: it can greatly reduce the computational workload and realize the tradeoff among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the STDQABC algorithm. The simulation results demonstrate that STDQABC converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution. PMID:29186166
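The abstract does not spell out the STDQABC update rules, so as orientation only, here is a minimal sketch of the classical artificial bee colony (ABC) loop that the quantum, rolling-time-domain variant builds on; the cost function, bounds, and parameter values are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def abc_minimize(cost, bounds, n_food=20, limit=30, max_iter=200, rng=None):
    """Minimal classical ABC loop (employed, onlooker, and scout phases).

    `cost` maps a candidate vector to a scalar; `bounds` is (low, high) arrays.
    Generic sketch only -- not the paper's quantum/rolling-horizon variant.
    """
    rng = np.random.default_rng(rng)
    low, high = map(np.asarray, bounds)
    dim = low.size
    foods = rng.uniform(low, high, (n_food, dim))   # food sources = candidate plans
    costs = np.array([cost(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food)                    # random partner (sketch: may equal i)
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, low, high)
        c = cost(cand)
        if c < costs[i]:                            # greedy replacement
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                     # employed bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + costs - costs.min())     # onlookers prefer better sources
        probs = fit / fit.sum()
        for i in rng.choice(n_food, n_food, p=probs):
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:       # scout phase: abandon stale sources
            foods[i] = rng.uniform(low, high)
            costs[i] = cost(foods[i])
            trials[i] = 0
    best = costs.argmin()
    return foods[best], costs[best]
```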
Computer Simulation of a Multiaxis Air-to-Air Tracking Task Using the Optimal Pilot Control Model.
1982-12-01
[Extraction residue: front-matter table of contents. Recoverable headings: Chapter 1, Introduction (Motivation); Chapter 2, Optimal Pilot Control Model and Control Synthesis; Pitch Tracking Task; Multiaxis Tracking; Chapter 3, Simulation System (System Hardware).]
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulated production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables, and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
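As a loose illustration of the screening step described above, the snippet below fits a linear response model to a two-level factorial design and ranks the main effects. The variable names and response values are invented placeholders; in the real framework the responses would come from UTCHEM runs.

```python
import itertools
import numpy as np

# Hypothetical two-level full factorial over three design variables
# (surfactant slug size, polymer concentration, salinity).
levels = [-1, 1]
design = np.array(list(itertools.product(levels, repeat=3)), dtype=float)
recovery = np.array([0.31, 0.35, 0.33, 0.41, 0.30, 0.38, 0.36, 0.47])  # placeholder data

X = np.column_stack([np.ones(len(design)), design])   # intercept + main effects
coef, *_ = np.linalg.lstsq(X, recovery, rcond=None)
for name, c in zip(["intercept", "slug size", "polymer conc", "salinity"], coef):
    print(f"{name:>13}: {c:+.3f}")                    # larger |effect| => screen in
```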
A Framework to Design and Optimize Chemical Flooding Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulated production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables, and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2004-11-01
The goal of this proposed research is to provide an efficient and user-friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user-friendly interface to identify the variables that have the most impact on oil recovery, using the concepts of experimental design and response surface maps, (2) the UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulated production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objective of Task 1 is to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of Task 2 is to incorporate the UTCHEM reservoir simulator and the modules with the strategic variables, and to develop the response surface maps that identify the significant variables from each module. The objective of Task 3 is to develop an economic model designed specifically for the chemical processes targeted in this proposal and to interface it with the UTCHEM production output. Task 4 covers validation of the framework and simulations of oil reservoirs to screen, design, and optimize the chemical processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1995-12-31
The objective of this research is to develop cost-effective surfactant flooding technology by using simulation studies to evaluate and optimize alternative design strategies, taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. In this quarter, we have continued working on Task 2 to optimize surfactant flooding design and have added economic analysis to the optimization process. An economic model was developed using a spreadsheet and the discounted cash flow (DCF) method of economic analysis. The model was designed specifically for a domestic onshore surfactant flood and has been used to economically evaluate previous work that used a technical approach to optimization. The DCF model outputs common economic decision-making criteria, such as net present value (NPV), internal rate of return (IRR), and payback period.
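The three DCF outputs named above (NPV, IRR, payback period) are standard and compact to compute; a sketch with invented cash flows, since the spreadsheet model's actual inputs are not given in the report:

```python
import numpy as np

def npv(rate, cashflows):
    """Net present value of yearly cash flows, cashflows[0] at time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection on NPV's sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    """First year the cumulative cash flow turns non-negative."""
    cum = np.cumsum(cashflows)
    hits = np.where(cum >= 0)[0]
    return int(hits[0]) if hits.size else None

flows = [-5.0e6, 1.2e6, 1.8e6, 2.1e6, 2.4e6, 2.4e6]  # placeholder flood economics
print(npv(0.10, flows), irr(flows), payback_period(flows))
```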
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a motor-skill learning strategy for robotic contact tasks based on human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory, with reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-squares filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes contact-task performance under uncertain environmental conditions.
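For reference, the quantity being learned is the parameter set of the standard virtual spring-damper impedance law; a textbook sketch of that control law follows (the paper's equilibrium-point trajectories and the natural actor-critic update itself are not reproduced here, and the K, B values are placeholders):

```python
import numpy as np

def impedance_force(x, v, x_eq, v_eq, K, B):
    """Virtual spring-damper: F = K (x_eq - x) + B (v_eq - v).

    K and B are the stiffness and damping matrices that the reinforcement
    learner would adapt per task phase.
    """
    return K @ (x_eq - x) + B @ (v_eq - v)

K = np.diag([400.0, 400.0])   # N/m, hypothetical planar stiffness
B = np.diag([40.0, 40.0])     # N*s/m, hypothetical damping
f = impedance_force(np.array([0.10, 0.02]), np.zeros(2),
                    np.array([0.12, 0.00]), np.zeros(2), K, B)
print(f)                       # commanded end-effector force
```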
Automatic CT simulation optimization for radiation therapy: A general strategy.
Li, Hua; Yu, Lifeng; Anastasio, Mark A; Chen, Hsin-Chen; Tan, Jun; Gay, Hiram; Michalski, Jeff M; Low, Daniel A; Mutic, Sasa
2014-03-01
In radiation therapy, x-ray computed tomography (CT) simulation protocol specifications should be driven by the treatment planning requirements in lieu of duplicating diagnostic CT screening protocols. The purpose of this study was to develop a general strategy that allows for automatically, prospectively, and objectively determining the optimal patient-specific CT simulation protocols based on radiation-therapy goals, namely, maintenance of contouring quality and integrity while minimizing patient CT simulation dose. The authors proposed a general prediction strategy that provides automatic optimal CT simulation protocol selection as a function of patient size and treatment planning task. The optimal protocol is the one that delivers the minimum dose required to provide a CT simulation scan that yields accurate contours. Accurate treatment plans depend on accurate contours in order to conform the dose to actual tumor and normal organ positions. An image quality index, defined to characterize how simulation scan quality affects contour delineation, was developed and used to benchmark the contouring accuracy and treatment plan quality within the prediction strategy. A clinical workflow was developed to select the optimal CT simulation protocols incorporating patient size, target delineation, and radiation dose efficiency. An experimental study using an anthropomorphic pelvis phantom with added-bolus layers was used to demonstrate how the proposed prediction strategy could be implemented and how the optimal CT simulation protocols could be selected for prostate cancer patients based on patient size and treatment planning task. Clinical IMRT prostate treatment plans for seven CT scans with varied image quality indices were separately optimized and compared to verify the trace of target and organ dosimetry coverage. Based on the phantom study, the optimal image quality index for accurate manual prostate contouring was 4.4. The optimal tube potentials for patient sizes of 38, 43, 48, 53, and 58 cm were 120, 140, 140, 140, and 140 kVp, respectively, and the corresponding minimum CTDIvol values for achieving the optimal image quality index of 4.4 were 9.8, 32.2, 100.9, 241.4, and 274.1 mGy, respectively. For patients with lateral sizes of 43-58 cm, 120-kVp scan protocols yielded up to 165% greater radiation dose relative to 140-kVp protocols, and 140-kVp protocols always yielded a greater image quality index compared to 120-kVp protocols at the same dose level. The trace of target and organ dosimetry coverage and the γ passing rates of seven IMRT dose distribution pairs indicated the feasibility of the proposed image quality index for the prediction strategy. A general strategy to predict the optimal CT simulation protocols in a flexible and quantitative way was developed that takes into account patient size, treatment planning task, and radiation dose. The experimental study indicated that the optimal CT simulation protocol and the corresponding radiation dose varied significantly for different patient sizes, contouring accuracy, and radiation treatment planning tasks.
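Given the lookup table reported in the abstract, protocol selection reduces to choosing the lowest-dose entry that meets the target image quality index; a sketch (table values are copied from the abstract, while the rounding rule for intermediate patient sizes is an assumed simplification of the authors' workflow):

```python
# (patient lateral size cm, kVp, minimum CTDIvol in mGy) at image quality index 4.4,
# as reported in the abstract for manual prostate contouring
protocols = [(38, 120, 9.8), (43, 140, 32.2), (48, 140, 100.9),
             (53, 140, 241.4), (58, 140, 274.1)]

def select_protocol(patient_size_cm):
    """Pick the smallest tabulated size >= the patient's (assumed rounding rule)."""
    for size, kvp, ctdi in protocols:
        if patient_size_cm <= size:
            return kvp, ctdi
    return protocols[-1][1:]  # fall back to the largest-size protocol

print(select_protocol(45))   # -> (140, 100.9)
```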
Chowdhary, A G; Challis, J H
2001-07-07
A series of overarm throws, constrained to the parasagittal plane, were simulated using a muscle model actuated two-segment model representing the forearm and hand plus projectile. The parameters defining the modeled muscles and the anthropometry of the two-segment models were specific to the two young male subjects. All simulations commenced from a position of full elbow flexion and full wrist extension. The study was designed to elucidate the optimal inter-muscular coordination strategies for throwing projectiles to achieve maximum range, as well as maximum projectile kinetic energy for a variety of projectile masses. A proximal to distal (PD) sequence of muscle activations was seen in many of the simulated throws but not all. Under certain conditions moment reversal produced a longer throw and greater projectile energy, and deactivation of the muscles resulted in increased projectile energy. Therefore, simple timing of muscle activation does not fully describe the patterns of muscle recruitment which can produce optimal throws. The models of the two subjects required different timings of muscle activations, and for some of the tasks used different coordination patterns. Optimal strategies were found to vary with the mass of the projectile, the anthropometry and the muscle characteristics of the subjects modeled. The tasks examined were relatively simple, but basic rules for coordinating these tasks were not evident. Copyright 2001 Academic Press.
NASA Astrophysics Data System (ADS)
Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro
2017-07-01
This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is hard to solve for this type of robot manipulator. The robot can be used for pick-and-place tasks in complex designs. By combining an accurate and fast direct kinematics model with optimization strategies, it is possible to obtain the joint angles for a desired end-effector position and orientation. Two optimization methods were used: the stretched simulated annealing algorithm and a genetic algorithm. The solutions found were validated using data from both a real and a simulated robot composed of 12 servomotors with a gripper.
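The core idea, wrapping a cheap forward-kinematics evaluation in a global optimizer instead of solving inverse kinematics analytically, can be sketched as below. The FK function and the 4-joint reduction are placeholders, and SciPy's differential_evolution stands in for the paper's stretched simulated annealing and genetic algorithms:

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward_kinematics(q):
    """Placeholder FK: end-effector (x, y, z) for joint vector q.
    A real 12-DOF model would chain homogeneous transforms here."""
    return np.array([np.cos(q[0]) * np.sum(np.cos(q[1:4])),
                     np.sin(q[0]) * np.sum(np.cos(q[1:4])),
                     np.sum(np.sin(q[1:4]))])

target = np.array([1.2, 0.4, 0.8])           # desired end-effector position
bounds = [(-np.pi, np.pi)] * 4                # joint limits (4 joints for brevity)

result = differential_evolution(
    lambda q: np.linalg.norm(forward_kinematics(q) - target), bounds, seed=1)
print(result.x, result.fun)                   # joint angles and residual error
```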
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is defined for it. The improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform; the experimental results show that the improved differential evolution algorithm can reduce task execution time and save user cost, achieving optimal scheduling of cloud computing tasks.
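For context, the baseline DE/rand/1/bin loop that such improvements modify is short; this generic continuous-domain sketch omits the paper's dynamic mutation/selection strategies and the discrete task-to-VM encoding:

```python
import numpy as np

def de_minimize(fitness, bounds, pop_size=30, F=0.5, CR=0.9, gens=100, seed=0):
    """Classic DE/rand/1/bin loop: mutate, crossover, greedy select."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    pop = rng.uniform(low, high, (pop_size, low.size))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)      # mutation
            cross = rng.random(low.size) < CR                  # binomial crossover
            cross[rng.integers(low.size)] = True               # keep one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = fitness(trial)
            if f < fit[i]:                                     # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], fit.min()
```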
The LHCb Grid Simulation: Proof of Concept
NASA Astrophysics Data System (ADS)
Hushchyn, M.; Ustyuzhanin, A.; Arzymatov, K.; Roiser, S.; Baranov, A.
2017-10-01
The Worldwide LHC Computing Grid provides access to data, and to the computational resources to analyze it, for researchers in different geographical locations. The grid has a hierarchical topology with multiple sites distributed over the world, with varying numbers of CPUs, amounts of disk storage, and connection bandwidths. Job scheduling and the data distribution strategy are key elements of grid performance. Optimizing algorithms for those tasks requires testing them on the real grid, which is hard to arrange. Having a grid simulator can simplify this task and therefore lead to better scheduling and data placement algorithms. In this paper we demonstrate a grid simulator for the LHCb distributed computing software.
NASA Technical Reports Server (NTRS)
Hess, R. A.
1977-01-01
A brief review of some of the more pertinent applications of analytical pilot models to the prediction of aircraft handling qualities is undertaken. The relative ease with which multiloop piloting tasks can be modeled via the optimal control formulation makes the use of optimal pilot models particularly attractive for handling qualities research. To this end, a rating hypothesis is introduced which relates the numerical pilot opinion rating assigned to a particular vehicle and task to the numerical value of the index of performance resulting from an optimal pilot modeling procedure as applied to that vehicle and task. This hypothesis is tested using data from piloted simulations and is shown to be reasonable. An example concerning a helicopter landing approach is introduced to outline the predictive capability of the rating hypothesis in multiaxis piloting tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1994-09-01
The objective of this research is to develop cost-effective surfactant flooding technology by using surfactant simulation studies to evaluate and optimize alternative design strategies taking into account reservoir characteristics, process chemistry, and process design options such as horizontal wells. Task 1 is the development of an improved numerical method for our simulator that will enable us to solve a wider class of these difficult simulation problems accurately and affordably. Task 2 is the application of this simulator to the optimization of surfactant flooding to reduce its risk and cost. The goal of Task 2 is to understand and generalize the impact of both process and reservoir characteristics on the optimal design of surfactant flooding. We have studied the effect of process parameters such as salinity gradient, surfactant adsorption, surfactant concentration, surfactant slug size, pH, polymer concentration and well constraints on surfactant floods. In this report, we show three dimensional field scale simulation results to illustrate the impact of one important design parameter, the salinity gradient. Although the use of a salinity gradient to improve the efficiency and robustness of surfactant flooding has been studied and applied for many years, this is the first time that we have evaluated it using stochastic simulations rather than simulations using the traditional layered reservoir description. The surfactant flooding simulations were performed using The University of Texas chemical flooding simulator called UTCHEM.
Cloud computing task scheduling strategy based on differential evolution and ant colony optimization
NASA Astrophysics Data System (ADS)
Ge, Junwei; Cai, Yu; Fang, Yiqiu
2018-05-01
This paper proposes a task scheduling strategy, DEACO, based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). To address the single-objective limitation of typical cloud computing task scheduling, the strategy jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution of the DE to initialize the pheromone of ACO, reducing the time ACO spends accumulating pheromone in the early stage, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
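The hand-off described above, a DE solution seeding ACO's pheromone so the early accumulation phase is skipped, plus a load-factor term in the update, could look roughly like this; the seeding boost and the update form are assumptions, since the abstract does not give the exact rules:

```python
import numpy as np

def init_pheromone(n_tasks, n_vms, de_assignment, tau0=0.1, boost=5.0):
    """Uniform pheromone, boosted along the task->VM edges of the DE solution."""
    tau = np.full((n_tasks, n_vms), tau0)
    tau[np.arange(n_tasks), de_assignment] *= boost
    return tau

def update_pheromone(tau, assignment, makespan, vm_loads, rho=0.5):
    """Evaporate, then deposit inversely to makespan, damped by a VM load factor
    so heavily loaded VMs attract less future traffic (assumed form)."""
    tau *= (1 - rho)
    load_factor = vm_loads / vm_loads.mean()
    for task, vm in enumerate(assignment):
        tau[task, vm] += 1.0 / (makespan * load_factor[vm])
    return tau
```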
Simulation Of Research And Development Projects
NASA Technical Reports Server (NTRS)
Miles, Ralph F.
1987-01-01
Measures of preference for alternative project plans are calculated. The Simulation of Research and Development Projects (SIMRAND) program aids in the optimal allocation of research and development resources needed to achieve project goals. It models system subsets or project tasks as various network paths to the final goal. Each path is described in terms of such task variables as cost per hour, cost per unit, and availability of resources. Uncertainty is incorporated by treating task variables as probabilistic random variables. Written in Microsoft FORTRAN 77.
Next-generation simulation and optimization platform for forest management and analysis
Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki
2009-01-01
Late developments in the objectives and the data collection methods of forestry create new challenges and possibilities in forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS
NASA Technical Reports Server (NTRS)
Miles, R. F.
1994-01-01
The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility function (or certainty equivalence) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
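A toy rendition of SIMRAND's simulation and evaluation phases, Monte Carlo sampling of task-variable distributions along each alternative network path followed by ranking under a decision rule, in Python for brevity (SIMRAND itself is FORTRAN 77; the path data and the mean-cost rule are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_path(tasks, n_trials=10_000):
    """Each task contributes cost ~ triangular(low, mode, high); sum over the path."""
    draws = [rng.triangular(lo, mode, hi, n_trials) for lo, mode, hi in tasks]
    return np.sum(draws, axis=0)

# Two hypothetical alternative networks to a project goal (cost triples per task)
paths = {
    "path A": [(10, 14, 25), (5, 6, 9)],
    "path B": [(8, 16, 30), (4, 5, 12)],
}
totals = {name: simulate_path(tasks) for name, tasks in paths.items()}

# Evaluation phase: rank by a chosen decision rule, e.g. mean cost
for name, t in sorted(totals.items(), key=lambda kv: kv[1].mean()):
    print(f"{name}: mean={t.mean():.1f}, p90={np.percentile(t, 90):.1f}")
```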
Introduction to SIMRAND: Simulation of research and development project
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1982-01-01
SIMRAND: SIMulation of Research ANd Development Projects is a methodology developed to aid the engineering and management decision process in the selection of the optimal set of systems or tasks to be funded on a research and development project. A project may have a set of systems or tasks under consideration for which the total cost exceeds the allocated budget. Other factors such as personnel and facilities may also enter as constraints. Thus the project's management must select, from among the complete set of systems or tasks under consideration, a partial set that satisfies all project constraints. The SIMRAND methodology uses analytical techniques from probability theory, decision analysis from management science, and computer simulation in the selection of this optimal partial set. The SIMRAND methodology is truly a management tool. It initially specifies the information that must be generated by the engineers, thus providing information for the management direction of the engineers, and it ranks the alternatives according to the preferences of the decision makers.
The atomic simulation environment-a Python library for working with atoms.
Hjorth Larsen, Ask; Jørgen Mortensen, Jens; Blomqvist, Jakob; Castelli, Ivano E; Christensen, Rune; Dułak, Marcin; Friis, Jesper; Groves, Michael N; Hammer, Bjørk; Hargus, Cory; Hermes, Eric D; Jennings, Paul C; Bjerre Jensen, Peter; Kermode, James; Kitchin, John R; Leonhard Kolsbjerg, Esben; Kubal, Joseph; Kaasbjerg, Kristen; Lysgaard, Steen; Bergmann Maronsson, Jón; Maxson, Tristan; Olsen, Thomas; Pastewka, Lars; Peterson, Andrew; Rostgaard, Carsten; Schiøtz, Jakob; Schütt, Ole; Strange, Mikkel; Thygesen, Kristian S; Vegge, Tejs; Vilhelmsen, Lasse; Walter, Michael; Zeng, Zhenhua; Jacobsen, Karsten W
2017-07-12
The atomic simulation environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library makes it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple 'for-loop' construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.
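A minimal, self-contained example of the scripted workflow described above, using ASE's built-in EMT test potential: build an Atoms object, attach a calculator, and run a structure optimization.

```python
from ase import Atoms
from ase.calculators.emt import EMT
from ase.optimize import BFGS

# A slightly stretched N2 molecule; EMT is ASE's simple built-in test potential
n2 = Atoms("N2", positions=[(0.0, 0.0, 0.0), (0.0, 0.0, 1.3)])
n2.calc = EMT()

print("initial energy:", n2.get_potential_energy())  # eV
BFGS(n2).run(fmax=0.02)                              # relax until forces < 0.02 eV/A
print("relaxed energy:", n2.get_potential_energy())
print("bond length:", n2.get_distance(0, 1))         # Angstrom
```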
The atomic simulation environment—a Python library for working with atoms
NASA Astrophysics Data System (ADS)
Hjorth Larsen, Ask; Jørgen Mortensen, Jens; Blomqvist, Jakob; Castelli, Ivano E.; Christensen, Rune; Dułak, Marcin; Friis, Jesper; Groves, Michael N.; Hammer, Bjørk; Hargus, Cory; Hermes, Eric D.; Jennings, Paul C.; Bjerre Jensen, Peter; Kermode, James; Kitchin, John R.; Leonhard Kolsbjerg, Esben; Kubal, Joseph; Kaasbjerg, Kristen; Lysgaard, Steen; Bergmann Maronsson, Jón; Maxson, Tristan; Olsen, Thomas; Pastewka, Lars; Peterson, Andrew; Rostgaard, Carsten; Schiøtz, Jakob; Schütt, Ole; Strange, Mikkel; Thygesen, Kristian S.; Vegge, Tejs; Vilhelmsen, Lasse; Walter, Michael; Zeng, Zhenhua; Jacobsen, Karsten W.
2017-07-01
The atomic simulation environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library makes it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple ‘for-loop’ construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations.
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributes fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo
2008-01-01
In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria, for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamics methods, and constrained optimization techniques are combined. The application of this simulator to a planar whole-body pointing movement shows its effectiveness in managing system nonlinearities and instability, as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
TRACC_PB SOSS Integrated Traffic Simulation for CLT Ramp Operation
NASA Technical Reports Server (NTRS)
Okuniek, Nikolai; Zhu, Zhifan
2017-01-01
This presentation describes the current task under the NASA-DLR research collaboration on airport surface operations. It presents the effort to adapt the TRACC and SOSS software components to simulate airport (CLT) ramp-area traffic management, using TRACC's conflict-free taxi trajectory optimization and SOSS's fast-time simulation platform.
Blum, Yvonne; Vejdani, Hamid R.; Birn-Jeffery, Aleksandra V.; Hubicki, Christian M.; Hurst, Jonathan W.; Daley, Monica A.
2014-01-01
To achieve robust and stable legged locomotion in uneven terrain, animals must effectively coordinate limb swing and stance phases, which involve distinct yet coupled dynamics. Recent theoretical studies have highlighted the critical influence of swing-leg trajectory on stability, disturbance rejection, leg loading and economy of walking and running. Yet, simulations suggest that not all these factors can be simultaneously optimized. A potential trade-off arises between the optimal swing-leg trajectory for disturbance rejection (to maintain steady gait) versus regulation of leg loading (for injury avoidance and economy). Here we investigate how running guinea fowl manage this potential trade-off by comparing experimental data to predictions of hypothesis-based simulations of running over a terrain drop perturbation. We use a simple model to predict swing-leg trajectory and running dynamics. In simulations, we generate optimized swing-leg trajectories based upon specific hypotheses for task-level control priorities. We optimized swing trajectories to achieve i) constant peak force, ii) constant axial impulse, or iii) perfect disturbance rejection (steady gait) in the stance following a terrain drop. We compare simulation predictions to experimental data on guinea fowl running over a visible step down. Swing and stance dynamics of running guinea fowl closely match simulations optimized to regulate leg loading (priorities i and ii), and do not match the simulations optimized for disturbance rejection (priority iii). The simulations reinforce previous findings that swing-leg trajectory targeting disturbance rejection demands large increases in stance leg force following a terrain drop. Guinea fowl negotiate a downward step using unsteady dynamics with forward acceleration, and recover to steady gait in subsequent steps. Our results suggest that guinea fowl use swing-leg trajectory consistent with priority for load regulation, and not for steadiness of gait. Swing-leg trajectory optimized for load regulation may facilitate economy and injury avoidance in uneven terrain. PMID:24979750
Modeling the effects of high-G stress on pilots in a tracking task
NASA Technical Reports Server (NTRS)
Korn, J.; Kleinman, D. L.
1978-01-01
Air-to-air tracking experiments were conducted at the Aerospace Medical Research Laboratories using both fixed and moving base dynamic environment simulators. The obtained data, which includes longitudinal error of a simulated air-to-air tracking task as well as other auxiliary variables, was analyzed using an ensemble averaging method. In conjunction with these experiments, the optimal control model is applied to model a human operator under high-G stress.
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.
2013-01-01
Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.
NASA Astrophysics Data System (ADS)
Maghami, Mahsa; Sukthankar, Gita
In this paper, we introduce an agent-based simulation for investigating the impact of social factors on the formation and evolution of task-oriented groups. Task-oriented groups are created explicitly to perform a task, and all members derive benefits from task completion. However, even in cases when all group members act in a way that is locally optimal for task completion, social forces that have mild effects on choice of associates can have a measurable impact on task completion performance. In this paper, we show how our simulation can be used to model the impact of stereotypes on group formation. In our simulation, stereotypes are based on observable features, learned from prior experience, and only affect an agent's link formation preferences. Even without assuming stereotypes affect the agents' willingness or ability to complete tasks, the long-term modifications that stereotypes have on the agents' social network impair the agents' ability to form groups with sufficient diversity of skills, as compared to agents who form links randomly. An interesting finding is that this effect holds even in cases where stereotype preference and skill existence are completely uncorrelated.
Optimizing spectral CT parameters for material classification tasks
NASA Astrophysics Data System (ADS)
Rigie, D. S.; La Rivière, P. J.
2016-06-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
Optimizing Spectral CT Parameters for Material Classification Tasks
Rigie, D. S.; La Rivière, P. J.
2017-01-01
In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT, material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POC’s) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POC’s predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430
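The Hotelling observer underlying the proposed metric has a compact closed form: detectability is the class-mean difference whitened by the data covariance, SNR² = Δs̄ᵀK⁻¹Δs̄. A generic numerical sketch with toy values (the spectral-CT channel models and the stochastic object model are beyond this snippet):

```python
import numpy as np

def hotelling_snr2(mean_a, mean_b, cov):
    """Hotelling observer detectability: SNR^2 = ds^T K^{-1} ds,
    where ds is the mean data difference between the two classes and
    K the (assumed common) data covariance."""
    ds = mean_a - mean_b
    return float(ds @ np.linalg.solve(cov, ds))

# Toy two-material classification in a 3-channel measurement space
mean_a = np.array([1.00, 0.80, 0.60])     # hypothetical class means
mean_b = np.array([0.95, 0.83, 0.55])
cov = np.diag([0.01, 0.02, 0.015])        # hypothetical noise covariance
print(hotelling_snr2(mean_a, mean_b, cov) ** 0.5)   # detectability index d'
```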
Advanced helicopter cockpit and control configurations for helicopter combat missions
NASA Technical Reports Server (NTRS)
Haworth, Loran A.; Atencio, Adolph, Jr.; Bivens, Courtland; Shively, Robert; Delgado, Daniel
1987-01-01
Two piloted simulations were conducted by the U.S. Army Aeroflightdynamics Directorate to evaluate workload and helicopter-handling qualities requirements for single-pilot operation in a combat Nap-of-the-Earth environment. The single-pilot advanced cockpit engineering simulation (SPACES) investigations were performed on the NASA Ames Vertical Motion Simulator, using the Advanced Digital Optical Control System control laws and an advanced-concepts glass cockpit. The first simulation (SPACES I) compared single-pilot to dual-crewmember operation for the same flight tasks, to determine differences between dual and single ratings and to discover which control laws enabled adequate single-pilot helicopter operation. The SPACES II simulation concentrated on single-pilot operations, using control laws thought to be viable candidates for managing single-pilot workload. Measures detected significant differences between single-pilot task segments. Control system configurations were task dependent, demonstrating a need for an in-flight reconfigurable control system to match the optimal control system with the required task.
Impact of topographic mask models on scanner matching solutions
NASA Astrophysics Data System (ADS)
Tyminski, Jacek K.; Pomplun, Jan; Renwick, Stephen P.
2014-03-01
Of keen interest to the IC industry are advanced computational lithography applications such as Optical Proximity Correction of IC layouts (OPC), scanner matching by optical proximity effect matching (OPEM), and Source Optimization (SO) and Source-Mask Optimization (SMO) used as advanced reticle enhancement techniques. The success of these tasks is strongly dependent on the integrity of the lithographic simulators used in computational lithography (CL) optimizers. Lithographic mask models used by these simulators are key drivers impacting the accuracy of the image predictions and, as a consequence, determine the validity of these CL solutions. Much of the CL work involves Kirchhoff mask models, a.k.a. the thin-mask approximation, simplifying the treatment of the mask near-field images. On the other hand, imaging models for hyper-NA scanners require that the interactions of the illumination fields with the mask topography be rigorously accounted for, by numerically solving Maxwell's Equations. The simulators used to predict the image formation in hyper-NA scanners must rigorously treat the mask topography and its interaction with the scanner illuminators. Such imaging models come at a high computational cost and pose challenging accuracy vs. compute-time tradeoffs. An additional complication comes from the fact that the performance metrics used in computational lithography tasks show highly nonlinear responses to the optimization parameters. Finally, the number of patterns used for tasks such as OPC, OPEM, SO, or SMO ranges from tens to hundreds. These requirements determine the complexity and the workload of the lithography optimization tasks. The tools to build rigorous imaging optimizers based on the first principles governing imaging in scanners are available, but the quantifiable benefits they might provide are not very well understood. To quantify the performance of OPE matching solutions, we have compared the results of various imaging optimization trials obtained with Kirchhoff mask models to those obtained with rigorous models involving solutions of Maxwell's Equations. In both sets of trials, we used large sets of patterns, with specifications representative of CL tasks commonly encountered in hyper-NA imaging. In this report we present OPEM solutions based on various mask models and discuss the models' impact on hyper-NA scanner matching accuracy. We draw conclusions on the accuracy of results obtained with thin-mask models vs. the topographic OPEM solutions. We present examples of scanner image matching for patterns representative of the current generation of IC designs.
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Zacharias, G.
1980-01-01
The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-01-01
Objectives: Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. Design and methods: A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2–20), alternatives (2–5), attributes (2–20) and attribute levels (2–5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Outcome: Relative d-efficiency was used to measure the optimality of each DCE design. Results: DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Conclusions: Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes is needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. PMID:27436671
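One standard form of the metric used above, relative D-efficiency for an n-by-p coded design matrix X, is 100·det(XᵀX)^(1/p)/n; definitions vary with the coding convention, so treat this sketch as illustrative:

```python
import numpy as np

def relative_d_efficiency(X):
    """100 * det(X'X)^(1/p) / n for an n-by-p coded design matrix X.
    100% corresponds to an orthogonal (optimal) design under +/-1 coding."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# A 2^2 full factorial with intercept: orthogonal, hence 100% efficient
X = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
print(relative_d_efficiency(X))   # -> 100.0
```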
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic-systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating-point operations, are generated to extract this parallelism and are assigned to processors by optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
SENSOR++: Simulation of Remote Sensing Systems from Visible to Thermal Infrared
NASA Astrophysics Data System (ADS)
Paproth, C.; Schlüßler, E.; Scherbaum, P.; Börner, A.
2012-07-01
During the development process of a remote sensing system, the optimization and verification of the sensor system are important tasks. To support these tasks, simulation of the sensor and its output is valuable. This enables the developers to test algorithms, estimate errors, and evaluate the capabilities of the whole sensor system before the final remote sensing system is available and produces real data. The presented simulation concept, SENSOR++, consists of three parts. The first part is the geometric simulation, which calculates where the sensor looks by using a ray-tracing algorithm. This also determines whether the observed part of the scene is shadowed or not. The second part describes the radiometry and yields the spectral at-sensor radiance from the visible spectrum to the thermal infrared, according to the simulated sensor type. In the case of Earth remote sensing, it also includes a model of the radiative transfer through the atmosphere. The final part uses the at-sensor radiance to generate digital images by using an optical and an electronic sensor model. Using SENSOR++ for an optimization requires the additional application of task-specific data processing algorithms. The principle of the simulation approach is explained, all relevant concepts of SENSOR++ are discussed, and first examples of its use are given, for example a camera simulation for a Moon lander. Finally, the verification of SENSOR++ is demonstrated.
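A bare-bones version of the third stage's conversion chain, from at-sensor radiance through an electronic sensor model to quantized digital numbers, is sketched below; every constant is a placeholder, not a SENSOR++ value:

```python
import numpy as np

def radiance_to_dn(radiance, t_int=1e-3, aperture=0.8, qe=0.6,
                   gain=0.25, full_well=30_000, bits=12, rng=None):
    """Toy electronic sensor model: radiance -> photoelectrons -> quantized DN.
    Scaling from radiance to photons is folded into one placeholder constant."""
    rng = rng or np.random.default_rng()
    photons = radiance * t_int * aperture * 1e6        # placeholder scaling
    electrons = rng.poisson(photons * qe)              # shot noise
    electrons = np.minimum(electrons, full_well)       # saturation
    dn = np.clip(electrons * gain, 0, 2 ** bits - 1)   # amplifier + ADC clipping
    return dn.astype(np.uint16)

dn_image = radiance_to_dn(np.full((4, 4), 20.0))       # uniform test radiance
print(dn_image)
```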
Multi-tasking arbitration and behaviour design for human-interactive robots
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei
2013-05-01
Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, collision avoidance, and conversation with humans. This article presents a design framework for the control and recognition processes that meets these requirements while taking stochastic human behaviour into account. The proposed design method first introduces a Petri net for synchronisation of multiple tasks. The Petri net formulation is converted to Markov decision processes and processed in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can be designed in a systematic manner, in a state-estimation and optimisation framework, from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.
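Once the Petri-net marking is recast as a Markov decision process, shortest-time arbitration is a standard cost-minimizing dynamic program; a generic value-iteration sketch with toy transition data (not the RI-MAN model):

```python
import numpy as np

def value_iteration(P, cost, gamma=0.95, tol=1e-8):
    """Cost-minimizing value iteration. P[a, s, s'] is the transition
    probability and cost[s, a] the one-step cost (e.g. expected task time)."""
    V = np.zeros(P.shape[1])
    while True:
        Q = cost + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)               # values, greedy policy
        V = V_new

# Toy 3-state, 2-action arbitration problem with made-up dynamics;
# state 2 is an absorbing "task complete" state with zero cost.
P = np.array([[[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
              [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]])
cost = np.array([[1.0, 2.0], [1.0, 2.0], [0.0, 0.0]])
values, policy = value_iteration(P, cost)
print(values, policy)
```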
An optimal control model approach to the design of compensators for simulator delay
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Caglayan, A.
1982-01-01
The effects of display delay on pilot performance and workload were investigated, along with the design of filters to ameliorate these effects. The optimal control model for pilot/vehicle analysis was used both to determine the potential delay effects and to design the compensators. The model was applied to a simple roll tracking task and to a complex hover task. The results confirm that even small delays can degrade performance and impose a workload penalty. A time-domain compensator designed by using the optimal control model directly appears capable of providing extensive compensation for these effects, even in multi-input, multi-output problems.
Risk Reduction and Resource Pooling on a Cooperation Task
ERIC Educational Resources Information Center
Pietras, Cynthia J.; Cherek, Don R.; Lane, Scott D.; Tcheremissine, Oleg
2006-01-01
Two experiments investigated choice in adult humans on a simulated cooperation task to evaluate a risk-reduction account of sharing based on the energy-budget rule. The energy-budget rule is an optimal foraging model that predicts risk-averse choices when net energy gains exceed energy requirements (positive energy budget) and risk-prone choices…
Abdullahi, Mohammed; Ngadi, Md Asri
2016-01-01
Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploitation ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
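A minimal sketch of the simulated-annealing ingredient, assuming a simple independent-task model with per-VM speeds and makespan as the fitness; the neighborhood move, cooling schedule, and parameter values are illustrative, not those of SASOS:

```python
import math, random

def makespan(assign, task_len, vm_speed):
    """Completion time of the busiest VM for a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def sa_schedule(task_len, vm_speed, t0=10.0, cooling=0.97, steps=2000):
    """Simulated annealing: move one task to a random VM and accept
    worse moves with probability exp(-delta / T)."""
    assign = [random.randrange(len(vm_speed)) for _ in task_len]
    best, best_cost = assign[:], makespan(assign, task_len, vm_speed)
    temp = t0
    for _ in range(steps):
        cand = assign[:]
        cand[random.randrange(len(task_len))] = random.randrange(len(vm_speed))
        delta = (makespan(cand, task_len, vm_speed)
                 - makespan(assign, task_len, vm_speed))
        if delta < 0 or random.random() < math.exp(-delta / temp):
            assign = cand
        temp *= cooling
        cost = makespan(assign, task_len, vm_speed)
        if cost < best_cost:
            best, best_cost = assign[:], cost
    return best, best_cost

print(sa_schedule([8, 3, 5, 9, 2, 7], [1.0, 2.0, 1.5]))
```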
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments, which have emerged from research in multisensory perception, provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study. Experiment 1: detection and categorisation of auditory and kinematic motion cues; Experiment 2: performance evaluation in a target-tracking task; Experiment 3: transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana
2016-07-19
Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey improves or compromises the statistical efficiency of an experimental design. A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2-20), alternatives (2-5), attributes (2-20) and attribute levels (2-5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Relative d-efficiency was used to measure the optimality of each DCE design. DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, although convergence may not be at 100% statistical optimality. Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes is needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products.
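One common definition of relative D-efficiency, 100 * det(X'X)^(1/p) / N for a coded design matrix with N runs and p parameters, can be computed directly; the sketch below assumes this definition and effects (+/-1) coding, which may differ in detail from the software the authors used:

```python
import numpy as np

def relative_d_efficiency(X):
    """Relative D-efficiency (in %) of a coded design matrix X with
    N rows and p columns: 100 * det(X'X)^(1/p) / N. A value of 100%
    corresponds to an orthogonal design under +/-1 coding."""
    n, p = X.shape
    info = X.T @ X
    return 100.0 * np.linalg.det(info) ** (1.0 / p) / n

# A 4-run, 3-factor two-level orthogonal design (effects coded +/-1)
X = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]], dtype=float)
print(relative_d_efficiency(X))   # -> 100.0
```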
Software Would Largely Automate Design of Kalman Filter
NASA Technical Reports Server (NTRS)
Chuang, Jason C. H.; Negast, William J.
2005-01-01
Embedded Navigation Filter Automatic Designer (ENFAD) is a computer program being developed to automate the most difficult tasks in designing embedded software to implement a Kalman filter in a navigation system. The most difficult tasks are selection of error states of the filter and tuning of filter parameters, which are time-consuming trial-and-error tasks that require expertise and rarely yield optimum results. An optimum selection of error states and filter parameters depends on navigation-sensor and vehicle characteristics, and on filter processing time. ENFAD would include a simulation module that would incorporate all possible error states with respect to a given set of vehicle and sensor characteristics. The first of two iterative optimization loops would vary the selection of error states until the best filter performance was achieved in Monte Carlo simulations. For a fixed selection of error states, the second loop would vary the filter parameter values until an optimal performance value was obtained. Design constraints would be satisfied in the optimization loops. Users would supply vehicle and sensor test data that would be used to refine digital models in ENFAD. Filter processing time and filter accuracy would be computed by ENFAD.
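A minimal sketch of the two nested loops, with a mock Monte Carlo scorer standing in for ENFAD's simulation module; the candidate error states, the single tuning parameter, and the scoring function are hypothetical placeholders, since a real version would run the Kalman filter against recorded vehicle and sensor data:

```python
import itertools, random

CANDIDATE_STATES = ["gyro_bias", "accel_bias", "clock_drift", "scale_factor"]

def monte_carlo_score(states, q_scale, n_runs=50):
    """Stand-in for the simulation module: returns a mock average
    filter error (lower is better) for a given configuration."""
    random.seed(hash((states, q_scale)) % 2**32)   # deterministic per config
    penalty = 0.1 * len(states) + abs(q_scale - 1.0)
    return penalty + sum(random.gauss(0, 0.01) for _ in range(n_runs)) / n_runs

best = None
for r in range(1, len(CANDIDATE_STATES) + 1):            # outer loop: error states
    for states in itertools.combinations(CANDIDATE_STATES, r):
        for q_scale in (0.5, 1.0, 2.0):                  # inner loop: tuning
            score = monte_carlo_score(states, q_scale)
            if best is None or score < best[0]:
                best = (score, states, q_scale)
print(best)
```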
Simulating optoelectronic systems for remote sensing with SENSOR
NASA Astrophysics Data System (ADS)
Boerner, Anko
2003-04-01
The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task, and sometimes the only way, for the adaptation and optimization of a sensor and its observation conditions, the choice and testing of algorithms for data processing, error estimation, and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry; it calculates the at-sensor radiance using a pre-calculated multidimensional lookup table, taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.
Differences between racing and non-racing drivers: A simulator study using eye-tracking
de Groot, Stefan; Happee, Riender; de Winter, Joost C. F.
2017-01-01
Motorsport has developed into a professional international competition. However, limited research is available on the perceptual and cognitive skills of racing drivers. By means of a racing simulator, we compared the driving performance of seven racing drivers with that of ten non-racing drivers. Participants were tasked with driving the fastest possible lap time. Additionally, both groups completed a choice reaction time task and a tracking task. Results from the simulator showed faster lap times, higher steering activity, and a more optimal racing line for the racing drivers than for the non-racing drivers. The non-racing drivers' gaze behavior corresponded to the tangent point model, whereas racing drivers showed more variable gaze behavior combined with larger head rotations while cornering. Results from the choice reaction time task and tracking task showed no statistically significant difference between the two groups. Our results are consistent with the current consensus in sports sciences in that task-specific differences exist between experts and novices while there are no major differences in general cognitive and motor abilities. PMID:29121090
Li, Xuejun; Xu, Jia; Yang, Yun
2015-01-01
A cloud workflow system is a kind of platform service based on cloud computing. It facilitates the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling is an NP-hard problem, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they are prone to premature convergence during optimization and therefore cannot effectively reduce the cost. To solve these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost. It lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by our scheduling is always lower than that of the other two representative counterparts. PMID:26357510
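A minimal sketch of the two CPSO ingredients named above (a logistic-map chaotic sequence driving the stochastic coefficients, and an inertia weight adapted to each particle's cost); the coefficients, weight schedule, and toy cost function are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cpso(cost, dim, n=20, iters=200, lo=0.0, hi=1.0, seed=1):
    """Chaotic PSO sketch: a logistic map supplies the stochastic
    coefficients; inertia adapts to cost relative to the swarm mean."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    z = rng.uniform(0.1, 0.9, (n, dim))            # chaotic state
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                    # logistic map, r = 4
        avg = pcost.mean()
        for i in range(n):
            # adaptive inertia: explore more when worse than average
            w = 0.9 if pcost[i] > avg else 0.4 + 0.5 * pcost[i] / (avg + 1e-12)
            v[i] = (w * v[i] + 2.0 * z[i] * (pbest[i] - x[i])
                             + 2.0 * (1 - z[i]) * (g - x[i]))
            x[i] = np.clip(x[i] + v[i], lo, hi)
            c = cost(x[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = x[i].copy(), c
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# toy cost: sphere function standing in for workflow execution cost
print(cpso(lambda p: float(np.sum((p - 0.3) ** 2)), dim=5))
```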
Attention control learning in the decision space using state estimation
NASA Astrophysics Data System (ADS)
Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid
2016-05-01
The main goal of this paper is modelling attention while using it in efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using a mixture-of-experts model for task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space is performed for a learning task examined in the Webots™ simulator. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers.
Optimization of Simulation-Based Training Systems: Model Description, Implementation, and Evaluation
1990-06-01
[Extraction residue: a task-analysis table relating instructional cue features, fidelity requirements, and learning analysis; the original layout is unrecoverable.] ... academic instruction on aircraft systems, emergency procedures, and tactics. Although some Army aviators enter the AH-1 AQC immediately after completing ... from low to high fidelity, and (d) tasks could not be trained to standard using academic training only. The tasks that were chosen are enumerated in ...
Particle swarm optimization based space debris surveillance network scheduling
NASA Astrophysics Data System (ADS)
Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao
2017-02-01
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, surveillance tasks for the existing facilities should be scheduled optimally, allocating resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the task scheduling of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results demonstrate its effectiveness.
Control of complex physically simulated robot groups
NASA Astrophysics Data System (ADS)
Brogan, David C.
2001-10-01
Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.
Optimizing Utilization of Detectors
2016-03-01
provide a quantifiable process to determine how much time should be allocated to each task sharing the same asset. This optimized expected time allocation is calculated by numerical analysis and Monte Carlo simulation. Numerical analysis determines the expectation by evaluating an integral, and Monte Carlo simulation determines the optimum time allocation of the asset by repeatedly running experiments to approximate the expectation of the random variables.
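A minimal sketch of the Monte Carlo side, assuming a hypothetical two-task setting with Poisson event arrivals and the expected number of tasks with at least one detection as the objective; the rates, horizon, and detection model are invented for illustration:

```python
import random

random.seed(0)

def detected(t, rate):
    """One Monte Carlo draw: does at least one event arrive while the
    sensor dwells on a task for time t (Poisson arrivals)?"""
    return random.expovariate(rate) < t

def expected_detections(split, total_time=10.0, rates=(0.8, 1.5), n=20000):
    """Estimate the expected number of tasks with a detection when a
    fraction `split` of the asset's time goes to task 1."""
    times = (split * total_time, (1 - split) * total_time)
    return sum(detected(t, r) for _ in range(n)
               for t, r in zip(times, rates)) / n

# sweep candidate allocations and keep the best
best = max((s / 10 for s in range(11)), key=expected_detections)
print(best, expected_detections(best))
```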
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities to induce minimal errors in the task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
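The singularity-robust behavior described above is commonly realized with a damped least-squares pseudoinverse; a minimal sketch follows, with an invented near-singular Jacobian and damping constant rather than the article's formulation:

```python
import numpy as np

def damped_pinv(J, lam=0.1):
    """Damped least-squares pseudoinverse J^T (J J^T + lam^2 I)^-1.
    Damping trades small task-space error for bounded joint velocities
    near singularities."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(m))

# 3-joint planar arm close to a singular (outstretched) pose
J = np.array([[1.0, 1.0, 1.0],
              [0.0, 0.01, 0.02]])        # nearly rank-deficient Jacobian
xdot = np.array([0.1, 0.1])              # desired end-effector velocity
print(damped_pinv(J, lam=1e-9) @ xdot)   # near-exact: huge joint rates
print(damped_pinv(J, lam=0.1) @ xdot)    # damped: bounded joint rates
```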
Stochastic optimization of GeantV code by use of genetic algorithms
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.
2017-10-01
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
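A minimal sketch of genetic-algorithm tuning for a black-box throughput objective; the two parameters, their ranges, and the stand-in objective are hypothetical, since each real GeantV evaluation would run a full simulation:

```python
import random

def throughput(params):
    """Stand-in for an expensive simulation run; a real evaluation
    would measure events/s for a given parameter setting."""
    basket, depth = params
    return -((basket - 64) ** 2) - 5 * ((depth - 3) ** 2)

def ga(pop_size=20, gens=40, mut=0.2):
    """Minimal GA: keep the best half, cross parameter pairs, mutate."""
    pop = [(random.randint(8, 256), random.randint(1, 8))
           for _ in range(pop_size)]
    for _ in range(gens):
        parents = sorted(pop, key=throughput, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                      # crossover
            if random.random() < mut:                 # mutation
                child = (child[0] + random.randint(-8, 8),
                         max(1, child[1] + random.randint(-1, 1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=throughput)

print(ga())   # should approach (64, 3)
```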
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator in performing a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factors studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by the human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information about the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator's skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental implementation results on a PR2 robot, confirm the suitability of the proposed method.
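The outer loop above reduces to an LQR problem. The sketch below solves a toy version directly from a known model via the continuous-time algebraic Riccati equation, whereas the paper deliberately avoids needing the model by using integral reinforcement learning; the system and weight matrices are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# toy 1-DOF impedance model: states are [position error, velocity]
A = np.array([[0.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize tracking error (task performance)
R = np.array([[0.5]])      # penalize human/robot effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P     # optimal state-feedback gain
print(K)
```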
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, Cody; Owen, J. Michael
2016-04-01
Creating spherical initial conditions in smoothed particle hydrodynamics simulations that are spherically conformal is a difficult task. Here, we describe two algorithmic methods for evenly distributing points on surfaces that when paired can be used to build three-dimensional spherical objects with optimal equipartition of volume between particles, commensurate with an arbitrary radial density function. We demonstrate the efficacy of our method against stretched lattice arrangements on the metrics of hydrodynamic stability, spherical conformity, and the harmonic power distribution of gravitational settling oscillations. We further demonstrate how our method is highly optimized for simulating multi-material spheres, such as planets with core–mantle boundaries.
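For flavor, one widely used way to place points quasi-evenly on a spherical surface (not the authors' algorithm, which the abstract does not specify) is the golden-angle or Fibonacci arrangement; the nested-shell construction below is likewise only an illustrative assumption:

```python
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    """Quasi-even placement of n points on a sphere via the golden
    angle; a common low-discrepancy alternative to stretched lattices."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n               # uniform in cos(theta)
    r = np.sqrt(1.0 - z * z)
    return radius * np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

# crude nested shells: point counts grow as r**2 so that the surface
# density, and hence per-particle volume, stays roughly constant
shells = [fibonacci_sphere(int(50 * k**2), radius=k / 5) for k in range(1, 6)]
print(sum(len(s) for s in shells), "particles in", len(shells), "shells")
```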
CAMS as a tool for human factors research in spaceflight
NASA Astrophysics Data System (ADS)
Sauer, Juergen
2004-01-01
The paper reviews a number of research studies that were carried out with a PC-based task environment called Cabin Air Management System (CAMS) simulating the operation of a spacecraft's life support system. As CAMS was a multiple task environment, it allowed the measurement of performance at different levels. Four task components of different priority were embedded in the task environment: diagnosis and repair of system faults, maintaining atmospheric parameters in a safe state, acknowledgement of system alarms (reaction time), and keeping a record of critical system resources (prospective memory). Furthermore, the task environment permitted the examination of different task management strategies and changes in crew member state (fatigue, anxiety, mental effort). A major goal of the research programme was to examine how crew members adapted to various forms of sub-optimal working conditions, such as isolation and confinement, sleep deprivation and noise. None of the studies provided evidence for decrements in primary task performance. However, the results showed a number of adaptive responses of crew members to adjust to the different sub-optimal working conditions. There was evidence for adjustments in information sampling strategies (usually reductions in sampling frequency) as a result of unfavourable working conditions. The results also showed selected decrements in secondary task performance. Prospective memory seemed to be somewhat more vulnerable to sub-optimal working conditions than performance on the reaction time task. Finally, suggestions are made for future research with the CAMS environment.
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
NASA Astrophysics Data System (ADS)
Platisa, Ljiljana; Vansteenkiste, Ewout; Goossens, Bart; Marchessoux, Cédric; Kimpe, Tom; Philips, Wilfried
2009-02-01
Medical-imaging systems are designed to aid medical specialists in a specific task. Therefore, the physical parameters of a system need to be chosen to optimize the task performance of a human observer, which requires measurements of human performance during system optimization. Typically, psychophysical studies are conducted for this purpose. Numerical observer models have been successfully used to predict human performance in several detection tasks. In particular, the task of signal detection using a channelized Hotelling observer (CHO) in simulated images has been widely explored. However, few studies have been done for clinically acquired images that also contain anatomic noise. In this paper, we investigate the performance of a CHO in the task of detecting lung nodules in real radiographic images of the chest. To evaluate variability introduced by the limited available data, we employ a commonly used multi-reader multi-case (MRMC) study design, which accounts for both case and reader variability. Finally, we use the "one-shot" methods to estimate the MRMC variance of the area under the ROC curve (AUC). The obtained AUC compares well to those reported for a human observer study on a similar data set. Furthermore, the "one-shot" analysis implies a fairly consistent performance of the CHO, with the variance of the AUC below 0.002. This indicates promising potential for numerical observers in the optimization of medical imaging displays and encourages further investigation on the subject.
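A minimal sketch of a CHO detectability computation from two-class training images; random channels stand in for the Laguerre-Gauss channels used in studies like this one, and the toy images and signal strength are invented. For a Gaussian test statistic, AUC relates to this SNR as AUC = Phi(SNR / sqrt(2)):

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability: project each image
    onto the channels, estimate the intraclass channel covariance, and
    return SNR = sqrt(ds' K^-1 ds) for the mean channel difference ds."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels
    ds = vs.mean(axis=0) - vn.mean(axis=0)
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    return float(np.sqrt(ds @ np.linalg.solve(K, ds)))

# toy data: 64x64 images, 3 random "channels", weak uniform added signal
rng = np.random.default_rng(0)
channels = rng.normal(size=(64 * 64, 3))
noise = rng.normal(size=(100, 64, 64))
signal = rng.normal(size=(100, 64, 64)) + 0.05
print(cho_snr(signal, noise, channels))
```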
Use of Linear Perspective Scene Cues in a Simulated Height Regulation Task
NASA Technical Reports Server (NTRS)
Levison, W. H.; Warren, R.
1984-01-01
As part of a long-term effort to quantify the effects of visual scene cuing and non-visual motion cuing in flight simulators, an experimental study of the pilot's use of linear perspective cues in a simulated height-regulation task was conducted. Six test subjects performed a fixed-base tracking task with a visual display consisting of a simulated horizon and a perspective view of a straight, infinitely-long roadway of constant width. Experimental parameters were (1) the central angle formed by the roadway perspective and (2) the display gain. The subject controlled only the pitch/height axis; airspeed, bank angle, and lateral track were fixed in the simulation. The average RMS height error score for the least effective display configuration was about 25% greater than the score for the most effective configuration. Overall, larger and more highly significant effects were observed for the pitch and control scores. Model analysis was performed with the optimal control pilot model to characterize the pilot's use of visual scene cues, with the goal of obtaining a consistent set of independent model parameters to account for display effects.
Optimal control of a hybrid rhythmic-discrete task: the bouncing ball revisited.
Ronsse, Renaud; Wei, Kunlin; Sternad, Dagmar
2010-05-01
Rhythmically bouncing a ball with a racket is a hybrid task that combines continuous rhythmic actuation of the racket with the control of discrete impact events between racket and ball. This study presents experimental data and a two-layered modeling framework that explicitly addresses the hybrid nature of control: a first discrete layer calculates the state to reach at impact and the second continuous layer smoothly drives the racket to this desired state, based on optimality principles. The testbed for this hybrid model is task performance at a range of increasingly slower tempos. When slowing the rhythm of the bouncing actions, the continuous cycles become separated into a sequence of discrete movements interspersed by dwell times and directed to achieve the desired impact. Analyses of human performance show increasing variability of performance measures with slower tempi, associated with a change in racket trajectories from approximately sinusoidal to less symmetrical velocity profiles. Matching results of model simulations give support to a hybrid control model based on optimality, and therefore suggest that optimality principles are applicable to the sensorimotor control of complex movements such as ball bouncing.
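The discrete layer's impact event is typically modeled with an instantaneous restitution map; a minimal sketch assuming racket mass much greater than ball mass (the coefficient of restitution value is illustrative):

```python
# Racket-ball impact map: just before impact the ball moves with
# velocity vb and the racket with vr; the coefficient of restitution e
# reverses and scales their relative velocity across the impact.
def ball_velocity_after_impact(vb, vr, e=0.8):
    """Post-impact ball velocity for a heavy racket: v+ = (1+e)*vr - e*vb."""
    return (1.0 + e) * vr - e * vb

print(ball_velocity_after_impact(vb=-3.0, vr=1.0, e=0.8))  # -> 4.2 (upward)
```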
Domańska, Barbara; Stumpp, Oliver; Poon, Steven; Oray, Serkan; Mountian, Irina; Pichon, Clovis
2018-01-01
We incorporated patient feedback from human factors studies (HFS) in the patient-centric design and validation of ava®, an electromechanical device (e-Device) for self-injecting the anti-tumor necrosis factor certolizumab pegol (CZP). Healthcare professionals, caregivers, healthy volunteers, and patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, or Crohn's disease participated in 11 formative HFS to optimize the e-Device design through intended-user feedback; nine studies involved simulated injections. Formative participant questionnaire feedback was collected following e-Device prototype handling. Validation HFS (one EU study and one US study) assessed the safe and effective setup and use of the e-Device using 22 predefined critical tasks. Task outcomes were categorized as "failures" if participants did not succeed within three attempts. Two hundred eighty-three participants entered formative (163) and validation (120) HFS; 260 participants performed one or more simulated e-Device self-injections. Design changes following formative HFS included alterations to buttons and the graphical user interface screen. All validation HFS participants completed critical tasks necessary for CZP dose delivery, with minimal critical task failures (12 of 572 critical tasks, 2.1%, in the EU study, and 2 of 5310 critical tasks, less than 0.1%, in the US study). CZP e-Device development was guided by intended-user feedback through HFS, ensuring the final design addressed patients' needs. In both validation studies, participants successfully performed all critical tasks, demonstrating safe and effective e-Device self-injections. Funding: UCB Pharma.
TTSA: An Effective Scheduling Approach for Delay Bounded Tasks in Hybrid Clouds.
Yuan, Haitao; Bi, Jing; Tan, Wei; Zhou, MengChu; Li, Bo Hu; Li, Jianqiang
2017-11-01
The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and to provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper addresses the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both vary over time. The paper proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm optimization. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA can efficiently increase the throughput and reduce the cost of the private CDC while meeting the delay bounds of all the tasks.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Simulation of short-term electric load using an artificial neural network
NASA Astrophysics Data System (ADS)
Ivanin, O. A.
2018-01-01
When optimizing the operation modes and equipment composition of small energy complexes, or solving other tasks connected with energy planning, it is necessary to have data on the energy loads of a consumer. Real load charts and detailed information about the consumer are usually difficult to obtain, so a method for simulating load charts on the basis of minimal information is needed. An analysis of work devoted to short-term load prediction suggests artificial neural networks as the most suitable mathematical instrument for solving this problem. The article provides an overview of applied short-term load simulation methods, describes the advantages of artificial neural networks, and offers a neural network structure for simulating the electric loads of residential buildings. The results of modeling loads with the proposed method and an estimation of its error are presented.
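A minimal sketch of this kind of load simulator using a small multilayer perceptron; the input features (hour-of-day encoding, temperature, weekend flag), the synthetic "true" load, and the network sizes are assumptions, not the article's structure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, 2000)
temp = rng.normal(10, 8, 2000)
weekend = rng.integers(0, 2, 2000)
# cyclic hour encoding plus exogenous drivers
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24), temp, weekend])
# synthetic "true" load in kW: evening peak, heating effect, weekend dip
load = (30 + 10 * np.sin(2 * np.pi * (hours - 18) / 24)
        - 0.4 * temp - 5 * weekend + rng.normal(0, 1, 2000))

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X[:1500], load[:1500])
print("R^2 on held-out data:", model.score(X[1500:], load[1500:]))
```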
Simulation-based training in flexible fibreoptic intubation: A randomised study.
Nilsson, Philip M; Russell, Lene; Ringsted, Charlotte; Hertz, Peter; Konge, Lars
2015-09-01
Flexible fibreoptic intubation (FOI) is a key element in difficult airway management. Training of FOI skills is an important part of the anaesthesiology curriculum. Simulation-based training has been shown to be effective when learning FOI, but the optimal structure of the training is debated. The aspect of dividing the training into segments (part-task training) or assembling into one piece (whole-task training) has not been studied. The aims of this study were to compare the effect of training the motor skills of FOI as part-task training or as whole-task training and to relate the performance levels achieved by the novices to the standard of performance of experienced FOI practitioners. A randomised controlled study. Centre for Clinical Education, University of Copenhagen and the Capital Region of Denmark, between January and April 2013. Twenty-three anaesthesia residents in their first year of training in anaesthesiology with no experience in FOI, and 10 anaesthesia consultants experienced in FOI. The novices to FOI were allocated randomly to receive either part-task or whole-task training of FOI on virtual reality simulators. Procedures were subsequently trained on a manikin and assessed by an experienced anaesthesiologist. The experienced group was assessed in the same manner with no prior simulation-based training. The primary outcome measure was the score of performance on testing FOI skills on a manikin. A positive learning effect was observed in both the part-task training group and the whole-task training group. There was no statistically significant difference in final performance scores of the two novice groups (P = 0.61). Furthermore, both groups of novices were able to improve their skill level significantly by the end of manikin training to levels comparable to the experienced anaesthesiologists. Part-task training did not prove more effective than whole-task training when training novices in FOI skills. FOI is very suitable for simulation-based training and segmentation of the procedure during training is not necessary.
Electric Grid Expansion Planning with High Levels of Variable Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W.; You, Shutang; Shankar, Mallikarjun
2016-02-01
Renewables make up a growing proportion of generation capacity in U.S. power grids. As their randomness has increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the US Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly after considering the diversity of wind and load across EI regions. The improved expansion plan that combines generation and transmission will aid system planners and policy makers to maximize the social welfare. This study shows that modeling load and wind variations and diversities across balancing regions produces significantly different expansion results compared with former studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios will slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios reveals that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power over a long distance in the EI system. Incorporating more details of renewables in expansion planning will inevitably increase the computational burden. Therefore, high performance computing (HPC) techniques are urgently needed for power system operation and planning optimization. As a scoping study task, this project tested some preliminary parallel computation techniques, such as breaking down the simulation task into several sub-tasks based on chronology splitting or sample splitting, and then assigning these sub-tasks to different cores. Testing results show significant time reduction when a simulation task is split into several sub-tasks for parallel execution.
NASA Technical Reports Server (NTRS)
1975-01-01
A program was conducted which included the design of a set of simplified simulation tasks, design of apparatus and breadboard TV equipment for task performance, and the implementation of a number of simulation tests. Performance measurements were made under controlled conditions and the results analyzed to permit evaluation of the relative merits (effectiveness) of various TV systems. Burden factors were subsequently generated for each TV system to permit tradeoff evaluation of system characteristics against performance. For the general remote operation mission, the 2-view system is recommended. That system was characterized and the corresponding equipment specifications were generated.
Simulation-optimization model for production planning in the blood supply chain.
Osorio, Andres F; Brailsford, Sally C; Smith, Honora K; Forero-Matiz, Sonia P; Camacho-Rodríguez, Bernardo A
2017-12-01
Production planning in the blood supply chain is a challenging task. Many complex factors such as uncertain supply and demand, blood group proportions, shelf life constraints and different collection and production methods have to be taken into account, and thus advanced methodologies are required for decision making. This paper presents an integrated simulation-optimization model to support both strategic and operational decisions in production planning. Discrete-event simulation is used to represent the flows through the supply chain, incorporating collection, production, storing and distribution. On the other hand, an integer linear optimization model running over a rolling planning horizon is used to support daily decisions, such as the required number of donors, collection methods and production planning. This approach is evaluated using real data from a blood center in Colombia. The results show that, using the proposed model, key indicators such as shortages, outdated units, donors required and cost are improved.
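A minimal sketch of one step of such a rolling-horizon plan as an integer linear program (via the PuLP library); the five-day horizon, demand forecast, capacity, and single-product simplification (no blood groups or shelf-life) are illustrative assumptions, not the paper's model:

```python
import pulp

# One rolling-horizon step: choose whole-blood donors per day to meet
# forecast demand while respecting a daily collection capacity.
days = range(5)
demand = [40, 55, 35, 60, 45]          # forecast units per day (assumed)
capacity = 70                          # max donors per day (assumed)
yield_per_donor = 1                    # units per donation (assumed)

prob = pulp.LpProblem("collection_plan", pulp.LpMinimize)
donors = pulp.LpVariable.dicts("donors", days, 0, capacity, cat="Integer")
stock = pulp.LpVariable.dicts("stock", days, 0)   # end-of-day inventory

prob += pulp.lpSum(donors[d] for d in days)        # minimize collections
for d in days:
    prev = stock[d - 1] if d > 0 else 0
    # inventory balance: carryover + collections - demand = new stock
    prob += prev + yield_per_donor * donors[d] - demand[d] == stock[d]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(donors[d].value()) for d in days])
```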
Aircraft Flight Modeling During the Optimization of Gas Turbine Engine Working Process
NASA Astrophysics Data System (ADS)
Tkachenko, A. Yu; Kuz'michev, V. S.; Krupenich, I. N.
2018-01-01
The article describes a method for simulating the flight of an aircraft along a predetermined path, establishing a functional connection between the parameters of the working process of a gas turbine engine and the efficiency criteria of the aircraft. This connection is necessary for solving the optimization tasks of the conceptual design stage of the engine according to the systems approach. Engine thrust level, in turn, influences the operation of the aircraft, so accurate simulation of aircraft behavior during flight is necessary for obtaining a correct solution. The described mathematical model of aircraft flight provides the functional connection between the airframe characteristics, the working process of the gas turbine engines (propulsion system), ambient and flight conditions, and flight profile features. The model provides accurate results of flight simulation and the resulting aircraft efficiency criteria required for optimization of the working process and control function of a gas turbine engine.
Procedural virtual reality simulation in minimally invasive surgery.
Våpenstad, Cecilie; Buzink, Sonja N
2013-02-01
Simulation of procedural tasks has the potential to bridge the gap between basic skills training outside the operating room (OR) and performance of complex surgical tasks in the OR. This paper provides an overview of procedural virtual reality (VR) simulation currently available on the market and presented in scientific literature for laparoscopy (LS), flexible gastrointestinal endoscopy (FGE), and endovascular surgery (EVS). An online survey was sent to companies and research groups selling or developing procedural VR simulators, and a systematic search was done for scientific publications presenting or applying VR simulators to train or assess procedural skills in the PUBMED and SCOPUS databases. The results of five simulator companies were included in the survey. In the literature review, 116 articles were analyzed (45 on LS, 43 on FGE, 28 on EVS), presenting a total of 23 simulator systems. The companies stated that they altogether offer 78 procedural tasks (33 for LS, 12 for FGE, 33 for EVS), of which 17 also were found in the literature review. Although study type and outcomes used vary between the three different fields, approximately 90% of the studies presented in the retrieved publications for LS found convincing evidence to confirm the validity or added value of procedural VR simulation. This was the case in approximately 75% for FGE and EVS. Procedural training using VR simulators has been found to improve clinical performance. There is nevertheless a large number of simulated procedural tasks that have not been validated. Future research should focus on the optimal use of procedural simulators in the most effective training setups and further investigate the benefits of procedural VR simulation to improve clinical outcome.
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storm has serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor, which may impact the feasibility of parallelization. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically: 1) In order to get optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks. However, its efficiency decreases significantly as the subdomain number and computing node number increase. 2) To compensate for the performance decrease on large-scale tasks, a K-Means clustering based algorithm is introduced. Instead of seeking exact optima, this method can obtain relatively good feasible solutions within acceptable time. However, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
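A minimal sketch of the K-means-style geographic allocation, clustering subdomain centers into one group per computing node; the grid size and node count are illustrative, and, as the abstract notes, such clustering keeps neighbors together but does not by itself guarantee balanced loads:

```python
import numpy as np
from sklearn.cluster import KMeans

# Subdomain centers on a lon-lat grid; one cluster per computing node.
# Geographic clustering keeps adjacent subdomains on the same node,
# which tends to reduce inter-node halo-exchange communication.
lon, lat = np.meshgrid(np.arange(0, 40, 2), np.arange(0, 20, 2))
subdomains = np.column_stack([lon.ravel(), lat.ravel()])

n_nodes = 8
labels = KMeans(n_clusters=n_nodes, n_init=10,
                random_state=0).fit_predict(subdomains)
print("tasks per node:", np.bincount(labels))   # inspect the load balance
```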
Dubrowski, Adam; Alani, Sabrina; Bankovic, Tina; Crowe, Andrea; Pollard, Megan
2015-11-02
Simulation is an important training tool used in a variety of influential fields. However, development of simulation scenarios - the key component of simulation - occurs in isolation; sharing of scenarios is almost non-existent. This can make simulation a costly undertaking in terms of resources and time, with possible redundancy of effort. To alleviate these issues, the goal is to strive for an open community of practice (CoP) surrounding simulation. To facilitate this goal, this report describes a set of guidelines for writing technical reports about simulation use for educating health professionals. Using an accepted set of guidelines will allow for homogeneity when building simulation scenarios and facilitate open sharing among simulation users. In addition to optimizing simulation efforts in institutions that currently use simulation as an educational tool, the development of such a repository may have direct implications for developing countries, where simulation is only starting to be used systematically. Our project facilitates equivalent and global access to information, knowledge, and highest-caliber education - in this context, simulation - collectively, the building blocks of optimal healthcare.
Writing Technical Reports for Simulation in Education for Health Professionals: Suggested Guidelines
Alani, Sabrina; Bankovic, Tina; Crowe, Andrea; Pollard, Megan
2015-01-01
Simulation is an important training tool used in a variety of influential fields. However, development of simulation scenarios - the key component of simulation - occurs in isolation; sharing of scenarios is almost non-existent. This can make simulation a costly undertaking in terms of resources and time, with possible redundancy of effort. To alleviate these issues, the goal is to strive for an open community of practice (CoP) surrounding simulation. To facilitate this goal, this report describes a set of guidelines for writing technical reports about simulation use for educating health professionals. Using an accepted set of guidelines will allow for homogeneity when building simulation scenarios and facilitate open sharing among simulation users. In addition to optimizing simulation efforts in institutions that currently use simulation as an educational tool, the development of such a repository may have direct implications for developing countries, where simulation is only starting to be used systematically. Our project facilitates equivalent and global access to information, knowledge, and highest-caliber education - in this context, simulation - collectively, the building blocks of optimal healthcare. PMID:26677421
Data-Adaptable Modeling and Optimization for Runtime Adaptable Systems
2016-06-08
execution scenarios e. Enables model-guided optimization algorithms that outperform state-of-the-art f. Understands the overhead of system...the Data-Adaptable System Model (DASM), that facilitates design by enabling the designer to: 1) specify both an application's task flow as well as...systems. The MILAN [3] framework specializes in the design, simulation, and synthesis of System on Chip (SoC) applications using model-based techniques
Experimental task-based optimization of a four-camera variable-pinhole small-animal SPECT system
NASA Astrophysics Data System (ADS)
Hesterman, Jacob Y.; Kupinski, Matthew A.; Furenlid, Lars R.; Wilson, Donald W.
2005-04-01
We have previously utilized lumpy object models and simulated imaging systems in conjunction with the ideal observer to compute figures of merit for hardware optimization. In this paper, we describe the development of methods and phantoms necessary to validate or experimentally carry out these optimizations. Our study was conducted on a four-camera small-animal SPECT system that employs interchangeable pinhole plates to operate under a variety of pinhole configurations and magnifications (representing optimizable system parameters). We developed a small-animal phantom capable of producing random backgrounds for each image sequence. The task chosen for the study was the detection of a 2 mm diameter sphere within the phantom-generated random background. A total of 138 projection images were used, half of which included the signal. As our observer, we employed the channelized Hotelling observer (CHO) with Laguerre-Gauss channels. The signal-to-noise ratio (SNR) of this observer was used to compare different system configurations. Results indicate agreement between experimental and simulated data, with higher detectability found for multiple-camera, multiple-pinhole, and high-magnification systems, although it was found that mixtures of magnifications often outperform systems employing a single magnification. This work will serve as a basis for future studies pertaining to system hardware optimization.
Optimized in vivo detection of dopamine release using 18F-fallypride PET.
Ceccarini, Jenny; Vrieze, Elske; Koole, Michel; Muylle, Tom; Bormans, Guy; Claes, Stephan; Van Laere, Koen
2012-10-01
The high-affinity D(2/3) PET radioligand (18)F-fallypride offers the possibility of measuring both striatal and extrastriatal dopamine release during activation paradigms. When a single (18)F-fallypride scanning protocol is used, task timing is critical to the ability to explore both striatal and extrastriatal dopamine release simultaneously. We evaluated the sensitivity and optimal timing of task administration for a single (18)F-fallypride PET protocol and the linearized simplified reference region kinetic model in detecting both striatal and extrastriatal reward-induced dopamine release, using human and simulation studies. Ten healthy volunteers underwent a single-bolus (18)F-fallypride PET protocol. A reward responsiveness learning task was initiated at 100 min after injection. PET data were analyzed using the linearized simplified reference region model, which accounts for time-dependent changes in (18)F-fallypride displacement. Voxel-based statistical maps, reflecting task-induced D(2/3) ligand displacement, and volume-of-interest-based analysis were performed to localize areas with increased ligand displacement after task initiation, thought to be proportional to changes in endogenous dopamine release (γ parameter). Simulated time-activity curves for baseline and hypothetical dopamine release functions (different peak heights of dopamine and task timings) were generated using the enhanced receptor-binding kinetic model to investigate γ as a function of these parameters. The reward task induced increased ligand displacement in extrastriatal regions of the reward circuit, including the medial orbitofrontal cortex, ventromedial prefrontal cortex, and dorsal anterior cingulate cortex. For task timing of 100 min, ligand displacement was found for the striatum only when peak height of dopamine was greater than 240 nM, whereas for frontal regions, γ was always positive for all task timings and peak heights of dopamine. Simulation results for a peak height of dopamine of 200 nM showed that an effect of striatal ligand displacement could be detected only when task timing was greater than 120 min. The prefrontal and anterior cingulate cortices are involved in reward responsiveness that can be measured using (18)F-fallypride PET in a single scanning session. To measure both striatal and extrastriatal dopamine release, the height of dopamine released and task timing need to be considered in designing activation studies depending on regional D(2/3) density.
2006-10-01
The objective was to construct a bridge between existing and future microscopic simulation codes (kMC, MD, MC, BD, LB etc.) and traditional, continuum...kinetic Monte Carlo, kMC, equilibrium MC, Lattice-Boltzmann, LB, Brownian Dynamics, BD, or general agent-based, AB) simulators. It also, fortuitously...cond-mat/0310460 at arXiv.org. 27. Coarse Projective kMC Integration: Forward/Reverse Initial and Boundary Value Problems", R. Rico-Martinez, C. W
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks that are non-preemptive and aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
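As a toy version of the optimization described above, the sketch below chooses per-task processor speeds to minimize dynamic energy (which grows roughly with the square of speed per cycle of work) subject to a hard completion deadline. It is a generic convex program with invented task data, solved by an off-the-shelf routine rather than the paper's decomposition algorithm.

```python
# Toy speed-scaling problem: minimize dynamic energy (~ speed^2 per cycle)
# subject to finishing all tasks by a hard deadline. Task data are invented.
import numpy as np
from scipy.optimize import minimize

cycles = np.array([2.0, 1.0, 3.0])   # work per task (arbitrary units)
deadline = 10.0                      # hard real-time constraint

energy = lambda s: np.sum(cycles * s**2)          # dynamic power grows ~ s^2
on_time = lambda s: deadline - np.sum(cycles / s) # >= 0 means deadline met

res = minimize(energy, x0=np.ones_like(cycles), method='SLSQP',
               bounds=[(1e-3, None)] * len(cycles),
               constraints=[{'type': 'ineq', 'fun': on_time}])
# For a single shared deadline the optimum is one common speed for all tasks.
print(res.x, energy(res.x))
```

The interesting cases in the paper are exactly those where this static picture breaks down: tasks arrive at uncertain times, so the speed schedule must be re-optimized online.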
The impact of crosstalk on three-dimensional laparoscopic performance and workload.
Sakata, Shinichiro; Grove, Philip M; Watson, Marcus O; Stevenson, Andrew R L
2017-10-01
This is the first study to explore the effects of crosstalk from 3D laparoscopic displays on technical performance and workload. We studied crosstalk at magnitudes that may be tolerated during laparoscopic surgery. Participants were 36 volunteer doctors who, to minimize floor effects, had completed their surgery rotations and a laparoscopic suturing course for surgical trainees. We used a counterbalanced, within-subjects design in which participants were randomly assigned to complete laparoscopic tasks in one of six unique testing sequences. In a simulation laboratory, participants completed laparoscopic 'navigation in space' and suturing tasks in three viewing conditions: 2D, 3D without ghosting, and 3D with ghosting. Participants calibrated their exposure to crosstalk as the maximum level of ghosting that they could tolerate without discomfort. The Randot® Stereotest was used to verify stereoacuity. The performance metric was time to completion, and the NASA TLX was used to measure workload. Normal threshold stereoacuity (40-20 seconds of arc) was verified in all participants. Comparing optimal 3D with 2D viewing conditions, mean performance times were 2.8 and 1.6 times faster in the laparoscopic navigation-in-space and suturing tasks, respectively (p < .001). Comparing optimal 3D with suboptimal 3D viewing conditions, mean performance times were 2.9 times faster in both tasks (p < .001). Mean workload in 2D was 1.5 and 1.3 times greater than in optimal 3D viewing for the navigation-in-space and suturing tasks, respectively (p < .001). Mean workload associated with suboptimal 3D was 1.3 times greater than with optimal 3D in both laparoscopic tasks (p < .001). There was no significant relationship between the magnitude of the ghosting score, laparoscopic performance, and workload. Our findings highlight the advantages of 3D displays when used optimally, and their shortcomings when used sub-optimally, on both laparoscopic performance and workload.
Passive motion paradigm: an alternative to optimal control.
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the "degrees of freedom (DoFs) problem," the common core of production, observation, reasoning, and learning of "actions." OCT, directly derived from engineering design techniques for control systems, quantifies task goals as "cost functions" and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, "softer" approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that "animates" the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints "at runtime," hence solving the "DoFs problem" without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequences, understanding, and meaning of "potential actions." In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures.
Composing problem solvers for simulation experimentation: a case study on steady state estimation.
Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M
2014-01-01
Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
An improved simulation based biomechanical model to estimate static muscle loadings
NASA Technical Reports Server (NTRS)
Rajulu, Sudhakar L.; Marras, William S.; Woolford, Barbara
1991-01-01
The objectives of this study are to show that the characteristics of an intact muscle differ from those of an isolated muscle and to describe a simulation-based model. This model, unlike optimization-based models, accounts for the redundancy in the musculoskeletal system when predicting the forces generated within a muscle. The results of this study show that the loading of the primary muscle is increased by the presence of other muscle activities. Hence, previous models based on optimization techniques may underestimate the severity of the muscle and joint loadings that occur during manual material handling tasks.
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al. [1]. In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. The fully simulated framework utilizes new learning-based stochastic object models (SOMs) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOMs, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed, and the area under the TOC curve (AUTOC) can be employed as a figure of merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation therapy task-based IQ measures leads to different imaging parameters than those obtained by use of physically based measures.
ATTDES: An Expert System for Satellite Attitude Determination and Control. 2
NASA Technical Reports Server (NTRS)
Mackison, Donald L.; Gifford, Kevin
1996-01-01
The design, analysis, and flight operations of satellite attitude determination and attitude control systems require extensive mathematical formulations, optimization studies, and computer simulation. This is best done by an analyst with extensive education and experience. The development of programs such as ATTDES permits the use of advanced techniques by those with less experience. Typical tasks include mission analysis to select stabilization and damping schemes, attitude determination sensors and algorithms, and control system designs to meet program requirements. ATTDES is a system that supports all of these activities, including high fidelity orbit environment models that can be used for preliminary analysis, parameter selection, stabilization schemes, the development of estimators, covariance analyses, and optimization, and it can support ongoing orbit activities. The modification of existing simulations to model new configurations for these purposes can be an expensive, time consuming activity that becomes a pacing item in the development and operation of such new systems. The use of an integrated tool such as ATTDES significantly reduces the effort and time required for these tasks.
Optimizing Disaster Relief: Real-Time Operational and Tactical Decision Support
1993-01-01
efficiencies in completing the tasks. Allocations recognize task priorities and the logistical effects of geographic proximity. In addition...as if they are collocated. Arcs connect local pairs of zones to represent feasible direct point-to-point transportation and bear costs for...data to the desired level of aggregation. We have tested ARES manually and by replacing the decision maker with the decision simulator which
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.
Rong, Xing; Frey, Eric C
2013-08-01
Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different than typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
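To make the two-parameter search concrete, here is a schematic grid search that scores (hole diameter, septal thickness) pairs with a stand-in figure of merit trading off noise, blur, and septal penetration. The rmse() model and all constants are invented for illustration; the study's actual FOM is computed from validated Monte Carlo-based projections with inverse-mass weighting.

```python
# Schematic two-parameter collimator search: score each (hole diameter,
# septal thickness) pair with a stand-in figure of merit. rmse() and all
# constants are invented; they only mimic the noise/blur/penetration trade-off.
import itertools
import numpy as np

def rmse(diam_mm, septa_mm):
    noise = 1.0 / diam_mm          # wider holes -> more counts, less noise
    blur = 0.3 * diam_mm           # ... but worse geometric resolution
    penetration = 0.8 / septa_mm   # thinner septa -> more penetration bias
    return np.sqrt(noise**2 + blur**2 + penetration**2)

grid = itertools.product(np.linspace(2.0, 5.0, 16),   # hole diameter [mm]
                         np.linspace(0.8, 2.0, 13))   # septal thickness [mm]
best = min(grid, key=lambda p: rmse(*p))
print('best (diameter, septa) [mm]:', best)
```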
Cherry, Kendra M.; Lenze, Eric J.
2014-01-01
Neurological rehabilitation involving motor training has resulted in clinically meaningful improvements in function but is unable to eliminate many of the impairments associated with neurological injury. Thus there is a growing need for interventions that facilitate motor learning during rehabilitation therapy, to optimize recovery. d-Cycloserine (DCS), a partial N-methyl-d-aspartate (NMDA) receptor agonist that enhances neurotransmission throughout the central nervous system (Ressler KJ, Rothbaum BO, Tannenbaum L, Anderson P, Graap K, Zimand E, Hodges L, Davis M. Arch Gen Psychiatry 61: 1136–1144, 2004), has been shown to facilitate declarative and emotional learning. We therefore tested whether combining DCS with motor training facilitates motor learning after stroke in a series of two experiments. Forty-one healthy adults participated in experiment I, and twenty adults with stroke participated in experiment II of this two-session, double-blind study. Session one consisted of baseline assessment, subject randomization, and oral administration of DCS or placebo (250 mg). Subjects then participated in training on a balancing task, a simulated feeding task, and a cognitive task. Subjects returned 1–3 days later for posttest assessment. We found that all subjects had improved performance from pretest to posttest on the balancing task, the simulated feeding task, and the cognitive task. Subjects who were given DCS before motor training, however, did not show enhanced learning on the balancing task, the simulated feeding task, or the associative recognition task compared with subjects given placebo. Moreover, training on the balancing task did not generalize to a similar, untrained balance task. Our findings suggest that DCS does not enhance motor learning or motor skill generalization in neurologically intact adults or in adults with stroke. PMID:24671538
Simulative design and process optimization of the two-stage stretch-blow molding process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-22
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Simulative design and process optimization of the two-stage stretch-blow molding process
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Rasche, S.; Windeck, C.
2015-05-01
The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.
Adaptive effort investment in cognitive and physical tasks: a neurocomputational model
Verguts, Tom; Vassena, Eliana; Silvetti, Massimo
2015-01-01
Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978
Kinematically Optimal Robust Control of Redundant Manipulators
NASA Astrophysics Data System (ADS)
Galicki, M.
2017-12-01
This work deals with the problem of robust optimal task space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free robust kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. Numerical simulations carried out for a redundant manipulator of SCARA type, consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers and provide comparisons with other well-known control schemes.
NASA Astrophysics Data System (ADS)
Mansor, S. B.; Pormanafi, S.; Mahmud, A. R. B.; Pirasteh, S.
2012-08-01
In this study, a geospatial model for land use allocation was developed from the view of simulating biological autonomous adaptability to the environment and infrastructural preference. The model was developed based on a multi-agent genetic algorithm and was customized to accommodate the constraints set for the study area, namely resource saving and environmental friendliness. The model was then applied to solve practical multi-objective spatial optimization allocation problems of land use in the core region of the Menderjan Basin in Iran. The first task was to study the dominant crops and the economic suitability evaluation of land. The second task was to determine the fitness function for the genetic algorithm. The third was to optimize the land use map using economic benefits. The results indicated that the proposed model has much better performance for solving complex multi-objective spatial optimization allocation problems and is a promising method for generating land use alternatives for further consideration in spatial decision-making.
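A bare-bones version of the genetic-algorithm loop described above is sketched below: each chromosome assigns a land-use class to every parcel, fitness sums an economic-benefit table, and new plans arise by truncation selection, one-point crossover, and mutation. The benefit matrix and all parameters are synthetic placeholders, not the Menderjan Basin data, and the multi-agent and environmental-constraint aspects are omitted.

```python
# Bare-bones GA for parcel-level land-use allocation. The benefit table and
# parameters are synthetic placeholders, not the study's suitability data.
import numpy as np
rng = np.random.default_rng(0)

N_PARCELS, N_USES, POP, GENS = 50, 4, 60, 200
benefit = rng.uniform(0.0, 1.0, size=(N_PARCELS, N_USES))  # economic suitability

def fitness(plan):
    # Total economic benefit of assigning land use plan[i] to parcel i.
    return benefit[np.arange(N_PARCELS), plan].sum()

pop = rng.integers(0, N_USES, size=(POP, N_PARCELS))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]        # truncation selection
    cuts = rng.integers(1, N_PARCELS, size=POP // 2)
    kids = np.array([np.concatenate([a[:c], b[c:]])      # one-point crossover
                     for a, b, c in zip(parents, np.roll(parents, 1, 0), cuts)])
    mutate = rng.random(kids.shape) < 0.01               # random mutation
    kids[mutate] = rng.integers(0, N_USES, size=mutate.sum())
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(p) for p in pop])]
```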
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
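The dual-update idea behind learn-and-adapt can be miniaturized as follows: a Lagrange multiplier for a capacity constraint is tracked with a stochastic-approximation step while the primal allocation is solved in closed form each slot. The utility model and constants are synthetic; this shows the structure of the updates, not the paper's order-optimal training procedure.

```python
# Miniature learn-and-adapt loop: stochastic dual (multiplier) updates for a
# capacity-constrained allocation. Utility model and constants are synthetic.
import numpy as np
rng = np.random.default_rng(1)

lam, step, capacity = 0.0, 0.05, 1.0   # multiplier, step size, capacity
for t in range(2000):
    price = rng.uniform(1.0, 2.0)      # random per-slot utility coefficient
    # Primal step: x maximizes price*log(1+x) - lam*x (closed form).
    x = max(price / max(lam, 1e-6) - 1.0, 0.0)
    # Dual stochastic-approximation step, projected to stay nonnegative;
    # lam behaves like a scaled queue length, which links to queue stability.
    lam = max(lam + step * (x - capacity), 0.0)
print('learned multiplier:', lam)
```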
Task-based lens design with application to digital mammography
NASA Astrophysics Data System (ADS)
Chen, Liying; Barrett, Harrison H.
2005-01-01
Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians. The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The signal-to-noise ratio (SNR) of the channelized Hotelling observer is used to quantify the detectability of the simulated lesion (signal) on the simulated mammographic background. Plots of channelized Hotelling SNR versus signal location for various lens apertures, working distances, and focusing planes are presented. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.
Task-based lens design, with application to digital mammography
NASA Astrophysics Data System (ADS)
Chen, Liying
Recent advances in model observers that predict human perceptual performance now make it possible to optimize medical imaging systems for human task performance. We illustrate the procedure by considering the design of a lens for use in an optically coupled digital mammography system. The channelized Hotelling observer is used to model human performance, and the channels chosen are differences of Gaussians (DOGs). The task performed by the model observer is detection of a lesion at a random but known location in a clustered lumpy background mimicking breast tissue. The entire system is simulated with a Monte Carlo application according to physics principles, and the main system component under study is the imaging lens that couples a fluorescent screen to a CCD detector. The SNR of the channelized Hotelling observer is used to quantify the detectability of the simulated lesion (signal) upon the simulated mammographic background. In this work, plots of channelized Hotelling SNR vs. signal location for various lens apertures, working distances, and focusing planes are shown. These plots thus illustrate the trade-off between coupling efficiency and blur in a task-based manner. In this way, the channelized Hotelling SNR is used as a merit function for lens design.
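Both lens-design studies above reduce to the same computation: project images onto a small set of channels, estimate the channel-space mean difference and covariance, and form the Hotelling SNR. The sketch below assumes difference-of-Gaussian channels and synthetic white-noise images; channel widths and amplitudes are placeholders rather than the published settings.

```python
# Minimal channelized Hotelling observer with difference-of-Gaussian (DOG)
# channels; images and channel parameters are synthetic stand-ins.
import numpy as np

def dog_channels(size, sigmas=(1.0, 2.0, 4.0), ratio=1.67):
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    chans = []
    for s in sigmas:
        g1 = np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
        g2 = np.exp(-r2 / (2 * (ratio * s)**2)) / (2 * np.pi * (ratio * s)**2)
        chans.append((g1 - g2).ravel())
    return np.array(chans)                                 # (n_channels, n_pixels)

def cho_snr(signal_imgs, noise_imgs):
    U = dog_channels(signal_imgs.shape[-1])
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T   # channel outputs
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T
    dmean = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))                # intra-class covariance
    w = np.linalg.solve(S, dmean)                          # Hotelling template
    return np.sqrt(dmean @ w)

rng = np.random.default_rng(2)
blob = 5.0 * dog_channels(64)[0].reshape(64, 64)           # weak known signal
noise_imgs = rng.normal(size=(200, 64, 64))
signal_imgs = rng.normal(size=(200, 64, 64)) + blob
print('CHO SNR:', cho_snr(signal_imgs, noise_imgs))
```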
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks is to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of preference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND I computer program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
Modelling and Simulation in the Design Process of Armored Vehicles
2003-03-01
trackway conditions is a demanding optimization task. Basically, a high level of ride comfort requires soft suspension tuning, whereas driving safety relies...The maximum off-road speed is generally limited by traction, input torque, driving safety and ride comfort. When obstacles are to be negotiated, the...wheel travel was defined during the mobility simulation runs. Figure 14: Ramp 1.5 m at 40 kph; virtual and physical prototype. Driving safety and ride
MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avanaki, A.
2016-06-15
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in the observer models design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare; P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH Research grants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graff, C.
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in the observer models design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare; P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH Research grants.
Simulation and optimization of faceted structure for illumination
NASA Astrophysics Data System (ADS)
Liu, Lihong; Engel, Thierry; Flury, Manuel
2016-04-01
The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A new photometric approach is adopted since the size of each facet is large in comparison with the wavelength. A reflective configuration is employed to avoid material dispersion problems. The irradiance distribution of the reflected beam is determined by the angular position of each facet. In order to obtain a specific irradiance distribution, the angular position of each facet is optimized using Zemax OpticStudio 15 software. A detector is placed in the direction perpendicular to the reflected beam. According to the incoherent irradiance distribution on the detector, a merit function is defined to drive the optimization process. The two-dimensional angular position of each facet is defined as a variable which is optimized within a specified range. Because the merit function needs to be updated, a macro program updates this function within Zemax. In order to reduce the complexity of manual operation, an automatic optimization approach is established: Zemax performs the optimization task and sends the irradiance data back to Matlab for further analysis. Several simulation results are given to verify the optimization method. The simulation results are compared to those obtained with the LightTools software in order to verify our optimization method.
Muscle function in glenohumeral joint stability during lifting task.
Blache, Yoann; Begon, Mickaël; Michaud, Benjamin; Desmoulins, Landry; Allard, Paul; Dal Maso, Fabien
2017-01-01
Ensuring glenohumeral stability during repetitive lifting tasks is a key factor in reducing the risk of shoulder injuries. Nevertheless, the literature reveals a gap concerning the assessment of the muscles that ensure glenohumeral stability during specific lifting tasks. Therefore, the purpose of this study was to assess the stabilization function of shoulder muscles during a lifting task. Kinematics and muscle electromyograms (n = 9) were recorded from 13 healthy adults during a bi-manual lifting task performed from hip to shoulder level. A generic upper-limb OpenSim model was implemented to simulate glenohumeral stability and instability by performing static optimizations with and without glenohumeral stability constraints. This procedure enabled computation of the level of shoulder muscle activity and forces in the two conditions. Without the stability constraint, the simulated movement was unstable during 74%±16% of the time. The force of the supraspinatus was significantly increased, by 107% (p<0.002), when the glenohumeral stability constraint was implemented. The increased supraspinatus force led to greater compressive force (p<0.001) and smaller shear force (p<0.001), which contributed to improved glenohumeral stability. It was concluded that the supraspinatus may be the main contributor to glenohumeral stability during lifting tasks.
Muscle function in glenohumeral joint stability during lifting task
Begon, Mickaël; Michaud, Benjamin; Desmoulins, Landry; Allard, Paul
2017-01-01
Ensuring glenohumeral stability during repetitive lifting tasks is a key factor in reducing the risk of shoulder injuries. Nevertheless, the literature reveals a gap concerning the assessment of the muscles that ensure glenohumeral stability during specific lifting tasks. Therefore, the purpose of this study was to assess the stabilization function of shoulder muscles during a lifting task. Kinematics and muscle electromyograms (n = 9) were recorded from 13 healthy adults during a bi-manual lifting task performed from hip to shoulder level. A generic upper-limb OpenSim model was implemented to simulate glenohumeral stability and instability by performing static optimizations with and without glenohumeral stability constraints. This procedure enabled computation of the level of shoulder muscle activity and forces in the two conditions. Without the stability constraint, the simulated movement was unstable during 74%±16% of the time. The force of the supraspinatus was significantly increased, by 107% (p<0.002), when the glenohumeral stability constraint was implemented. The increased supraspinatus force led to greater compressive force (p<0.001) and smaller shear force (p<0.001), which contributed to improved glenohumeral stability. It was concluded that the supraspinatus may be the main contributor to glenohumeral stability during lifting tasks. PMID:29244838
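Conceptually, the static optimization with a stability constraint amounts to: minimize summed squared activations subject to reproducing the required joint moment, while keeping the glenohumeral shear-to-compression ratio inside a stability cone. The sketch below expresses that structure with invented moment arms and force directions; it is not the OpenSim model used in the study.

```python
# Conceptual static optimization with a stability constraint: reproduce a
# required joint moment with minimal summed squared activations while keeping
# glenohumeral shear within half of compression (an illustrative "cone").
# Moment arms and force directions are invented, not the study's OpenSim model.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.03, 0.02, 0.025])       # moment arms [m]
fmax = np.array([600.0, 900.0, 700.0])  # max isometric forces [N]
shear = np.array([0.2, 0.6, -0.1])      # shear on glenoid per unit force
compress = np.array([0.9, 0.5, 0.8])    # compression per unit force
tau_req = 25.0                          # required joint moment [N m]

force = lambda a: a * fmax
cons = [{'type': 'eq', 'fun': lambda a: r @ force(a) - tau_req},
        # |shear| <= 0.5 * compression, written as two smooth inequalities:
        {'type': 'ineq', 'fun': lambda a: 0.5 * compress @ force(a) - shear @ force(a)},
        {'type': 'ineq', 'fun': lambda a: 0.5 * compress @ force(a) + shear @ force(a)}]
res = minimize(lambda a: np.sum(a**2), x0=np.full(3, 0.3), method='SLSQP',
               bounds=[(0.0, 1.0)] * 3, constraints=cons)
print('activations:', res.x)
```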
Task-Driven Orbit Design and Implementation on a Robotic C-Arm System for Cone-Beam CT.
Ouadah, S; Jacobson, M; Stayman, J W; Ehtiati, T; Weiss, C; Siewerdsen, J H
2017-03-01
This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
Task-driven orbit design and implementation on a robotic C-arm system for cone-beam CT
NASA Astrophysics Data System (ADS)
Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.
2017-03-01
Purpose: This work applies task-driven optimization to the design of non-circular orbits that maximize imaging performance for a particular imaging task. First implementation of task-driven imaging on a clinical robotic C-arm system is demonstrated, and a framework for orbit calculation is described and evaluated. Methods: We implemented a task-driven imaging framework to optimize orbit parameters that maximize detectability index d'. This framework utilizes a specified Fourier domain task function and an analytical model for system spatial resolution and noise. Two experiments were conducted to test the framework. First, a simple task was considered consisting of frequencies lying entirely on the fz-axis (e.g., discrimination of structures oriented parallel to the central axial plane), and a "circle + arc" orbit was incorporated into the framework as a means to improve sampling of these frequencies, and thereby increase task-based detectability. The orbit was implemented on a robotic C-arm (Artis Zeego, Siemens Healthcare). A second task considered visualization of a cochlear implant simulated within a head phantom, with spatial frequency response emphasizing high-frequency content in the (fy, fz) plane of the cochlea. An optimal orbit was computed using the task-driven framework, and the resulting image was compared to that for a circular orbit. Results: For the fz-axis task, the circle + arc orbit was shown to increase d' by a factor of 1.20, with an improvement of 0.71 mm in a 3D edge-spread measurement for edges located far from the central plane and a decrease in streak artifacts compared to a circular orbit. For the cochlear implant task, the resulting orbit favored complementary views of high tilt angles in a 360° orbit, and d' was increased by a factor of 1.83. Conclusions: This work shows that a prospective definition of imaging task can be used to optimize source-detector orbit and improve imaging performance. The method was implemented for execution of non-circular, task-driven orbits on a clinical robotic C-arm system. The framework is sufficiently general to include both acquisition parameters (e.g., orbit, kV, and mA selection) and reconstruction parameters (e.g., a spatially varying regularizer).
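A one-dimensional stand-in for the detectability computation at the core of this framework: d'² integrates the task function, filtered by the system's transfer function, against the noise-power spectrum. All spectra below are placeholders; in the actual framework they are three-dimensional and derived from the analytical resolution and noise models.

```python
# 1-D stand-in for the detectability-index computation: d'^2 integrates the
# task function, filtered by the system transfer function, against the
# noise-power spectrum. All spectra are placeholders.
import numpy as np

f = np.linspace(0.01, 2.0, 400)           # spatial frequency [1/mm]
task = np.exp(-(f - 1.2)**2 / 0.05)       # task emphasizing high frequencies
ttf = 1.0 / (1.0 + (f / 1.0)**2)          # task transfer function (resolution)
nps = 0.02 + 0.01 * f                     # noise-power spectrum

d_prime = np.sqrt(np.trapz((task * ttf)**2 / nps, f))
print("d' =", d_prime)
```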
A university teaching simulation facility
NASA Technical Reports Server (NTRS)
Stark, Lawrence; Kim, Won-Soo; Tendick, Frank; Tyler, Mitchell; Hannaford, Blake; Barakat, Wissam; Bergengruen, Olaf; Braddi, Louis; Eisenberg, Joseph; Ellis, Stephen
1987-01-01
An experimental telerobotics (TR) simulation is described suitable for studying human operator (HO) performance. Simple manipulator pick-and-place and tracking tasks allowed quantitative comparison of a number of calligraphic display viewing conditions. A number of control modes could be compared in this TR simulation, including displacement, rate, and acceleratory control using position and force joysticks. A homeomorphic controller turned out to be no better than joysticks; the adaptive properties of the HO can apparently permit quite good control over a variety of controller configurations and control modes. Training by optimal control example seemed helpful in preliminary experiments.
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The primary objective of this report was the production of an optimized design of key elements of the candidate facility. This was accomplished through the following tasks: (1) further developing, optimizing, and describing the functional description of the custom hardware; (2) delineating trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) developing metrics and models for validating the candidate system's performance; (4) conducting a functional simulation of the system design; (5) performing a reliability analysis of the system design; and (6) developing the software specifications, including a user-level high-level programming language, a correspondence between the programming language and the instruction set, and an outline of the operating system requirements.
Use of EPANET solver to manage water distribution in Smart City
NASA Astrophysics Data System (ADS)
Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.
2018-02-01
This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application can perform both single and cyclic simulations with a specified step of changing the values of selected process variables. The architecture of the application is presented. The application supports the selection of the best device-control algorithm using optimization methods. Optimization is possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
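As a rough illustration of the optimization modes listed above (not the authors' application code), the following Python sketch selects pump setpoints with SciPy's SLSQP; run_epanet() is a hypothetical stand-in for a call into the EPANET simulation model, here replaced by a toy surrogate:

    # Sketch: selecting pump-control setpoints by constrained minimization (SLSQP).
    # run_epanet() is a stand-in for an EPANET simulation call; toy surrogate here.
    import numpy as np
    from scipy.optimize import minimize

    def run_epanet(speeds):
        # placeholder: returns (energy_cost, min_pressure) for given pump speeds
        energy = np.sum(speeds ** 3)          # pump power grows roughly as speed^3
        min_pressure = 20.0 * np.min(speeds)  # crude network-pressure surrogate
        return energy, min_pressure

    def objective(speeds):
        energy, _ = run_epanet(speeds)
        return energy

    def pressure_constraint(speeds):
        _, p = run_epanet(speeds)
        return p - 15.0                       # require minimum pressure >= 15 m

    x0 = np.full(3, 1.0)                      # initial relative pump speeds
    res = minimize(objective, x0, method="SLSQP",
                   bounds=[(0.5, 1.2)] * 3,
                   constraints=[{"type": "ineq", "fun": pressure_constraint}])
    print(res.x, res.fun)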
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and their interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphics Processing Units (GPUs) is proposed for a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was parallelized. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partitioning. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time of the non-optimized program decreased by 62% relative to a serial program running on a CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
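The operator-splitting structure that enables this parallelization can be sketched as follows; this is an illustrative NumPy analogue with toy kinetics (not SAN/atrial cell models), in which the per-cell reaction update is the embarrassingly parallel part that maps to one GPU thread per cell:

    # Sketch: operator splitting for a 1-D excitable strand. The per-cell reaction
    # update is independent across cells (the part mapped to GPU threads); the
    # diffusion update couples neighbors. Toy kinetics, not a cardiac cell model.
    import numpy as np

    n, dt, dx, D = 530, 0.01, 0.1, 0.05     # cells, time step, spacing, diffusivity
    v = np.zeros(n)                          # membrane potential (arbitrary units)
    v[:10] = 1.0                             # stimulate one end

    def reaction(v, dt):
        # independent per cell -> one thread per cell on a GPU
        return v + dt * (v * (1.0 - v) * (v - 0.2))

    def diffusion(v, dt):
        lap = np.zeros_like(v)
        lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
        return v + dt * D * lap

    for step in range(1000):                 # Strang-like alternation of half steps
        v = reaction(v, 0.5 * dt)
        v = diffusion(v, dt)
        v = reaction(v, 0.5 * dt)
    print(v.max())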
Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters
NASA Astrophysics Data System (ADS)
Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.
2016-06-01
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution times to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information available about a disaster situation changes dynamically, which in turn leads to dynamic imaging requirements from users. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency-response requirements by producing an imaging plan that acquires rapid and effective information of the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. A dynamic heuristic algorithm embedding a greedy criterion is then designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations on the scenario of the Wenchuan earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
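A minimal sketch of the greedy allocation step is shown below, assuming a hypothetical decomposition of the area target into strips and a small set of satellite passes; the data structures are invented for illustration:

    # Sketch: greedy allocation of decomposed area-target strips to satellite
    # passes, maximizing total reward with one task per pass. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        reward: float
        feasible_passes: set   # pass ids that can observe this strip

    passes = {0, 1, 2, 3}
    tasks = [Task("strip-A", 9.0, {0, 1}),
             Task("strip-B", 7.0, {1}),
             Task("strip-C", 5.0, {1, 2}),
             Task("strip-D", 4.0, {3})]

    plan, free = {}, set(passes)
    for t in sorted(tasks, key=lambda t: t.reward, reverse=True):  # greedy criterion
        usable = t.feasible_passes & free
        if usable:
            p = min(usable)        # earliest feasible pass, for rapid response
            plan[t.name] = p
            free.remove(p)
    print(plan)   # {'strip-A': 0, 'strip-B': 1, 'strip-C': 2, 'strip-D': 3}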
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne
2017-09-01
Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A versatile multi-objective FLUKA optimization using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Vlachoudis, Vasilis; Antoniucci, Guido Arnau; Mathot, Serge; Kozlowska, Wioletta Sandra; Vretenar, Maurizio
2017-09-01
Monte Carlo simulation studies quite often require a multi-phase-space optimization, a complicated task that relies heavily on operator experience and judgment. Examples of such calculations are shielding studies with stringent constraints on cost, residual dose, material properties and available space, or, in the medical field, optimizing the dose delivered to a patient under hadron treatment. The present paper describes our implementation inside flair [1], the advanced user interface of FLUKA [2,3], of a multi-objective Genetic Algorithm to facilitate the search for the optimum solution.
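The heart of such a multi-objective GA is non-dominated (Pareto) selection; the sketch below illustrates it in Python with toy shielding objectives standing in for quantities FLUKA would actually score:

    # Sketch: core of a multi-objective GA -- non-dominated (Pareto) selection
    # over two competing objectives (toy stand-ins for residual dose vs. cost).
    import random

    def objectives(x):                  # x: shield thickness in [0, 5]
        dose = 10.0 * 2.718 ** (-x)     # residual dose falls with thickness
        cost = 3.0 * x                  # material cost rises with thickness
        return dose, cost

    def dominates(a, b):                # a dominates b if no worse in all, better in one
        return all(ai <= bi for ai, bi in zip(a, b)) and a != b

    pop = [random.uniform(0.0, 5.0) for _ in range(40)]
    for _ in range(30):
        scored = [(x, objectives(x)) for x in pop]
        front = [x for x, fx in scored
                 if not any(dominates(fy, fx) for _, fy in scored)]
        # breed the next generation by mutating Pareto-front members
        pop = [min(5.0, max(0.0, random.choice(front) + random.gauss(0.0, 0.3)))
               for _ in range(40)]
    print(sorted(objectives(x) for x in front)[:5])   # sample of the final front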
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm. PMID:28791130
The 14th Annual Conference on Manual Control. [digital simulation of human operator dynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Human operator dynamics during actual manual control, or while monitoring automatic control systems, are examined in the contexts of air-to-air tracking, automobile driving, the operation of undersea vehicles, and remote handling. Optimal control models and the use of mathematical theory to represent human behavior in complex man-machine system tasks are discussed, with emphasis on eye/head tracking and scanning; perception and attention allocation; decision making; and motion simulation and effects.
Adaptive optimal training of animal behavior
NASA Astrophysics Data System (ADS)
Bak, Ji Hyun; Choi, Jung Yoon; Akrami, Athena; Witten, Ilana; Pillow, Jonathan
Neuroscience experiments often require training animals to perform tasks designed to elicit various sensory, cognitive, and motor behaviors. Training typically involves a series of gradual adjustments of stimulus conditions and rewards in order to bring about learning. However, training protocols are usually hand-designed, and often require weeks or months to achieve a desired level of task performance. Here we combine ideas from reinforcement learning and adaptive optimal experimental design to formulate methods for efficient training of animal behavior. Our work addresses two intriguing problems at once: first, it seeks to infer the learning rules underlying an animal's behavioral changes during training; second, it seeks to exploit these rules to select stimuli that will maximize the rate of learning toward a desired objective. We develop and test these methods using data collected from rats during training on a two-interval sensory discrimination task. We show that we can accurately infer the parameters of a learning algorithm that describes how the animal's internal model of the task evolves over the course of training. We also demonstrate by simulation that our method can provide a substantial speedup over standard training methods.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1991-01-01
New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.
Optimal Iterative Task Scheduling for Parallel Simulations.
1991-03-01
[Scanned-report residue; only fragments of the front matter are recoverable, including a reference to Grimaldi, Ralph P., Discrete and Combinatorial Mathematics (Addison-Wesley, 1989) and table-of-contents entries on a problem description, reasons for level-strategy failure, an A* overview, a sample A* run, and an evaluation section.]
Clustering molecular dynamics trajectories for optimizing docking experiments.
De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D; Norberto de Souza, Osmar; Barros, Rodrigo C
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in the virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so that the task becomes feasible. In particular, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments in a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand.
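A minimal sketch of the clustering step is shown below, assuming a placeholder feature matrix for the substrate-binding cavity and using silhouette score as one plausible validity criterion:

    # Sketch: cluster MD snapshots by cavity features, select k via silhouette,
    # then pick one representative snapshot per cluster for docking.
    # Features are random placeholders for the cavity descriptors.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    features = rng.normal(size=(2000, 12))   # one row per snapshot, 12 descriptors

    best_k, best_score = None, -1.0
    for k in range(2, 10):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score = k, score

    km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(features)
    # medoid-like representatives: snapshot closest to each cluster center
    reps = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
            for c in km.cluster_centers_]
    print(best_k, reps)   # chosen k and representative snapshot indices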
Wiener, Scott; Haddock, Peter; Shichman, Steven; Dorin, Ryan
2015-11-01
To define the time needed by urology residents to attain proficiency in computer-aided robotic surgery, in order to aid the refinement of a robotic surgery simulation curriculum. We undertook a retrospective review of robotic skills training data acquired between January 2012 and December 2014 from junior (postgraduate year [PGY] 2-3) and senior (PGY4-5) urology residents using the da Vinci Skills Simulator. We determined the number of training sessions attended and the level of proficiency achieved by junior and senior residents in attempting 11 basic or 6 advanced tasks, respectively. Junior residents successfully completed 9.9 ± 1.8 tasks, with 62.5% completing all 11 basic tasks. The maximal cumulative success rate of junior residents completing basic tasks was 89.8%, achieved within 7.0 ± 1.5 hours of training. Of senior residents, 75% successfully completed all six advanced tasks. Senior residents attended 6.3 ± 3.5 hours of training, during which 5.1 ± 1.6 tasks were completed. The maximal cumulative success rate of senior residents completing advanced tasks was 85.4%. When designing and implementing an effective robotic surgical training curriculum, an allocation of 10 hours of training may be optimal to allow junior and senior residents to achieve an acceptable level of proficiency in basic and advanced robotic surgical skills, respectively. These data help guide the design and scheduling of a resident training curriculum within the time constraints of a resident's workload.
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law requires only local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
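The structure of such a controller can be sketched in a few lines; the repulsive term below is a simple illustrative potential gradient, not the paper's inverse-optimal nonquadratic cost:

    # Sketch: single-integrator consensus with an added repulsion near an obstacle.
    # Topology, positions, and the avoidance term are illustrative.
    import numpy as np

    A = np.array([[0, 1, 0],       # communication topology (adjacency)
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    x = np.array([[0.0, 0.0], [4.0, 3.0], [6.0, -2.0]])   # agent positions
    obs, r = np.array([3.0, 1.0]), 1.0                    # obstacle center, radius

    dt = 0.01
    for _ in range(2000):
        u = np.zeros_like(x)
        for i in range(3):
            for j in range(3):
                u[i] -= A[i, j] * (x[i] - x[j])           # local consensus term
            d = x[i] - obs
            dist = np.linalg.norm(d)
            if dist < 3.0 * r:                            # repulsion in influence zone
                u[i] += 2.0 * d / dist**3
        x = x + dt * u
    print(x)   # agents converge toward a common point while skirting the obstacle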
Acquisition of Skill Proficiency Over Multiple Sessions of a Novel Rover Simulation
NASA Technical Reports Server (NTRS)
Dean, S. L.; DeDios,Y. E.; MacDougall, H. G.; Moore, S. T.; Wood, S. J.
2011-01-01
Following long-duration exploration transits, adaptive changes in sensorimotor function may impair the crew's ability to safely perform manual control tasks such as operating pressurized rovers. Postflight performance will also be influenced by the level of preflight skill proficiency they have attained. The purpose of this study was to characterize the acquisition of skills in a motion-based rover simulation over multiple sessions, and to investigate the effects of varying the simulation scenarios. METHODS: Twenty healthy subjects were tested in 5 sessions, with 1-3 days between sessions. Each session consisted of a serial presentation of 8 discrete tasks to be completed as quickly and accurately as possible. Each task consisted of 1) perspective-taking, using a map that defined a docking target, 2) navigation toward the target around a Martian outpost, and 3) docking a side hatch of the rover to a visually guided target. The simulator utilized a Stewart-type motion base (CKAS, Australia), single-seat cabin with triple scene projection covering 150 deg horizontal by 50 deg vertical, and joystick controller. Subjects were randomly assigned to a control group (tasks identical in the first 4 sessions) or a varied-practice group. The dependent variables for each task included accuracy toward the target and time to completion. RESULTS: The greatest improvements in time to completion occurred during the docking phase. The varied-practice group showed more improvement in perspective-taking accuracy. Perspective-taking accuracy was also affected by the relative orientation of the rover to the docking target. Skill acquisition was correlated with self-ratings of previous gaming experience. DISCUSSION: Varying task selection and difficulty will optimize the preflight acquisition of skills when performing novel operational tasks. Simulation of operational manual control will provide functionally relevant evidence regarding the impact of sensorimotor adaptation on early surface operations and what countermeasures are needed. Learning Objective: The use of a motion-based simulation to investigate decrements in the proficiency to operate pressurized rovers during early surface operations of space exploration missions, along with the acquisition of skill proficiency needed during the preflight phase of the mission.
NASA Technical Reports Server (NTRS)
Chappell, Steven P.; Abercromby, Andrew F.; Gernhardt, Michael L.
2011-01-01
The ultimate success of future human space exploration missions is dependent on the ability to perform extravehicular activity (EVA) tasks effectively, efficiently, and safely, whether those tasks represent a nominal mode of operation or a contingency capability. To optimize EVA systems for the best human performance, it is critical to study the effects of varying key factors such as suit center of gravity (CG), suit mass, and gravity level. During the 2-week NASA Extreme Environment Mission Operations (NEEMO) 14 mission, four crewmembers performed a series of EVA tasks under different simulated EVA suit configurations and used full-scale mockups of a Space Exploration Vehicle (SEV) rover and lander. NEEMO is an underwater spaceflight analog that allows a true mission-like operational environment and uses buoyancy effects and added weight to simulate different gravity levels. Quantitative and qualitative data collected during NEEMO 14, as well as from spacesuit tests in parabolic flight and with overhead suspension, are being used to directly inform ongoing hardware and operations concept development of the SEV, exploration EVA systems, and future EVA suits. OBJECTIVE: To compare human performance across different weight and CG configurations. METHODS: Four subjects were weighed out to simulate reduced gravity and wore either a specially designed rig that allowed adjustment of CG or a PLSS mockup. Subjects completed tasks including level ambulation, incline/decline ambulation, standing from kneeling and prone positions, picking up objects, shoveling, ladder climbing, incapacitated crewmember handling, and small and large payload transfer. Subjective compensation, exertion, task acceptability, and duration data, as well as photo and video, were collected. RESULTS: There appear to be interactions between CG, weight, and task. CGs nearest the subject's natural CG are the most predictable in terms of acceptable performance across tasks. Future research should focus on understanding the interactions between CG, mass, and subject differences.
NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction
NASA Technical Reports Server (NTRS)
Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan
2004-01-01
This project was composed of three subtasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium-optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprising measurements made on several different impellers, an inducer and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was twofold: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses of some wide-flow-range diffuser concepts using Enigma. The code was validated using the consortium-optimized impeller database and then applied to two different concepts for wide-flow diffusers.
Strategic workload management and decision biases in aviation
NASA Technical Reports Server (NTRS)
Raby, Mireille; Wickens, Christopher D.
1994-01-01
Thirty pilots flew three simulated landing approaches under conditions of low, medium, and high workload. Workload conditions were created by varying time pressure and external communications requirements. Our interest was in how the pilots strategically managed or adapted to the increasing workload. We independently assessed the pilot's ranking of the priority of different discrete tasks during the approach and landing. Pilots were found to sacrifice some aspects of primary flight control as workload increased. For discrete tasks, increasing workload increased the amount of time in performing the high priority tasks, decreased the time in performing those of lowest priority, and did not affect duration of performance episodes or optimality of scheduling of tasks of any priority level. Individual differences analysis revealed that high-performing subjects scheduled discrete tasks earlier in the flight and shifted more often between different activities.
Passive Motion Paradigm: An Alternative to Optimal Control
Mohan, Vishwanathan; Morasso, Pietro
2011-01-01
In the last years, optimal control theory (OCT) has emerged as the leading approach for investigating neural control of movement and motor cognition for two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques of control systems quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative “softer” approach passive motion paradigm (PMP) that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion and cost function computation. We argue that the function of such computational machinery is not only restricted to shaping motor output during action execution but also to provide the self with information on the feasibility, consequence, understanding and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, simulation theory of covert actions, mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. Therefore, the paper is at the same time a review of the PMP rationale, as a computational theory, and a perspective presentation of how to develop it for designing better cognitive architectures. PMID:22207846
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The problem of task allocation for multiple robots is to allocate more relative tasks to fewer relative robots so as to minimize the processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the normal artificial fish is extended to a two-dimensional artificial fish, in which each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between an artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the problem of multi-robot task allocation and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
Choi, Younggeun; Gordon, James; Park, Hyeshin; Schweighofer, Nicolas
2011-08-03
Current guidelines for rehabilitation of arm and hand function after stroke recommend that motor training focus on realistic tasks that require reaching and manipulation and engage the patient intensively, actively, and adaptively. Here, we investigated the feasibility of a novel robotic task-practice system, ADAPT, designed in accordance with such guidelines. At each trial, ADAPT selects a functional task according to a training schedule and with difficulty based on previous performance. Once the task is selected, the robot picks up and presents the corresponding tool, simulates the dynamics of the task, and the patient interacts with the tool to perform the task. Five participants with chronic stroke with mild to moderate impairments (> 9 months post-stroke; Fugl-Meyer arm score 49.2 ± 5.6) practiced four functional tasks (selected out of six in a pre-test) with ADAPT for about one and a half hours and 144 trials in a pseudo-random schedule of 3-trial blocks per task. No adverse events occurred, and ADAPT successfully presented the six functional tasks without human intervention for a total of 900 trials. Qualitative analysis of trajectories showed that ADAPT simulated the desired task dynamics adequately, and participants reported good, although not excellent, task fidelity. During training, the adaptive difficulty algorithm progressively increased task difficulty toward an optimal challenge point based on performance; difficulty was then continuously adjusted to keep performance around the challenge point. Furthermore, the time to complete all trained tasks decreased significantly from pretest to the one-hour post-test. Finally, post-training questionnaires demonstrated positive patient acceptance of ADAPT. ADAPT successfully provided adaptive progressive training for multiple functional tasks based on participants' performance. Our encouraging results establish the feasibility of ADAPT; its efficacy will next be tested in a clinical trial.
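One simple way to realize performance-based difficulty adaptation around a challenge point is a windowed success-rate controller, sketched below; the thresholds and step sizes are illustrative, not the published ADAPT algorithm:

    # Sketch: difficulty adaptation toward a challenge point based on a sliding
    # window of recent trial outcomes. All constants are illustrative.
    class ChallengePoint:
        def __init__(self, level=0.2, target=0.7, step=0.05, window=6):
            self.level, self.target = level, target
            self.step, self.window = step, window
            self.outcomes = []

        def update(self, success):
            self.outcomes.append(1.0 if success else 0.0)
            recent = self.outcomes[-self.window:]
            rate = sum(recent) / len(recent)
            if rate > self.target:                 # above target: raise difficulty
                self.level = min(1.0, self.level + self.step)
            elif rate < self.target - 0.2:         # well below target: back off
                self.level = max(0.0, self.level - self.step)
            return self.level

    cp = ChallengePoint()
    for outcome in [True, True, True, False, True, True, False]:
        print(cp.update(outcome))                  # difficulty hovers near the challenge point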
In situ and in-transit analysis of cosmological simulations
Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...
2016-08-24
Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent 'offline' analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own ('in situ'). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other 'sidecar' group, which post-processes it while the simulation continues ('in-transit'). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
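The in-transit partitioning can be illustrated with mpi4py; Nyx itself is C++/BoxLib, so this Python analogue only shows the communicator-splitting pattern, with invented group sizes and data:

    # Sketch: split MPI_COMM_WORLD into a simulation group and a 'sidecar'
    # analysis group, with periodic sends. Run with e.g. mpiexec -n 8.
    from mpi4py import MPI
    import numpy as np

    world = MPI.COMM_WORLD
    n_sidecar = 2
    color = 0 if world.rank < world.size - n_sidecar else 1   # 0=sim, 1=sidecar
    comm = world.Split(color, key=world.rank)                 # disjoint sub-communicators

    if color == 0:                        # simulation group
        data = np.random.rand(1000)
        for step in range(10):
            data = data * 1.01            # stand-in for a simulation step
            if step % 5 == 0 and comm.rank == 0:
                world.send(data, dest=world.size - n_sidecar, tag=step)
    else:                                 # sidecar group: post-process asynchronously
        if world.rank == world.size - n_sidecar:
            for step in (0, 5):
                chunk = world.recv(source=0, tag=step)
                print("analysis: mean =", chunk.mean())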
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and postprocessing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in selected case studies in the fields of technical cybernetics and bioengineering.
Individual differences in strategic flight management and scheduling
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Raby, Mireille
1991-01-01
A group of 30 instrument-rated pilots flew simulator approaches to three airports under low, medium, and high workload conditions. An analysis is conducted of the differences in discrete-task scheduling between the 10 highest- and 10 lowest-performing pilots in the sample; this categorization was based on the mean of various flight-profile measures. The two groups were found to differ from each other only in terms of when specific events were conducted and the optimality of scheduling for certain high-priority tasks. These results are assessed in view of the relative independence of task-management skills from aircraft-control skills.
Honing process optimization algorithms
NASA Astrophysics Data System (ADS)
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed, and such important concepts as the honing-operation optimization task, the optimal structure of honing working cycles, stepped and stepless honing cycles, and the simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality of honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece over a sufficiently wide range and can be used to operate the CNC machine CC743.
Finite Element Based Optimization of Material Parameters for Enhanced Ballistic Protection
NASA Astrophysics Data System (ADS)
Ramezani, Arash; Huber, Daniel; Rothe, Hendrik
2013-06-01
The threat imposed by terrorist attacks is a major hazard for military installations, vehicles and other assets. The large numbers of firearms and projectiles that are available pose serious threats to military forces and even civilian facilities. An important task for international research and development is to avert danger to life and limb. This work evaluates the effect of modern armor with numerical simulations. It also provides a brief overview of ballistic tests in order to offer some basic knowledge of the subject, serving as a basis for the comparison of simulation results. The objective of this work is to develop and improve the modern armor used in the security sector. Numerical simulations should replace expensive ballistic tests and find vulnerabilities of items and structures. By progressively changing the material parameters, the armor is to be optimized. A sensitivity analysis yields information regarding the decisive variables, so vulnerabilities are easily found and subsequently eliminated. To facilitate the simulation, advanced numerical techniques have been employed in the analyses.
Optimizing Dynamical Network Structure for Pinning Control
NASA Astrophysics Data System (ADS)
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-01
Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
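As a small illustration of the driver-placement objective, the sketch below scores a driver set by the smallest eigenvalue of the Laplacian augmented with feedback gains (a standard pinning-controllability surrogate) and searches placements exhaustively; the exhaustive search is a stand-in for the cat swarm optimizer used in the paper, and the network is randomly generated:

    # Sketch: score driver-node placements for pinning control by the smallest
    # eigenvalue of L + diag(gains). Random topology; exhaustive search stand-in.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    n, m = 12, 3                                    # nodes, number of driver nodes
    A = (rng.random((n, n)) < 0.25).astype(float)
    A = np.triu(A, 1); A = A + A.T                  # undirected random topology
    L = np.diag(A.sum(axis=1)) - A

    def score(drivers, gain=5.0):
        M = L + np.diag([gain if i in drivers else 0.0 for i in range(n)])
        return np.linalg.eigvalsh(M)[0]             # larger -> faster pinning

    best = max(combinations(range(n), m), key=lambda d: score(set(d)))
    print("optimal driver set:", best, "score:", round(score(set(best)), 4))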
A neurocomputational theory of how explicit learning bootstraps early procedural learning.
Paul, Erick J; Ashby, F Gregory
2013-01-01
It is widely accepted that human learning and memory is mediated by multiple memory systems that are each best suited to different requirements and demands. Within the domain of categorization, at least two systems are thought to facilitate learning: an explicit (declarative) system depending largely on the prefrontal cortex, and a procedural (non-declarative) system depending on the basal ganglia. Substantial evidence suggests that each system is optimally suited to learn particular categorization tasks. However, it remains unknown precisely how these systems interact to produce optimal learning and behavior. In order to investigate this issue, the present research evaluated the progression of learning through simulation of categorization tasks using COVIS, a well-known model of human category learning that includes both explicit and procedural learning systems. Specifically, the model's parameter space was thoroughly explored in procedurally learned categorization tasks across a variety of conditions and architectures to identify plausible interaction architectures. The simulation results support the hypothesis that one-way interaction between the systems occurs such that the explicit system "bootstraps" learning early on in the procedural system. Thus, the procedural system initially learns a suboptimal strategy employed by the explicit system and later refines its strategy. This bootstrapping could be from cortical-striatal projections that originate in premotor or motor regions of cortex, or possibly by the explicit system's control of motor responses through basal ganglia-mediated loops.
Neef, N A; Lensbower, J; Hockersmith, I; DePalma, V; Gray, K
1990-01-01
We analyzed the role of the range of variation in training exemplars as a contextual variable influencing the effects of in vivo versus simulation training in producing generalized responding. Four mentally retarded adults received single case instruction, followed by general case instruction, on washing machine and dryer use; one task was taught using actual appliances (in vivo) and the other using simulation. In vivo and simulation training were counterbalanced across the two tasks for the 2 subject pairs, using a within-subjects Latin square design. With both paradigms, more errors were made after single case than after general case instruction during probe sessions with untrained washing machines and dryers. These results suggest that generalization errors were affected by the range of training exemplars and not by the use of simulated versus natural training stimuli. Although both general case simulation and general case in vivo training facilitated generalized performance of laundry skills, an analysis of training time and costs indicated that the former approach was more efficient. The study illustrates a methodology for studying complex interactions and guiding decisions on the optimal use of instructional alternatives. PMID:2074236
Jagiełło, Władysław; Wójcicki, Zbigniew; Barczyński, Bartłomiej J; Litwiniuk, Artur; Kalina, Roman Maciej
2014-01-01
The aim of this study is a methodology for the optimal selection of firefighters to solve difficult rescue tasks. 27 firefighters were analyzed, aged 22-50 years and with 2-27 years of work experience. Body balance disturbance tolerance skills (BBDTS), measured by the 'Rotational Test' (RT), and the time of transition (back and forth) on a 4-meter beam located 3 meters above the ground, the criterion for the simulated rescue task (SRT), were assessed. The RT and SRT were carried out first in a sports tracksuit and then in protective clothing. The total of 4 results of the RT and SRT forms the substantive basis of the 4 rankings. The correlations of the RT and SRT results with 3 criteria for estimating BBDTS and 2 categories ranged from 0.478 (p<0.01) to 0.884 (p<0.01), and with the results of the SRT, 0.911 (p<0.01). The basic ranking correlated very highly with the SRT indicators (0.860 and 0.844), but with only 2 of the 6 RT indicators (0.396 and 0.381; p<0.05). There was no simple correlation between the results of the RT and SRT, but there was an important partial correlation of these variables once the effect was stabilized. The Rotational Test is a simple and easy-to-use tool for measuring body balance disturbance tolerance skills. However, the BBDTS typology is an accurate criterion for forecasting on this basis, including the results of accurate motor simulations, the periodic ability of firefighters to solve the most difficult rescue tasks.
Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun
2018-04-04
Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency-domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral-phase-modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.
Optimizing a reconfigurable material via evolutionary computation
NASA Astrophysics Data System (ADS)
Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.
2015-08-01
Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and, after completion, needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer-controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identified an optimized configuration of layered rigid and compliant regions.
Measuring pilot workload in a moving-base simulator. I Asynchronous secondary choice-reaction task
NASA Technical Reports Server (NTRS)
Kantowitz, B. H.; Hart, S. G.; Bortolussi, M. R.
1983-01-01
The de facto method for measuring airplane pilot workload is based upon subjective ratings. While researchers agree that such subjective data should be bolstered by using objective behavioral measures, results to date have been mixed. No clear objective technique has surfaced as the metric of choice. It is believed that this difficulty is in part due to neglect of theoretical work in psychology that predicts some of the difficulties that are inherent in a futile search for 'the one and only' best secondary task to measure workload. An initial study that used both subjective ratings and an asynchronous choice-reaction secondary task was conducted to determine if such a secondary task could indeed meet the methodological constraints imposed by current theories of attention. Two variants of a flight scenario were combined with two levels of the secondary task. Appropriate single-task control conditions were also included. Results give grounds for cautious optimism but indicate that future research should use synchronous secondary tasks where possible.
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
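Of the three strategies, elimination by aspects is the easiest to sketch; the example below applies it to invented flight-decision alternatives and cue values:

    # Sketch: the elimination-by-aspects strategy preferred by participants.
    # Alternatives lacking the most important aspect are dropped first, then the
    # next aspect, until one option remains. Cues and values are invented.
    alternatives = {
        "divert-A": {"fuel_ok": True,  "ceiling_ok": True,  "runway_ok": False},
        "divert-B": {"fuel_ok": True,  "ceiling_ok": True,  "runway_ok": True},
        "continue": {"fuel_ok": True,  "ceiling_ok": False, "runway_ok": True},
    }
    aspects_by_importance = ["fuel_ok", "ceiling_ok", "runway_ok"]

    remaining = dict(alternatives)
    for aspect in aspects_by_importance:
        passing = {k: v for k, v in remaining.items() if v[aspect]}
        if passing:                    # never eliminate every option
            remaining = passing
        if len(remaining) == 1:
            break
    print(next(iter(remaining)))       # -> divert-B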
Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi
2017-10-09
Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, predicting the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each of which co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is here demonstrated to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.
Predicting Flows of Rarefied Gases
NASA Technical Reports Server (NTRS)
LeBeau, Gerald J.; Wilmoth, Richard G.
2005-01-01
DSMC Analysis Code (DAC) is a flexible, highly automated, easy-to-use computer program for predicting flows of rarefied gases -- especially flows of upper-atmospheric, propulsion, and vented gases impinging on spacecraft surfaces. DAC implements the direct simulation Monte Carlo (DSMC) method, which is widely recognized as standard for simulating flows at densities so low that the continuum-based equations of computational fluid dynamics are invalid. DAC enables users to model complex surface shapes and boundary conditions quickly and easily. The discretization of a flow field into computational grids is automated, thereby relieving the user of a traditionally time-consuming task while ensuring (1) appropriate refinement of grids throughout the computational domain, (2) determination of optimal settings for temporal discretization and other simulation parameters, and (3) satisfaction of the fundamental constraints of the method. In so doing, DAC ensures an accurate and efficient simulation. In addition, DAC can utilize parallel processing to reduce computation time. The domain decomposition needed for parallel processing is completely automated, and the software employs a dynamic load-balancing mechanism to ensure optimal parallel efficiency throughout the simulation.
Visual-search models for location-known detection tasks
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.
2017-03-01
Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
The Use of Human Factors Simulation to Conserve Operations Expense
NASA Technical Reports Server (NTRS)
Hamilton, George S.; Dischinger, H. Charles, Jr.; Wu, Hsin-I.
1999-01-01
In preparation for on-orbit operations, NASA performs experiments aboard a KC-135 which performs parabolic maneuvers, resulting in short periods of microgravity. While considerably less expensive than space operations, the use of this aircraft is costly. Simulation of tasks to be performed during the flight can allow the participants to optimize hardware configuration and crew interaction prior to flight. This presentation will demonstrate the utility of such simulation. The experiment simulated is the fluid dynamics of epoxy components which may be used in a patch kit in the event of meteoroid damage to the International Space Station. Improved configuration and operational efficiencies were reflected in early and increased data collection.
Linear models to perform treaty verification tasks for enhanced information security
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; ...
2016-11-12
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
Linear models to perform treaty verification tasks for enhanced information security
NASA Astrophysics Data System (ADS)
MacGahan, Christopher J.; Kupinski, Matthew A.; Brubaker, Erik M.; Hilton, Nathan R.; Marleau, Peter A.
2017-02-01
Linear mathematical models were applied to binary-discrimination tasks relevant to arms control verification measurements in which a host party wishes to convince a monitoring party that an item is or is not treaty accountable. These models process data in list-mode format and can compensate for the presence of variability in the source, such as uncertain object orientation and location. The Hotelling observer applies an optimal set of weights to binned detector data, yielding a test statistic that is thresholded to make a decision. The channelized Hotelling observer applies a channelizing matrix to the vectorized data, resulting in a lower dimensional vector available to the monitor to make decisions. We demonstrate how incorporating additional terms in this channelizing-matrix optimization offers benefits for treaty verification. We present two methods to increase shared information and trust between the host and monitor. The first method penalizes individual channel performance in order to maximize the information available to the monitor while maintaining optimal performance. Second, we present a method that penalizes predefined sensitive information while maintaining the capability to discriminate between binary choices. Data used in this study was generated using Monte Carlo simulations for fission neutrons, accomplished with the GEANT4 toolkit. Custom models for plutonium inspection objects were measured in simulation by a radiation imaging system. Model performance was evaluated and presented using the area under the receiver operating characteristic curve.
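The Hotelling and channelized Hotelling observers described in this abstract reduce to a few lines of linear algebra; the following numpy sketch (array shapes and names are illustrative, not taken from the paper) shows the core computation:

```python
import numpy as np

def hotelling_weights(g0, g1):
    """Hotelling observer weights from two classes of binned detector data.
    g0, g1: (n_images, n_bins) arrays for 'not accountable' / 'accountable'."""
    # Average intraclass scatter (covariance) matrix
    s_bar = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return np.linalg.solve(s_bar, g1.mean(axis=0) - g0.mean(axis=0))

def channelized_statistic(T, w_c, g):
    """Channelized Hotelling statistic: reduce a data vector g with the
    channelizing matrix T (n_channels x n_bins), then weight the channels."""
    return w_c @ (T @ g)

# Decision rule: threshold t = hotelling_weights(g0, g1) @ g to decide the binary task.
```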
Yang, Ben; Zhang, Yaocun; Qian, Yun; ...
2014-03-26
Reasonably modeling the magnitude, south-north gradient and seasonal propagation of precipitation associated with the East Asian Summer Monsoon (EASM) is a challenging task in the climate community. In this study we calibrate five key parameters in the Kain-Fritsch convection scheme in the WRF model using an efficient importance-sampling algorithm to improve the EASM simulation. We also examine the impacts of the improved EASM precipitation on other physical processes. Our results suggest similar model sensitivity and values of optimized parameters across years with different EASM intensities. By applying the optimal parameters, the simulated precipitation and surface energy features are generally improved. The parameters related to the downdraft and entrainment coefficients and the CAPE consumption time (CCT) most sensitively affect the precipitation and atmospheric features. A larger downdraft coefficient or CCT decreases the heavy-rainfall frequency, while a larger entrainment coefficient delays convection development but builds up more potential for heavy rainfall events, causing a possible northward shift of the rainfall distribution. The CCT is the most sensitive parameter over the wet region, and the downdraft parameter plays a more important role over the drier northern region. Long-term simulations confirm that with the optimized parameters the precipitation distributions are better simulated in both weak and strong EASM years. Due to more reasonably simulated precipitation condensational heating, the monsoon circulations are also improved. Lastly, with the optimized parameters the biases in the retreat (onset) of Mei-yu (northern China rainfall) simulated by the standard WRF model are evidently reduced, and the seasonal and sub-seasonal variations of the monsoon precipitation are remarkably improved.
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Du, Yong; Links, Jonathan M.; Frey, Eric C.
2016-03-01
In SPECT imaging, collimators are a major factor limiting image quality and largely determine the noise and resolution of SPECT images. In this paper, we seek the collimator with the optimal tradeoff between image noise and resolution with respect to performance on two tasks related to myocardial perfusion SPECT: perfusion defect detection and joint detection and localization. We used the Ideal Observer (IO) operating on realistic background-known-statistically (BKS) and signal-known-exactly (SKE) data. The areas under the receiver operating characteristic (ROC) and localization ROC (LROC) curves (AUCd, AUCd+l), respectively, were used as the figures of merit for both tasks. We used a previously developed population of 54 phantoms based on the eXtended Cardiac Torso Phantom (XCAT) that included variations in gender, body size, heart size and subcutaneous adipose tissue level. For each phantom, organ uptakes were varied randomly based on distributions observed in patient data. We simulated perfusion defects at six different locations with extents and severities of 10% and 25%, respectively, which represented challenging but clinically relevant defects. The extent and severity are, respectively, the perfusion defect’s fraction of the myocardial volume and reduction of uptake relative to the normal myocardium. Projection data were generated using an analytical projector that modeled attenuation, scatter, and collimator-detector response effects, a 9% energy resolution at 140 keV, and a 4 mm full-width at half maximum (FWHM) intrinsic spatial resolution. We investigated a family of eight parallel-hole collimators that spanned a large range of sensitivity-resolution tradeoffs. For each collimator and defect location, the IO test statistics were computed using a Markov Chain Monte Carlo (MCMC) method for an ensemble of 540 pairs of defect-present and -absent images that included the aforementioned anatomical and uptake variability. Sets of test statistics were computed for both tasks and analyzed using ROC and LROC analysis methodologies. The results of this study suggest that collimators with somewhat poorer resolution and higher sensitivity than those of a typical low-energy high-resolution (LEHR) collimator were optimal for both defect detection and joint detection and localization tasks in myocardial perfusion SPECT for the range of defect sizes investigated. This study also indicates that optimizing instrumentation for a detection task may provide near-optimal performance on the more challenging detection-localization task.
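The area under the ROC curve used as the figure of merit above can be estimated nonparametrically from the two sets of observer test statistics via the Wilcoxon-Mann-Whitney statistic; a minimal sketch (variable names assumed, not the authors' code):

```python
import numpy as np

def auc_from_statistics(t_absent, t_present):
    """Nonparametric AUC estimate: the probability that a defect-present
    test statistic exceeds a defect-absent one (ties count one half)."""
    t0 = np.asarray(t_absent)[:, None]
    t1 = np.asarray(t_present)[None, :]
    return (t1 > t0).mean() + 0.5 * (t1 == t0).mean()
```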
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground-truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces were simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
The Effect of Visual Information on the Manual Approach and Landing
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1982-01-01
The effect of visual information, in combination with basic display information, on approach performance was investigated. A pre-experimental model analysis was performed in terms of the optimal control model. The resulting aircraft approach performance predictions were compared with the results of a moving-base simulator program. The results illustrate that the model provides a meaningful description of the visual (scene) perception process involved in the complex (multi-variable, time-varying) manual approach task, with a useful predictive capability. The theoretical framework was shown to allow a straightforward investigation of the complex interaction of a variety of task variables.
NASA Technical Reports Server (NTRS)
Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga; Stephens, Philip; Iijima, Byron A.
2013-01-01
Modeling and imaging the Earth's ionosphere as well as understanding its structures, inhomogeneities, and disturbances is a key part of NASA's Heliophysics Directorate science roadmap. This invention provides a design tool for scientific missions focused on the ionosphere. It is a scientifically important and technologically challenging task to assess the impact of a new observation system quantitatively on our capability of imaging and modeling the ionosphere. This question is often raised whenever a new satellite system is proposed, a new type of data is emerging, or a new modeling technique is developed. The proposed constellation would be part of a new observation system with more low-Earth orbiters tracking more radio occultation signals broadcast by Global Navigation Satellite System (GNSS) than those offered by the current GPS and COSMIC observation system. A simulation system was developed to fulfill this task. The system is composed of a suite of software that combines the Global Assimilative Ionospheric Model (GAIM) including first-principles and empirical ionospheric models, a multiple- dipole geomagnetic field model, data assimilation modules, observation simulator, visualization software, and orbit design, simulation, and optimization software.
Clustering Molecular Dynamics Trajectories for Optimizing Docking Experiments
De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D.; Norberto de Souza, Osmar; Barros, Rodrigo C.
2015-01-01
Molecular dynamics simulations of protein receptors have become an attractive tool for rational drug discovery. However, the high computational cost of employing molecular dynamics trajectories in virtual screening of large repositories threatens the feasibility of this task. Computational intelligence techniques have been applied in this context, with the ultimate goal of reducing the overall computational cost so the task can become feasible. Particularly, clustering algorithms have been widely used as a means to reduce the dimensionality of molecular dynamics trajectories. In this paper, we develop a novel methodology for clustering entire trajectories using structural features from the substrate-binding cavity of the receptor in order to optimize docking experiments on a cloud-based environment. The resulting partition was selected based on three clustering validity criteria, and it was further validated by analyzing the interactions between 20 ligands and a fully flexible receptor (FFR) model containing a 20 ns molecular dynamics simulation trajectory. Our proposed methodology shows that taking into account features of the substrate-binding cavity as input for the k-means algorithm is a promising technique for accurately selecting ensembles of representative structures tailored to a specific ligand. PMID:25873944
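The clustering step can be sketched with scikit-learn; the synthetic feature matrix below is a stand-in for the paper's per-snapshot cavity descriptors, and the cluster count is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Stand-in for per-snapshot substrate-binding-cavity descriptors; in the
# paper these are extracted from the FFR model's 20 ns MD trajectory.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 12))          # 500 snapshots x 12 descriptors (synthetic)

X = StandardScaler().fit_transform(features)   # put descriptors on a common scale
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)

# One representative snapshot per cluster: the member closest to each centroid
reps = [int(np.argmin(((X - c) ** 2).sum(axis=1))) for c in km.cluster_centers_]
```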
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
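The task-based parallelism model described here, with idle threads dynamically pulling independent MOC tracks from a shared pool and a vectorizable inner loop, can be caricatured in a few lines of Python (the actual proxy applications are optimized native code; all sizes and physics below are toy stand-ins):

```python
import queue
import threading
import numpy as np

# Toy problem sizes and data; real MOC sweeps use ray-traced track segments.
n_tracks, n_segments, n_groups = 1000, 64, 16
rng = np.random.default_rng(0)
sigma_t = rng.uniform(0.1, 1.0, (n_tracks, n_segments, n_groups))  # total cross sections
lengths = rng.uniform(0.1, 1.0, (n_tracks, n_segments))            # segment lengths
flux = np.zeros((n_tracks, n_groups))

track_queue = queue.Queue()
for t in range(n_tracks):
    track_queue.put(t)

def worker():
    while True:
        try:
            t = track_queue.get_nowait()   # dynamic assignment: idle threads grab the next track
        except queue.Empty:
            return
        psi = np.ones(n_groups)            # angular flux entering the track
        for s in range(n_segments):        # sweep along the track segments...
            # ...with a wide, vectorized loop over energy groups (the SIMD-friendly dimension)
            psi *= np.exp(-sigma_t[t, s] * lengths[t, s])
        flux[t] = psi

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```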
An experimental study of human pilot's scanning behavior
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Osawa, T.
1982-01-01
The scanning behavior and the control behavior of a pilot who manually controls a two-variable system, the most basic of multi-variable systems, are investigated. Two control tasks which simulate actual airplane attitude and airspeed control were set up. In order to simulate changes in the pilot's situation, such as changes of flight phase, mission and others, the subject was requested to vary the weightings, as his control strategy, upon each task. Changes of the human control dynamics and scanning properties caused by the modification of the situation were investigated. By making use of the experimental results, an optimal model of the control behavior and the scanning behavior of the pilot in the two-variable system is proposed from the standpoint of minimizing the performance index.
Cascaded systems analysis of noise and detectability in dual-energy cone-beam CT
Gang, Grace J.; Zbijewski, Wojciech; Webster Stayman, J.; Siewerdsen, Jeffrey H.
2012-01-01
Purpose: Dual-energy computed tomography and dual-energy cone-beam computed tomography (DE-CBCT) are promising modalities for applications ranging from vascular to breast, renal, hepatic, and musculoskeletal imaging. Accordingly, the optimization of imaging techniques for such applications would benefit significantly from a general theoretical description of image quality that properly incorporates factors of acquisition, reconstruction, and tissue decomposition in DE tomography. This work reports a cascaded systems analysis model that includes the Poisson statistics of x rays (quantum noise), detector model (flat-panel detectors), anatomical background, image reconstruction (filtered backprojection), DE decomposition (weighted subtraction), and simple observer models to yield a task-based framework for DE technique optimization. Methods: The theoretical framework extends previous modeling of DE projection radiography and CBCT. Signal and noise transfer characteristics are propagated through physical and mathematical stages of image formation and reconstruction. Dual-energy decomposition was modeled according to weighted subtraction of low- and high-energy images to yield the 3D DE noise-power spectrum (NPS) and noise-equivalent quanta (NEQ), which, in combination with observer models and the imaging task, yields the dual-energy detectability index (d′). Model calculations were validated with NPS and NEQ measurements from an experimental imaging bench simulating the geometry of a dedicated musculoskeletal extremities scanner. Imaging techniques, including kVp pair and dose allocation, were optimized using d′ as an objective function for three example imaging tasks: (1) kidney stone discrimination; (2) iodine vs bone in a uniform, soft-tissue background; and (3) soft tissue tumor detection on power-law anatomical background. Results: Theoretical calculations of DE NPS and NEQ demonstrated good agreement with experimental measurements over a broad range of imaging conditions. Optimization results suggest a lower fraction of total dose imparted by the low-energy acquisition, a finding consistent with previous literature. The selection of optimal kVp pair reveals the combined effect of both quantum noise and contrast in the kidney stone discrimination and soft-tissue tumor detection tasks, whereas the K-edge effect of iodine was the dominant factor in determining kVp pairs in the iodine vs bone task. The soft-tissue tumor task illustrated the benefit of dual-energy imaging in eliminating anatomical background noise and improving detectability beyond that achievable by single-energy scans. Conclusions: This work established a task-based theoretical framework that is predictive of DE image quality. The model can be utilized in optimizing a broad range of parameters in image acquisition, reconstruction, and decomposition, providing a useful tool for maximizing DE-CBCT image quality and reducing dose. PMID:22894440
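Two ingredients of this model can be stated compactly: the weighted-subtraction decomposition and a detectability index built from a task function and the NEQ. A schematic sketch (the paper's cascaded propagation of signal and noise through acquisition and reconstruction is far more detailed; names here are illustrative):

```python
import numpy as np

def de_weighted_subtraction(img_low, img_high, w):
    """Dual-energy decomposition by weighted log subtraction; the weight w
    is chosen to cancel the contrast of the unwanted material."""
    return np.log(img_high) - w * np.log(img_low)

def detectability_index(task_fn, neq):
    """Prewhitening-observer detectability d' from a task function W(f) and
    noise-equivalent quanta NEQ(f) sampled on the same frequency grid
    (schematic: the sum stands in for the frequency integral)."""
    return np.sqrt(np.sum(np.abs(task_fn) ** 2 * neq))
```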
NASA Astrophysics Data System (ADS)
Yu, F.; Chen, H.; Tu, K.; Wen, Q.; He, J.; Gu, X.; Wang, Z.
2018-04-01
To meet the monitoring needs of emergency responses to major disasters, and combining the disaster information acquired immediately after the disaster with the dynamic simulation of the disaster-chain evolution process, an overall plan for coordinated planning of spaceborne, airborne and ground observation resources has been designed. Based on an analysis of the characteristics of major disaster observation tasks, the key technologies of spaceborne, airborne and ground collaborative observation are studied. For different disaster response levels, the corresponding workflow tasks are designed. On the basis of satisfying different types of disaster monitoring demands, the existing multi-satellite collaborative observation planning algorithms are compared, analyzed, and optimized.
Lorenz, Romy; Monti, Ricardo Pio; Violante, Inês R.; Anagnostopoulos, Christoforos; Faisal, Aldo A.; Montana, Giovanni; Leech, Robert
2016-01-01
Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals. PMID:26804778
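A closed-loop Bayesian optimization of stimulus parameters, as in the second study, can be sketched with a Gaussian-process surrogate and an expected-improvement acquisition function; `present_stimulus` below is a hypothetical stand-in for delivering a stimulus and measuring the evoked target brain state, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(gp, X_cand, y_best):
    mu, sd = gp.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (mu - y_best) / sd
    return (mu - y_best) * norm.cdf(z) + sd * norm.pdf(z)

def optimize_stimulus(present_stimulus, lo, hi, n_iter=20, seed=0):
    """Closed loop: probe a stimulus parameter, measure the response,
    refit the GP surrogate, and pick the next probe by expected improvement."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(3, 1))                 # initial probes
    y = np.array([present_stimulus(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = np.linspace(lo, hi, 200)[:, None]
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, present_stimulus(x_next[0]))
    return X[np.argmax(y), 0]                            # best stimulus found
```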
Generalized Minimum-Time Follow-up Approaches Applied to Electro-Optical Sensor Tasking
NASA Astrophysics Data System (ADS)
Murphy, T. S.; Holzinger, M. J.
This work proposes a methodology for tasking sensors to search an area of state space for a particular object, group of objects, or class of objects. It creates a general unified mathematical framework for analyzing reacquisition, search, scheduling, and custody operations. In particular, it considers searching for unknown space object(s) with prior knowledge in the form of a set, which can be defined via an uncorrelated track, a region of state space, or a variety of other methods. The follow-up tasking can occur from a variable location and time, which often requires searching a large region of the sky. This work analyzes the area of a search region over time to inform a time-optimal search method. Simulations analyze search regions relative to a particular sensor and test a tasking algorithm that searches through the region. The tasking algorithm is also validated on a reacquisition problem with a telescope system at Georgia Tech.
NASA Astrophysics Data System (ADS)
Hopmann, Ch.; Windeck, C.; Kurth, K.; Behr, M.; Siegbert, R.; Elgeti, S.
2014-05-01
The rheological design of profile extrusion dies is one of the most challenging tasks in die design. As no analytical solution is available, the quality of, and development time for, a new design depend highly on the empirical knowledge of the die manufacturer. Usually, prior to starting production, several time-consuming, iterative running-in trials must be performed to check the profile accuracy, and the die geometry is reworked. An alternative is numerical flow simulation. Such simulations make it possible to calculate the melt flow through a die so that the quality of the flow distribution can be analyzed. The objective of a current research project is to improve the automated optimization of profile extrusion dies. Special emphasis is put on choosing a convenient starting geometry and parameterization, which allow for the necessary deformations. In this work, three commonly used design features are examined with regard to their influence on the optimization results. Based on the results, a strategy is derived to select the most relevant areas of the flow channels for the optimization. For these characteristic areas, recommendations are given concerning an efficient parameterization setup that still enables adequate deformations of the flow-channel geometry. As an example, this approach is applied to an L-shaped profile with different wall thicknesses. The die is optimized automatically, and simulation results are qualitatively compared with experimental results. Furthermore, the strategy is applied to a complex extrusion die for a floor skirting profile to prove its universal adaptability.
Morrow, Melissa M.; Rankin, Jeffery W.; Neptune, Richard R.; Kaufman, Kenton R.
2014-01-01
The primary purpose of this study was to compare static and dynamic optimization muscle force and work predictions during the push phase of wheelchair propulsion. A secondary purpose was to compare the differences in predicted shoulder and elbow kinetics and kinematics and handrim forces. The forward dynamics simulation minimized differences between simulated and experimental data (obtained from 10 manual wheelchair users) and muscle co-contraction. For direct comparison between models, the shoulder and elbow muscle moment arms and net joint moments from the dynamic optimization were used as inputs into the static optimization routine. RMS errors between model predictions were calculated to quantify model agreement. There was a wide range of individual muscle force agreement, spanning from poor (26.4 % Fmax error in the middle deltoid) to good (6.4 % Fmax error in the anterior deltoid) in the prime movers of the shoulder. The predicted muscle forces from the static optimization were sufficient to create the appropriate motion and joint moments at the shoulder for the push phase of wheelchair propulsion, but showed deviations in the elbow moment, pronation-supination motion and handrim forces. These results suggest the static approach does not produce results similar enough to serve as a replacement for forward dynamics simulations, and care should be taken in choosing the appropriate method for a specific task and set of constraints. Dynamic optimization modeling approaches may be required for motions that are greatly influenced by muscle activation dynamics or that require significant co-contraction. PMID:25282075
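Static optimization of muscle forces, as used here, typically minimizes summed squared activations subject to reproducing the net joint moment; a minimal sketch with invented moment arms and strengths (not the study's data):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative numbers only: moment arms (m) and maximum isometric
# forces (N) for a handful of muscles crossing one joint.
r = np.array([0.03, 0.045, 0.02, 0.05])        # muscle moment arms
f_max = np.array([800.0, 1200.0, 600.0, 1000.0])
M_net = 25.0                                    # net joint moment to reproduce (N*m)

def cost(f):
    return np.sum((f / f_max) ** 2)             # minimize summed squared activations

cons = {"type": "eq", "fun": lambda f: r @ f - M_net}   # moment equilibrium
res = minimize(cost, x0=0.1 * f_max, bounds=[(0.0, fm) for fm in f_max],
               constraints=[cons], method="SLSQP")
print(res.x)                                    # predicted individual muscle forces
```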
Genetic learning in rule-based and neural systems
NASA Technical Reports Server (NTRS)
Smith, Robert E.
1993-01-01
The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GA's bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.
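A bare-bones GA of the kind introduced here, with tournament selection, one-point crossover and bit-flip mutation (a generic sketch, not the LCS implementation from the presentation):

```python
import numpy as np
rng = np.random.default_rng(0)

def ga_maximize(fitness, n_bits=32, pop_size=50, n_gen=100, p_mut=0.01):
    """Bare-bones binary GA; 'fitness' maps a bit string to a score to maximize."""
    pop = rng.integers(0, 2, (pop_size, n_bits))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, 2)        # tournament of two, parent a
            a = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)        # tournament of two, parent b
            b = pop[i] if scores[i] >= scores[j] else pop[j]
            cut = rng.integers(1, n_bits)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n_bits) < p_mut)      # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

# Toy usage: maximize the number of ones ("one-max")
best, score = ga_maximize(lambda ind: ind.sum())
```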
Rodríguez-Morilla, Beatriz; Madrid, Juan A.; Molina, Enrique; Correa, Angel
2017-01-01
Vigilance usually deteriorates over prolonged driving at non-optimal times of day. Exposure to blue-enriched light has been shown to enhance arousal, leading to behavioral benefits in some cognitive tasks. However, the cognitive effects of long-wavelength light have been less studied, and their effects on driving performance remained to be addressed. We tested the effects of a blue-enriched white light (BWL) and a long-wavelength orange light (OL) vs. a control condition of dim light on subjective, physiological and behavioral measures at 21:45 h. Neurobehavioral tests included the Karolinska Sleepiness Scale and a subjective mood scale, recording of the distal-proximal temperature gradient (DPG, an index of physiological arousal), accuracy in simulated driving, and reaction time in the auditory psychomotor vigilance task. The results showed that BWL decreased the DPG (reflecting enhanced arousal), while it did not improve reaction time or driving performance. Instead, blue light produced larger driving errors than OL, while performance in OL was stable along time on task. These data suggest that physiological arousal induced by light does not necessarily imply cognitive improvement. Indeed, excessive arousal might deteriorate accuracy in complex tasks requiring precision, such as driving. PMID:28690558
Development of cost-effective surfactant flooding technology. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Sepehrnoori, K.
1996-11-01
Task 1 of this research was the development of a high-resolution, fully implicit, finite-difference, multiphase, multicomponent, compositional simulator for chemical flooding. The major physical phenomena modeled in this simulator are dispersion, heterogeneous permeability and porosity, adsorption, interfacial tension, relative permeability and capillary desaturation, compositional phase viscosity, compositional phase density and gravity effects, capillary pressure, and aqueous-oleic-microemulsion phase behavior. Polymer and its non-Newtonian rheology properties include shear-thinning viscosity, permeability reduction, inaccessible pore volume, and adsorption. Options of constant or variable space grids and time steps, constant-pressure or constant-rate well conditions, horizontal and vertical wells, and multiple slug injections are also available in the simulator. The solution scheme used in this simulator is fully implicit. The pressure equation and the mass-conservation equations are solved simultaneously for the aqueous-phase pressure and the total concentrations of each component. A third-order-in-space, second-order-in-time finite-difference method and a new total-variation-diminishing (TVD) third-order flux limiter are used that greatly reduce numerical dispersion effects. Task 2 was the optimization of surfactant flooding. The code UTCHEM was used to simulate surfactant polymer flooding.
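The specific third-order TVD limiter is internal to UTCHEM, but the general idea of limiting slopes to suppress numerical dispersion can be illustrated with a textbook minmod-limited advection step (a generic stand-in, not the UTCHEM scheme):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, or zero at extrema."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, c):
    """One explicit step of u_t + a u_x = 0 on a periodic grid with CFL
    number c = a*dt/dx (0 < c <= 1); minmod-limited second-order slopes
    keep the scheme total-variation diminishing."""
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slope per cell
    flux = u + 0.5 * (1.0 - c) * du                      # flux at each cell's right face
    return u - c * (flux - np.roll(flux, 1))             # conservative update
```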
Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun
2015-01-01
Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is hardly applied to autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on 433 Eros are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
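The offline step of fitting an RBF network that maps initial states to initial co-states might look like the following least-squares sketch (shapes, center count and kernel width are assumptions, not the paper's settings):

```python
import numpy as np

def train_rbfnn(X, Y, n_centers=50, sigma=1.0, seed=0):
    """Fit an RBF network mapping initial states X (n, d) to initial
    co-states Y (n, m) by linear least squares on Gaussian features.
    Assumes n >= n_centers so centers can be drawn from the data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    def phi(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    W, *_ = np.linalg.lstsq(phi(X), Y, rcond=None)
    return lambda A: phi(A) @ W

# Offline:  predict = train_rbfnn(initial_states, initial_costates)
# Online guidance:  costate_guess = predict(current_state[None, :])
```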
Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking
Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng
2017-01-01
Compared with a fixed fusion structure, a flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system achieve its goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and a non-cooperative target, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. To this end, this paper studies the design of a flexible fusion algorithm using an optimization learning technique. The purpose is to dynamically determine the numbers of sensors and the associated sensors that take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. In particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
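Optimal event localization by dynamic programming can be illustrated by segmenting a sequence of spectral parameter frames into contiguous events while minimizing each segment's deviation from its mean; this is a generic DP segmentation sketch, not the paper's SBEL-based cost or its TD interpolation model:

```python
import numpy as np

def dp_event_localization(frames, n_events):
    """Optimal segmentation of an (n, d) frame sequence into n_events
    contiguous segments, minimizing summed squared deviation from each
    segment mean. Returns the event boundaries and the total cost."""
    n = len(frames)
    csum = np.vstack([np.zeros(frames.shape[1]), np.cumsum(frames, axis=0)])
    csum2 = np.concatenate([[0.0], np.cumsum((frames ** 2).sum(axis=1))])
    def seg_cost(i, j):                         # cost of one segment over frames i..j
        m = j - i + 1
        s = csum[j + 1] - csum[i]
        return (csum2[j + 1] - csum2[i]) - (s @ s) / m
    D = np.full((n_events + 1, n), np.inf)      # D[k, j]: best cost, frames 0..j, k segments
    back = np.zeros((n_events + 1, n), dtype=int)
    for j in range(n):
        D[1, j] = seg_cost(0, j)
    for k in range(2, n_events + 1):
        for j in range(k - 1, n):
            costs = [D[k - 1, i - 1] + seg_cost(i, j) for i in range(k - 1, j + 1)]
            off = int(np.argmin(costs))
            D[k, j] = costs[off]
            back[k, j] = off + (k - 1)          # start index of the last segment
    bounds, j, k = [], n - 1, n_events          # backtrack the boundaries
    while k > 1:
        i = back[k, j]
        bounds.append(i)
        j, k = i - 1, k - 1
    return sorted(bounds), D[n_events, n - 1]
```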
[Conceptual approach to formation of a modern system of medical provision].
Belevitin, A B; Miroshnichenko, Iu V; Bunin, S A; Goriachev, A B; Krasavin, K D
2009-09-01
Within the framework of forming the new structure of the medical service of the Armed Forces, the principal approaches to optimizing the development of the medical supply system were determined. The following principles were proposed: hierarchic structuring, purposeful orientation, vertical task sharing, horizontal task sharing, complex simulation, and permanent improvement. The main directions for optimizing the structure and composition of the medical supply system of the Armed Forces are: forming modern medical supply institutions (centers for support with equipment and materiel) on the basis of the central and regional depots, with certain functions of the military administration bodies attached to them; and creating medical supply offices on the basis of military hospitals serving as base treatment-and-prophylaxis institutions in assigned territorial zones of responsibility, in order to carry out the full set of medical equipment supply tasks for the units and institutions attached to them for medical support. The medical supply system is built on three levels: Center - military region (Navy region) - territorial zone of responsibility.
NASA Astrophysics Data System (ADS)
DeSena, J. T.; Martin, S. R.; Clarke, J. C.; Dutrow, D. A.; Newman, A. J.
2012-06-01
As the number and diversity of sensing assets available for intelligence, surveillance and reconnaissance (ISR) operations continues to expand, the limited ability of human operators to effectively manage, control and exploit the ISR ensemble is exceeded, leading to reduced operational effectiveness. Automated support, both in the processing of voluminous sensor data and in sensor asset control, can relieve the burden on human operators and support operation of larger ISR ensembles. In dynamic environments it is essential to react quickly to current information to avoid stale, sub-optimal plans. Our approach is to apply the principles of feedback control to ISR operations, "closing the loop" from the sensor collections through automated processing to ISR asset control. Previous work by the authors demonstrated non-myopic multiple-platform trajectory control using a receding horizon controller in a closed feedback loop with a multiple hypothesis tracker, applied to multi-target search-and-track simulation scenarios in the ground and space domains. This paper presents extensions in both size and scope of the previous work, demonstrating closed-loop control, involving both platform routing and sensor pointing, of a multisensor, multi-platform ISR ensemble tasked with providing situational awareness and performing search, track and classification of multiple moving ground targets in irregular warfare scenarios. The closed-loop ISR system is fully realized using distributed, asynchronous components that communicate over a network. The closed-loop ISR system has been exercised via a networked simulation test bed against a scenario in the Afghanistan theater implemented using high-fidelity terrain and imagery data. In addition, the system has been applied to space surveillance scenarios requiring tracking of space objects, where current deliberative, manually intensive processes for managing sensor assets are insufficiently responsive. Simulation experiment results are presented. The algorithm to jointly optimize sensor schedules against search, track, and classification is based on recent work by Papageorgiou and Raykin on risk-based sensor management. It uses a risk-based objective function and attempts to minimize and balance the risks of misclassifying and losing track on an object. It supports the requirement to generate tasking for metric and feature data concurrently and synergistically, and accounts for both tracking accuracy and object characterization, jointly, in computing reward and cost for optimizing tasking decisions.
Demonstrating the Viability and Affordability of Nuclear Surface Power Systems
NASA Technical Reports Server (NTRS)
Vandyke, Melissa K.
2006-01-01
A set of tasks has been identified to help demonstrate the viability, performance, and affordability of surface fission systems. Completion of these tasks will move surface fission systems closer to reality by demonstrating affordability and performance potential. Tasks include fabrication and test of a 19-pin section of a Surface Power Unit Demonstrator (SPUD); design, fabrication, and utilization of thermal simulators optimized for surface fission applications; design, fabrication, and utilization of GPHS module thermal simulators; design, fabrication, and test of a fission surface power system shield; and work related to potential fission surface power fuel/clad systems. Work on the SPUD will feed directly into joint NASA MSFC/NASA GRC fabrication and test of a surface power plant Engineering Development Unit (EDU). The goal of the EDU will be to perform highly realistic thermal, structural, and electrical testing on an integrated fission surface power system. Fission thermal simulator work will help enable high-fidelity non-nuclear testing of pumped NaK surface fission power systems. Radioisotope thermal simulator work will help enable design and development of higher-power radioisotope systems (power ultimately limited by Pu-238 availability). Shield work is designed to assess the potential of using a water neutron shield on the surface of the moon. Fuels work is geared toward assessing the current potential of using fuels that have already flown in space.
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which Bellman's equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
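The exponential transformation at the heart of the LMDP makes the first-exit Bellman equation linear in the desirability z = exp(-v); a compact sketch following Todorov's formulation (the state space, costs and passive dynamics are supplied by the caller; this is not the paper's robot-specific code):

```python
import numpy as np

def solve_first_exit_lmdp(P, q, terminal, n_iter=2000):
    """Desirability iteration for a first-exit LMDP:
    z = exp(-q) * (P z) on interior states, z = exp(-q) on terminal states.
    P: (n, n) passive transition matrix; q: (n,) state costs;
    terminal: boolean mask of absorbing goal states."""
    z = np.ones(len(q))
    z[terminal] = np.exp(-q[terminal])
    G = np.exp(-q)
    for _ in range(n_iter):
        z_new = G * (P @ z)
        z_new[terminal] = np.exp(-q[terminal])   # boundary condition
        z = z_new
    v = -np.log(z)                               # optimal value function
    U = P * z[None, :]                           # u*(s'|s) proportional to P(s'|s) * z(s')
    U /= U.sum(axis=1, keepdims=True)
    return v, U
```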
LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging
NASA Astrophysics Data System (ADS)
De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace
2006-03-01
In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what possible advantages a non-linear, edge-preserving postfilter could have for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catte filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
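Of the filters compared, Perona-Malik anisotropic diffusion is the archetypal edge-preserving scheme; a standard 4-neighbor implementation (parameter values illustrative, not the study's optimized settings):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.05, lam=0.2):
    """Perona-Malik anisotropic diffusion in 2D: smooths within regions while
    preserving edges via the conductance g = exp(-(|grad|/kappa)^2).
    lam <= 0.25 keeps the explicit 4-neighbor update stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u     # finite differences to the 4 neighbors
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```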
Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale
Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...
2017-01-26
Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.
Asgharnia, Amirhossein; Shahnazi, Reza; Jamali, Ali
2018-05-11
The most studied controller for pitch control of wind turbines is the proportional-integral-derivative (PID) controller. However, due to uncertainties in wind turbine modeling and wind speed profiles, the need for more effective controllers is inevitable. Moreover, the parameters of a PID controller are usually unknown and must be selected by the designer, which is neither a straightforward task nor optimal. To cope with these drawbacks, in this paper two advanced controllers, a fuzzy PID (FPID) and a fractional-order fuzzy PID (FOFPID), are proposed to improve pitch control performance. To find the parameters of the controllers, chaotic evolutionary optimization methods are used. Using evolutionary optimization not only yields the unknown parameters of the controllers but also guarantees optimality with respect to the chosen objective function. To improve the performance of the evolutionary algorithms, chaotic maps are used. All optimization procedures are applied to a two-mass model of a 5-MW wind turbine. The proposed optimal controllers are validated using the FAST simulator developed by NREL. Simulation results demonstrate that the FOFPID controller achieves better performance and robustness, while guaranteeing less fatigue damage at different wind speeds, in comparison to the FPID, fractional-order PID (FOPID) and gain-scheduled PID (GSPID) controllers.
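Chaotic maps enter such optimizers as a replacement for uniform random draws; the sketch below uses a logistic-map sequence to propose PID gains and scores them on a toy first-order plant (a generic illustration with assumed gain ranges, not the paper's FAST-based evaluation or its FOFPID structure):

```python
import numpy as np

def chaotic_candidates(n, dim, x0=0.7):
    """Logistic-map sequence (r = 4) mapped into (0, 1), used instead of
    uniform random draws to diversify the search."""
    x, out = x0, np.empty((n, dim))
    for i in range(n):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)
            out[i, j] = x
    return out

def itae_cost(gains, dt=0.01, horizon=5.0):
    """ITAE of a PID loop around a toy first-order plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y = integ = cost = 0.0
    e_prev = 1.0
    for k in range(int(horizon / dt)):
        e = 1.0 - y                         # unit step reference
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        y += dt * (-y + u)                  # explicit Euler plant step
        if not np.isfinite(y):
            return np.inf                   # diverged: reject these gains
        e_prev = e
        cost += (k * dt) * abs(e) * dt      # integral of t * |e|
    return cost

bounds = np.array([[0.0, 20.0], [0.0, 10.0], [0.0, 1.0]])   # assumed kp, ki, kd ranges
cands = bounds[:, 0] + chaotic_candidates(200, 3) * (bounds[:, 1] - bounds[:, 0])
best = cands[np.argmin([itae_cost(g) for g in cands])]
```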
NASA Astrophysics Data System (ADS)
Ghaly, Michael; Links, Jonathan M.; Frey, Eric C.
2016-03-01
The collimator is the primary factor that determines the spatial resolution and noise tradeoff in myocardial perfusion SPECT images. In this paper, the goal was to find the collimator that optimizes the image quality in terms of a perfusion defect detection task. Since the optimal collimator could depend on the level of approximation of the collimator-detector response (CDR) compensation modeled in reconstruction, we performed this optimization for the cases of modeling the full CDR (including geometric, septal penetration and septal scatter responses), the geometric CDR, or no model of the CDR. We evaluated the performance on the detection task using three model observers. Two observers operated on data in the projection domain: the Ideal Observer (IO) and IO with Model-Mismatch (IO-MM). The third observer was an anthropomorphic Channelized Hotelling Observer (CHO), which operated on reconstructed images. The projection-domain observers have the advantage that they are computationally less intensive. The IO has perfect knowledge of the image formation process, i.e. it has a perfect model of the CDR. The IO-MM takes into account the mismatch between the true (complete and accurate) model and an approximate model, e.g. one that might be used in reconstruction. We evaluated the utility of these projection domain observers in optimizing instrumentation parameters. We investigated a family of 8 parallel-hole collimators, spanning a wide range of resolution and sensitivity tradeoffs, using a population of simulated projection (for the IO and IO-MM) and reconstructed (for the CHO) images that included background variability. We simulated anterolateral and inferior perfusion defects with variable extents and severities. The area under the ROC curve was estimated from the IO, IO-MM, and CHO test statistics and served as the figure-of-merit. The optimal collimator for the IO had a resolution of 9-11 mm FWHM at 10 cm, which is poorer resolution than typical collimators used for MPS. When the IO-MM and CHO used a geometric or no model of the CDR, the optimal collimator shifted toward higher resolution than that obtained using the IO and the CHO with full CDR modeling. With the optimal collimator, the IO-MM and CHO using geometric modeling gave similar performance to full CDR modeling. Collimators with poorer resolution were optimal when CDR modeling was used. The agreement of rankings between the IO-MM and CHO confirmed that the IO-MM is useful for optimization tasks when model mismatch is present due to its substantially reduced computational burden compared to the CHO.
Optimization of the computational load of a hypercube supercomputer onboard a mobile robot
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Toomarian, N.; Protopopescu, V.
1987-01-01
A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
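A minimal simulated-annealing version of the task-assignment idea, with single-task perturbations and a load-imbalance energy (a schematic stand-in: the paper's configuration energy also encodes precedence constraints, idling times and schedule overlaps):

```python
import numpy as np
rng = np.random.default_rng(0)

def anneal_assignment(task_costs, n_procs, n_steps=20000, T0=1.0, alpha=0.9995):
    """Simulated annealing for mapping tasks onto processors, minimizing
    the variance of per-processor load (an imbalance penalty)."""
    assign = rng.integers(0, n_procs, len(task_costs))
    def energy(a):
        loads = np.bincount(a, weights=task_costs, minlength=n_procs)
        return loads.var()
    E, T = energy(assign), T0
    for _ in range(n_steps):
        t = rng.integers(len(task_costs))    # single-task perturbation
        old = assign[t]
        assign[t] = rng.integers(n_procs)
        dE = energy(assign) - E
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            E += dE                          # accept the move
        else:
            assign[t] = old                  # reject and restore
        T *= alpha                           # geometric cooling schedule
    return assign, E
```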
NASA Astrophysics Data System (ADS)
Nazemizadeh, M.; Rahimi, H. N.; Amini Khoiy, K.
2012-03-01
This paper presents an optimal control strategy for optimal trajectory planning of mobile robots, considering the nonlinear dynamic model and nonholonomic constraints of the system. The nonholonomic constraints of the system are introduced by a nonintegrable set of differential equations which represent a kinematic restriction on the motion. Lagrange's principle is employed to derive the nonlinear equations of the system. Then, the optimal path planning of the mobile robot is formulated as an optimal control problem. To set up the problem, the nonlinear equations of the system are taken as constraints, and a minimum-energy objective function is defined. To solve the problem, an indirect solution method of optimal control is employed, and the optimality conditions are derived as a set of coupled nonlinear differential equations. The optimality equations are solved numerically, and various simulations are performed for a nonholonomic mobile robot to illustrate the effectiveness of the proposed method.
Parameter Sweep and Optimization of Loosely Coupled Simulations Using the DAKOTA Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elwasif, Wael R; Bernholdt, David E; Pannala, Sreekanth
2012-01-01
The increasing availability of large scale computing capabilities has accelerated the development of high-fidelity coupled simulations. Such simulations typically involve the integration of models that implement various aspects of the complex phenomena under investigation. Coupled simulations are playing an integral role in fields such as climate modeling, earth systems modeling, rocket simulations, computational chemistry, fusion research, and many other computational fields. Model coupling provides scientists with systematic ways to virtually explore the physical, mathematical, and computational aspects of the problem. Such exploration is rarely done using a single execution of a simulation, but rather by aggregating the results from many simulation runs that, together, serve to bring to light novel knowledge about the system under investigation. Furthermore, it is often the case (particularly in engineering disciplines) that the study of the underlying system takes the form of an optimization regime, where the control parameter space is explored to optimize an objective function that captures system realizability, cost, performance, or a combination thereof. Novel and flexible frameworks that facilitate the integration of the disparate models into a holistic simulation are used to perform this research, while making efficient use of the available computational resources. In this paper, we describe the integration of the DAKOTA optimization and parameter sweep toolkit with the Integrated Plasma Simulator (IPS), a component-based framework for loosely coupled simulations. The integration allows DAKOTA to exploit the internal task and resource management of the IPS to dynamically instantiate simulation instances within a single IPS instance, allowing for greater control over the trade-off between efficiency of resource utilization and time to completion. We present a case study showing the use of the combined DAKOTA-IPS system to aid in the design of a lithium ion battery (LIB) cell, by studying a coupled system involving the electrochemistry and ion transport at the lower length scales and thermal energy transport at the device scales. The DAKOTA-IPS system provides a flexible tool for use in optimization and parameter sweep studies involving loosely coupled simulations that is suitable for use in situations where changes to the constituent components in the coupled simulation are impractical due to intellectual property or code heritage issues.
2011-01-01
Background Current guidelines for rehabilitation of arm and hand function after stroke recommend that motor training focus on realistic tasks that require reaching and manipulation and engage the patient intensively, actively, and adaptively. Here, we investigated the feasibility of a novel robotic task-practice system, ADAPT, designed in accordance with such guidelines. At each trial, ADAPT selects a functional task according to a training schedule and with difficulty based on previous performance. Once the task is selected, the robot picks up and presents the corresponding tool, simulates the dynamics of the task, and the patient interacts with the tool to perform the task. Methods Five participants with chronic stroke with mild to moderate impairments (> 9 months post-stroke; Fugl-Meyer arm score 49.2 ± 5.6) practiced four functional tasks (selected out of six in a pre-test) with ADAPT for about one and a half hours (144 trials) in a pseudo-random schedule of 3-trial blocks per task. Results No adverse events occurred, and ADAPT successfully presented the six functional tasks without human intervention for a total of 900 trials. Qualitative analysis of trajectories showed that ADAPT simulated the desired task dynamics adequately, and participants reported good, although not excellent, task fidelity. During training, the adaptive difficulty algorithm progressively increased task difficulty towards an optimal challenge point based on performance; difficulty was then continuously adjusted to keep performance around the challenge point. Furthermore, the time to complete all trained tasks decreased significantly from pretest to one-hour post-test. Finally, post-training questionnaires demonstrated positive patient acceptance of ADAPT. Conclusions ADAPT successfully provided adaptive progressive training for multiple functional tasks based on participants' performance. Our encouraging results establish the feasibility of ADAPT; its efficacy will next be tested in a clinical trial. PMID:21813010
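The difficulty-adjustment loop described above behaves like a weighted staircase; a hedged sketch follows (the target success rate and step sizes are assumptions, not ADAPT's published algorithm):

    def update_difficulty(difficulty, success, step_up=0.05, step_down=0.10):
        """One-up/one-down staircase: harder after success, easier after failure.

        Unequal steps make performance converge near a target success rate of
        step_down / (step_up + step_down), i.e. about 67% here.
        """
        if success:
            difficulty += step_up
        else:
            difficulty -= step_down
        return min(max(difficulty, 0.0), 1.0)

    # example: a run of trial outcomes
    d = 0.3
    for outcome in [True, True, False, True, False, True]:
        d = update_difficulty(d, outcome)
        print(round(d, 2))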
Regression Simulation of Turbine Engine Performance - Accuracy Improvement (TASK IV)
1978-09-30
Excerpts: generalized form of the regression equation for the optimized polynomial exponent method; altitude, Mach number, and power setting combinations generated during the ARES evaluation; the orthogonal Latin Square selection procedure. In data generation, the low (L), mid (M), and high (H) values of a variable are not always the same at some of the corner points.
NASA Technical Reports Server (NTRS)
Perkinson, J. A.
1974-01-01
The application of associative memory processor equipment to conventional host-processor systems is discussed. Efforts were made to demonstrate how such an application relieves the task burden of conventional systems and enhances system speed and efficiency. Data cover a comparative theoretical performance analysis, demonstration of expanded growth capabilities, and demonstrations of actual hardware in a simulated environment.
Evolutionary trade-offs and the structure of polymorphisms.
Sheftel, Hila; Szekely, Pablo; Mayo, Avi; Sella, Guy; Alon, Uri
2018-05-26
Populations of organisms show genetic differences called polymorphisms. Understanding the effects of polymorphisms is important for biology and medicine. Here, we ask which polymorphisms occur at high frequency when organisms evolve under trade-offs between multiple tasks. Multiple tasks present a problem, because it is not possible to be optimal at all tasks simultaneously and hence compromises are necessary. Recent work indicates that trade-offs lead to a simple geometry of phenotypes in the space of traits: phenotypes fall on the Pareto front, which is shaped as a polytope: a line, triangle, tetrahedron etc. The vertices of these polytopes are the optimal phenotypes for a single task. Up to now, work on this Pareto approach has not considered its genetic underpinnings. Here, we address this by asking how the polymorphism structure of a population is affected by evolution under trade-offs. We simulate a multi-task selection scenario, in which the population evolves to the Pareto front: the line segment between two archetypes or the triangle between three archetypes. We find that polymorphisms that become prevalent in the population have pleiotropic phenotypic effects that align with the Pareto front. Similarly, epistatic effects between prevalent polymorphisms are parallel to the front. Alignment with the front occurs also for asexual mating. Alignment is reduced when drift or linkage is strong, and is replaced by a more complex structure in which many perpendicular allele effects cancel out. Aligned polymorphism structure allows mating to produce offspring that stand a good chance of being optimal multi-taskers in at least one of the locales available to the species. This article is part of the theme issue 'Self-organization in cell biology'. © 2018 The Author(s).
CONSOLE: A CAD tandem for optimization-based design interacting with user-supplied simulators
NASA Technical Reports Server (NTRS)
Fan, Michael K. H.; Wang, Li-Sheng; Koninckx, Jan; Tits, Andre L.
1989-01-01
CONSOLE employs a recently developed design methodology (International Journal of Control 43:1693-1721) which provides the designer with a congenial environment in which to express the problem as a multiple-objective constrained optimization problem, and allows the characterization of optimality to be refined as a suboptimal design is approached. To this end, in CONSOLE, the designer formulates the design problem using a high-level language and performs design tasks and explores tradeoffs through a few short and clearly defined commands. The range of problems that can be solved efficiently using a CAD tool depends very much on the ability of the tool to be interfaced with user-supplied simulators. For instance, when designing a control system one makes use of the characteristics of the plant, and therefore a model of the plant under study has to be made available to the CAD tool. CONSOLE allows for easy interfacing of almost any simulator the user has available. To date, CONSOLE has been used successfully in many applications, including the design of controllers for a flexible arm and for a robotic manipulator, and the solution of a parameter selection problem for a neural network.
A flexible, interactive software tool for fitting the parameters of neuronal models
Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID:25071540
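The core loop of such a fitting tool — simulate, score against the target trace with a cost function, and let a nonlinear optimizer adjust the parameters — can be sketched in a few lines; the exponential-decay "model" below is a toy stand-in, not a NEURON simulation:

    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0.0, 0.1, 200)

    def model(params):
        amp, tau = params
        return amp * np.exp(-t / tau)          # toy voltage trace

    target = model([12.0, 0.02]) + np.random.normal(0, 0.1, t.size)

    def cost(params):
        # mean-squared-error criterion between simulated and target traces
        return np.mean((model(params) - target) ** 2)

    fit = minimize(cost, x0=[5.0, 0.05], method="Nelder-Mead")
    print("fitted amplitude, tau:", fit.x)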
Muscle coordination is habitual rather than optimal.
de Rugy, Aymar; Loeb, Gerald E; Carroll, Timothy J
2012-05-23
When sharing load among multiple muscles, humans appear to select an optimal pattern of activation that minimizes costs such as the effort or variability of movement. How the nervous system achieves this behavior, however, is unknown. Here we show that contrary to predictions from optimal control theory, habitual muscle activation patterns are surprisingly robust to changes in limb biomechanics. We first developed a method to simulate joint forces in real time from electromyographic recordings of the wrist muscles. When the model was altered to simulate the effects of paralyzing a muscle, the subjects simply increased the recruitment of all muscles to accomplish the task, rather than recruiting only the useful muscles. When the model was altered to make the force output of one muscle unusually noisy, the subjects again persisted in recruiting all muscles rather than eliminating the noisy one. Such habitual coordination patterns were also unaffected by real modifications of biomechanics produced by selectively damaging a muscle without affecting sensory feedback. Subjects naturally use different patterns of muscle contraction to produce the same forces in different pronation-supination postures, but when the simulation was based on a posture different from the actual posture, the recruitment patterns tended to agree with the actual rather than the simulated posture. The results appear inconsistent with computation of motor programs by an optimal controller in the brain. Rather, the brain may learn and recall command programs that result in muscle coordination patterns generated by lower sensorimotor circuitry that are functionally "good-enough."
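The "optimal" benchmark that habitual behavior is compared against can be posed as a constrained least-squares problem: find the non-negative muscle activations of minimum effort that still produce the required task force. A sketch with an invented moment-arm matrix, using a soft-constraint formulation rather than the authors' actual code:

    import numpy as np
    from scipy.optimize import lsq_linear

    # columns: pulling directions of 4 wrist muscles (hypothetical moment arms)
    M = np.array([[1.0, -1.0, 0.3, -0.3],
                  [0.2,  0.2, 1.0, -1.0]])
    f_task = np.array([0.8, 0.5])            # required 2-D force

    # minimize ||a||^2 subject to M a = f_task, a >= 0, via a penalty formulation:
    # heavily weight the force-equality rows, keep identity rows as the effort term
    A = np.vstack([M * 100.0, np.eye(4)])
    b = np.concatenate([f_task * 100.0, np.zeros(4)])
    sol = lsq_linear(A, b, bounds=(0.0, np.inf))
    print("optimal activations:", np.round(sol.x, 3))
    print("force error:", M @ sol.x - f_task)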
NASA Astrophysics Data System (ADS)
Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, Ľ.
2017-02-01
Production of multi-rifled seamless steel tubes is a fairly new technology at Železiarne Podbrezová, and many technological questions arise (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process cost considerable time and energy, so numerical simulation offers an alternative route to optimal production parameters; it reduces the number of experiments needed and lowers the overall cost of development. However, for the numerical results to be considered relevant, they must be verified against actual plant trials. The main topic of this paper is the search for the optimal input feedstock dimensions for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm. As a secondary task, the effective position of the plug-die couple is determined via numerical simulation. Comparing the calculated results with actual numbers from plant trials, good agreement was observed.
Launch vehicle design and GNC sizing with ASTOS
NASA Astrophysics Data System (ADS)
Cremaschi, Francesco; Winter, Sebastian; Rossi, Valerio; Wiegand, Andreas
2018-03-01
The European Space Agency (ESA) is currently involved in several activities related to launch vehicle designs (Future Launcher Preparatory Program, Ariane 6, VEGA evolutions, etc.). Within these activities, ESA has identified the importance of developing a simulation infrastructure capable of supporting the multi-disciplinary design and preliminary guidance navigation and control (GNC) design of different launch vehicle configurations. Astos Solutions has developed the multi-disciplinary optimization and launcher GNC simulation and sizing tool (LGSST) under ESA contract. The functionality is integrated in the Analysis, Simulation and Trajectory Optimization Software for space applications (ASTOS) and is intended to be used from the early design phases up to phase B1 activities. ASTOS shall enable the user to perform detailed vehicle design tasks and assessment of GNC systems, covering all aspects of rapid configuration and scenario management, sizing of stages, trajectory-dependent estimation of structural masses, rigid and flexible body dynamics, navigation, guidance and control, worst case analysis, launch safety analysis, performance analysis, and reporting.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
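The allocation step described — dispatching task-graph nodes to processors as their predecessors complete — is essentially list scheduling; a minimal sketch over a hypothetical task graph:

    import heapq

    # hypothetical task graph: task -> (duration, set of predecessors)
    tasks = {"A": (2, set()), "B": (3, set()), "C": (2, {"A"}),
             "D": (4, {"A", "B"}), "E": (1, {"C", "D"})}

    def list_schedule(tasks, n_procs):
        finish = {}
        procs = [(0.0, p) for p in range(n_procs)]   # (available time, proc id)
        heapq.heapify(procs)
        done, schedule = set(), []
        while len(done) < len(tasks):
            # pick any ready task (all predecessors finished)
            name = next(t for t in tasks if t not in done
                        and tasks[t][1] <= done)
            free_at, proc = heapq.heappop(procs)
            start = max(free_at,
                        max((finish[p] for p in tasks[name][1]), default=0.0))
            finish[name] = start + tasks[name][0]
            schedule.append((name, proc, start))
            heapq.heappush(procs, (finish[name], proc))
            done.add(name)
        return schedule

    for name, proc, start in list_schedule(tasks, 2):
        print(f"task {name} on processor {proc} at t={start}")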
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
NASA Technical Reports Server (NTRS)
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Basner, Mathias; Rubinstein, Joshua
2011-01-01
Objective To evaluate the ability of a 3-min Psychomotor Vigilance Test (PVT) to predict fatigue related performance decrements on a simulated luggage screening task (SLST). Methods Thirty-six healthy non-professional subjects (mean age 30.8 years, 20 female) participated in a 4 day laboratory protocol including a 34 hour period of total sleep deprivation with PVT and SLST testing every 2 hours. Results Eleven and 20 lapses (355 ms threshold) on the PVT optimally divided SLST performance into high, medium, and low performance bouts with significantly decreasing threat detection performance A′. Assignment to the different SLST performance groups replicated homeostatic and circadian patterns during total sleep deprivation. Conclusions The 3 min PVT was able to predict performance on a simulated luggage screening task. Fitness-for-duty feasibility should now be tested in professional screeners and operational environments. PMID:21912278
Equation-free multiscale computation: algorithms and applications.
Kevrekidis, Ioannis G; Samaey, Giovanni
2009-01-01
In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form; hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
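The central trick can be sketched as coarse projective integration: run a short burst of the fine-scale simulator, estimate the coarse time derivative from the burst, and leap forward. The "microscopic" simulator below is a toy stochastic relaxation chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def micro_burst(u, dt, n_steps):
        # toy fine-scale simulator: noisy relaxation toward u = 1
        for _ in range(n_steps):
            u = u + dt * (1.0 - u) + 0.01 * rng.normal() * np.sqrt(dt)
        return u

    u, t = 0.0, 0.0
    dt, burst, jump = 0.01, 20, 0.5
    while t < 5.0:
        u_mid = micro_burst(u, dt, burst // 2)
        u_end = micro_burst(u_mid, dt, burst - burst // 2)
        # estimate the coarse slope from the second half of the burst
        dudt = (u_end - u_mid) / (dt * (burst - burst // 2))
        u = u_end + jump * dudt              # projective leap over 'jump' time units
        t += dt * burst + jump
    print("coarse state near equilibrium:", round(u, 3))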
Basner, Mathias; Rubinstein, Joshua
2011-10-01
To evaluate the ability of a 3-minute Psychomotor Vigilance Test (PVT) to predict fatigue-related performance decrements on a simulated luggage-screening task (SLST). Thirty-six healthy nonprofessional subjects (mean age = 30.8 years, 20 women) participated in a 4-day laboratory protocol including a 34-hour period of total sleep deprivation with PVT and SLST testing every 2 hours. Eleven and 20 lapses (355-ms threshold) on the PVT optimally divided SLST performance into high-, medium-, and low-performance bouts with significantly decreasing threat detection performance A'. Assignment to the different SLST performance groups replicated homeostatic and circadian patterns during total sleep deprivation. The 3-minute PVT was able to predict performance on a simulated luggage-screening task. Fitness-for-duty feasibility should now be tested in professional screeners and operational environments.
Diverse task scheduling for individualized requirements in cloud manufacturing
NASA Astrophysics Data System (ADS)
Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida
2018-03-01
Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through networks and CMfg platforms. In a CMfg system, task scheduling, as an important means of finding suitable services for specific manufacturing tasks, plays a key role in enhancing system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First, a mathematical model of task scheduling is built based on an analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aimed at diverse tasks, which enables each service demander to obtain the desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to search for optimal task scheduling solutions. The effectiveness of the proposed scheduling method is verified by a case study with individualized customer requirements. The results indicate that the proposed task scheduling method achieves better performance than common algorithms such as simulated annealing and pattern search.
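The improved genetic algorithm itself is not specified in the abstract, but a baseline GA for this kind of service assignment can be sketched as follows; the chromosome assigns one candidate service to each subtask, and all execution times are invented:

    import random

    random.seed(1)
    n_tasks, n_services = 6, 4
    # hypothetical execution time of each task on each candidate service
    cost = [[random.uniform(1, 10) for _ in range(n_services)]
            for _ in range(n_tasks)]

    def fitness(chrom):                      # lower total time is better
        return sum(cost[t][s] for t, s in enumerate(chrom))

    def evolve(pop_size=30, generations=100):
        pop = [[random.randrange(n_services) for _ in range(n_tasks)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_tasks)          # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                   # mutation
                    child[random.randrange(n_tasks)] = random.randrange(n_services)
                children.append(child)
            pop = survivors + children
        return min(pop, key=fitness)

    best = evolve()
    print("assignment:", best, "total time:", round(fitness(best), 2))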
Analysis and simulation tools for solar array power systems
NASA Astrophysics Data System (ADS)
Pongratananukul, Nattorn
This dissertation presents simulation tools developed specifically for the design of solar array power systems. Contributions are made in several aspects of the system design phases, including solar source modeling, system simulation, and controller verification. A tool to automate the study of solar array configurations using general purpose circuit simulators has been developed based on the modeling of individual solar cells. Hierarchical structure of solar cell elements, including semiconductor properties, allows simulation of electrical properties as well as the evaluation of the impact of environmental conditions. A second developed tool provides a co-simulation platform with the capability to verify the performance of an actual digital controller implemented in programmable hardware such as a DSP processor, while the entire solar array including the DC-DC power converter is modeled in software algorithms running on a computer. This "virtual plant" allows developing and debugging code for the digital controller, and also to improve the control algorithm. One important task in solar arrays is to track the maximum power point on the array in order to maximize the power that can be delivered. Digital controllers implemented with programmable processors are particularly attractive for this task because sophisticated tracking algorithms can be implemented and revised when needed to optimize their performance. The proposed co-simulation tools are thus very valuable in developing and optimizing the control algorithm, before the system is built. Examples that demonstrate the effectiveness of the proposed methodologies are presented. The proposed simulation tools are also valuable in the design of multi-channel arrays. In the specific system that we have designed and tested, the control algorithm is implemented on a single digital signal processor. In each of the channels the maximum power point is tracked individually. In the prototype we built, off-the-shelf commercial DC-DC converters were utilized. At the end, the overall performance of the entire system was evaluated using solar array simulators capable of simulating various I-V characteristics, and also by using an electronic load. Experimental results are presented.
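Maximum power point tracking of the kind implemented on the DSP is often done with a perturb-and-observe loop; a hedged sketch against a toy panel curve (not the dissertation's plant model):

    def panel_power(v):
        # toy photovoltaic curve with a single power maximum near 16 V
        i = max(0.0, 5.0 * (1.0 - (v / 21.0) ** 8))
        return v * i

    def perturb_and_observe(v=12.0, step=0.2, iterations=200):
        p_prev = panel_power(v)
        direction = +1.0
        for _ in range(iterations):
            v += direction * step
            p = panel_power(v)
            if p < p_prev:          # power dropped: reverse perturbation direction
                direction = -direction
            p_prev = p
        return v

    print("operating voltage near MPP:", round(perturb_and_observe(), 2))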
Spectral optimization for micro-CT.
Hupfer, Martin; Nowak, Tristan; Brauweiler, Robert; Eisa, Fabian; Kalender, Willi A
2012-06-01
To optimize micro-CT protocols with respect to x-ray spectra and thereby reduce radiation dose at unimpaired image quality. Simulations were performed to assess image contrast, noise, and radiation dose for different imaging tasks. The figure of merit used to determine the optimal spectrum was the dose-weighted contrast-to-noise ratio (CNRD). Both optimal photon energy and tube voltage were considered. Three different types of filtration were investigated for polychromatic x-ray spectra: 0.5 mm Al, 3.0 mm Al, and 0.2 mm Cu. Phantoms consisted of water cylinders of 20, 32, and 50 mm in diameter with a central insert of 9 mm which was filled with different contrast materials: an iodine-based contrast medium (CM) to mimic contrast-enhanced (CE) imaging, hydroxyapatite to mimic bone structures, and water with reduced density to mimic soft tissue contrast. Validation measurements were conducted on a commercially available micro-CT scanner using phantoms consisting of water-equivalent plastics. Measurements on a mouse cadaver were performed to assess potential artifacts like beam hardening and to further validate simulation results. The optimal photon energy for CE imaging was found at 34 keV. For bone imaging, optimal energies were 17, 20, and 23 keV for the 20, 32, and 50 mm phantom, respectively. For density differences, optimal energies varied between 18 and 50 keV for the 20 and 50 mm phantom, respectively. For the 32 mm phantom and density differences, CNRD was found to be constant within 2.5% for the energy range of 21-60 keV. For polychromatic spectra and CMs, optimal settings were 50 kV with 0.2 mm Cu filtration, allowing for a dose reduction of 58% compared to the optimal setting for 0.5 mm Al filtration. For bone imaging, optimal tube voltages were below 35 kV. For soft tissue imaging, optimal tube settings strongly depended on phantom size: for 20 mm, low voltages were preferred; for 32 mm, CNRD was found to be almost independent of tube voltage; for 50 mm, voltages larger than 50 kV were preferred. For all three phantom sizes stronger filtration led to notable dose reduction for soft tissue imaging. Validation measurements were found to match simulations well, with deviations being less than 10%. Mouse measurements confirmed simulation results. Optimal photon energies and tube settings strongly depend on both phantom size and the imaging task at hand. For in vivo CE imaging and density differences, strong filtration and voltages of 50-65 kV showed good overall results. For soft tissue imaging of animals the size of a rat or larger, voltages higher than 65 kV allow scan times to be greatly reduced while maintaining dose efficiency. For imaging of bone structures, using only minimal filtration and low tube voltages of 40 kV and below allows the high contrast of bone at very low energies to be exploited. Therefore, a combination of two filtrations could prove beneficial for micro-CT: a soft filtration allowing for bone imaging at low voltages, and a variable stronger filtration (e.g., 0.2 mm Cu) for soft tissue and contrast-enhanced imaging. © 2012 American Association of Physicists in Medicine.
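The figure of merit is straightforward to compute once contrast, noise, and dose are simulated for each candidate spectrum; a sketch with invented numbers, assuming CNRD is defined as CNR divided by the square root of dose:

    import numpy as np

    # hypothetical per-spectrum simulation outputs: mean signals, noise, dose
    spectra = {
        "40 kV, 0.5 mm Al": {"mu_insert": 120.0, "mu_bg": 100.0,
                             "sigma": 4.0, "dose_mGy": 10.0},
        "50 kV, 0.2 mm Cu": {"mu_insert": 115.0, "mu_bg": 100.0,
                             "sigma": 3.0, "dose_mGy": 5.0},
        "65 kV, 0.2 mm Cu": {"mu_insert": 110.0, "mu_bg": 100.0,
                             "sigma": 2.5, "dose_mGy": 4.0},
    }

    def cnrd(s):
        cnr = abs(s["mu_insert"] - s["mu_bg"]) / s["sigma"]
        return cnr / np.sqrt(s["dose_mGy"])    # dose-weighted CNR

    for k, s in spectra.items():
        print(f"{k}: CNRD = {cnrd(s):.2f}")
    best = max(spectra, key=lambda k: cnrd(spectra[k]))
    print("optimal spectrum under these assumptions:", best)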
DOE Office of Scientific and Technical Information (OSTI.GOV)
Womersley, J.; DiGiacomo, N.; Killian, K.
1990-04-01
Detailed detector design has traditionally been divided between engineering optimization for structural integrity and subsequent physicist evaluation. The availability of CAD systems for engineering design enables the tasks to be integrated by providing tools for particle simulation within the CAD system. We believe this will speed up detector design and avoid problems due to the late discovery of shortcomings in the detector. This could occur because of the slowness of traditional verification techniques (such as detailed simulation with GEANT). One such new particle simulation tool is described. It is being used with the I-DEAS CAD package for SSC detector design at Martin-Marietta Astronautics and is to be released through the SSC Laboratory.
Heuristics in Managing Complex Clinical Decision Tasks in Experts’ Decision Making
Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme
2016-01-01
Background Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. Objective The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. Method After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts from these interviews were analyzed. Results We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Conclusion Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Application Understanding complex decision-making processes can help in designing task allocation based on task complexity for clinical decision support design. PMID:27275019
Heuristics in Managing Complex Clinical Decision Tasks in Experts' Decision Making.
Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme
2014-09-01
Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts from these interviews were analyzed. We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Understanding complex decision-making processes can help in designing task allocation based on task complexity for clinical decision support design.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Park, Subok; Markey, Mia K.
2017-03-01
Multifocal and multicentric breast cancer (MFMC), i.e., the presence of two or more tumor foci within the same breast, has an immense clinical impact on treatment planning and survival outcomes. Detecting multiple breast tumors is challenging as MFMC breast cancer is relatively uncommon, and human observers do not know the number or locations of tumors a priori. Digital breast tomosynthesis (DBT), in which an x-ray beam sweeps over a limited angular range across the breast, has the potential to improve the detection of multiple tumors [1, 2]. However, prior efforts to optimize DBT image quality only considered unifocal breast cancers (e.g., [3-9]), so the recommended geometries may not necessarily yield images that are informative for the task of detecting MFMC. Hence, the goal of this study is to employ a 3D multi-lesion (ml) channelized Hotelling observer (CHO) to identify optimal DBT acquisition geometries for MFMC. Digital breast phantoms and simulated DBT scanners of different geometries (e.g., wide or narrow arc scans, different numbers of projections in each scan) were used to generate image data for the simulation study. Multiple 3D synthetic lesions were inserted into different breast regions to simulate MF cases and MC cases. 3D partial least squares (PLS) channels and 3D Laguerre-Gauss (LG) channels were estimated to capture discriminant information and correlations among signals in locally varying anatomical backgrounds, enabling the model observer to make both image-level and location-specific detection decisions. The 3D ml-CHO with PLS channels outperformed that with LG channels in this study. The simulated MF and MC cases were not equally difficult for the ml-CHO to detect across the different simulated DBT geometries considered in this analysis. Also, the results suggest that the optimal design of DBT may vary as the task of clinical interest changes, e.g., a geometry that is better for finding at least one lesion may be worse for counting the number of lesions.
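The linear-algebra core of a channelized Hotelling observer can be sketched in a few lines: project images onto channels, build the Hotelling template from the channel-space covariance and class-mean difference, and read out a scalar test statistic. The random channel matrix below is a stand-in for the study's PLS or LG channels:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_chan, n_train = 256, 8, 200

    U = rng.normal(size=(n_pix, n_chan))       # stand-in channel matrix
    signal = np.zeros(n_pix)
    signal[100:110] = 1.5

    # training images: background-only and signal-present
    bg = rng.normal(size=(n_train, n_pix))
    sp = bg + signal

    v_bg, v_sp = bg @ U, sp @ U                # channel responses
    S = 0.5 * (np.cov(v_bg.T) + np.cov(v_sp.T))
    # Hotelling template: inverse covariance times mean difference
    w = np.linalg.solve(S, v_sp.mean(0) - v_bg.mean(0))

    # test statistic on a fresh signal-present image
    t = (rng.normal(size=n_pix) + signal) @ U @ w
    print("CHO test statistic:", round(float(t), 2))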
Metaheuristic Optimization and its Applications in Earth Sciences
NASA Astrophysics Data System (ADS)
Yang, Xin-She
2010-05-01
A common but challenging task in modelling geophysical and geological processes is to handle massive data and to minimize certain objectives. This can essentially be considered as an optimization problem, and thus many new efficient metaheuristic optimization algorithms can be used. In this paper, we will introduce some modern metaheuristic optimization algorithms such as genetic algorithms, harmony search, the firefly algorithm, particle swarm optimization and simulated annealing. We will also discuss how these algorithms can be applied to various applications in earth sciences, including nonlinear least-squares, support vector machines, Kriging, inverse finite element analysis, and data-mining. We will present a few examples to show how different problems can be reformulated as optimization. Finally, we will make some recommendations for choosing various algorithms to suit various problems.
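As one representative of the algorithms listed, simulated annealing fits in a dozen lines; the one-dimensional multimodal objective below is a toy stand-in for a geophysical misfit function:

    import math, random

    random.seed(0)

    def objective(x):
        return x * x + 10.0 * math.sin(3.0 * x)   # toy multimodal misfit

    def simulated_annealing(x=4.0, temp=5.0, cooling=0.995, steps=5000):
        best, f_best = x, objective(x)
        f_x = f_best
        for _ in range(steps):
            cand = x + random.gauss(0.0, 0.5)
            f_cand = objective(cand)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if f_cand < f_x or random.random() < math.exp((f_x - f_cand) / temp):
                x, f_x = cand, f_cand
                if f_x < f_best:
                    best, f_best = x, f_x
            temp *= cooling                       # geometric cooling schedule
        return best, f_best

    print(simulated_annealing())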
A simulator for surgery training: optimal sensory stimuli in a bone pinning simulation
NASA Astrophysics Data System (ADS)
Daenzer, Stefan; Fritzsche, Klaus
2008-03-01
Currently available low-cost haptic devices allow inexpensive surgical training with no risk to patients. Major drawbacks of lower-cost devices include limited maximum feedback force and the inability to render the torques that arise. The aim of this work was the design and implementation of a surgical simulator that allows the evaluation of multi-sensory stimuli in order to overcome these drawbacks. The simulator was built following a modular architecture to allow flexible combinations and thorough evaluation of different multi-sensory feedback modules. A Kirschner-wire (K-wire) tibial fracture fixation procedure was defined and implemented as a first test scenario. A set of computational metrics was derived from the clinical requirements of the task to objectively assess the trainee's performance during simulation. Sensory feedback modules for haptic and visual feedback have been developed, each in a basic and an enhanced form. First tests have shown that specific visual concepts can overcome some of the drawbacks associated with low-cost haptic devices. The simulator, the metrics and the surgery scenario together represent an important step towards a better understanding of the perception of multi-sensory feedback in complex surgical training tasks. Field studies on top of the architecture can open the way to risk-free and inexpensive surgical simulations that can keep up with traditional surgical training.
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
2016-09-01
This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
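The actor-critic policy gradient machinery can be sketched in its simplest form: a softmax policy whose parameters are nudged along the gradient of log-probability weighted by an advantage, with a running-mean baseline standing in for the critic network. The two-state "tasking" environment below is a toy, not the SO catalog problem:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 2, 3
    theta = np.zeros((n_states, n_actions))   # policy parameters
    baseline = 0.0                            # crude critic: running mean reward

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    def step(s, a):
        # toy environment: observing the "right" target yields reward 1
        return 1.0 if a == s else 0.0

    alpha = 0.1
    for episode in range(2000):
        s = rng.integers(n_states)
        p = policy(s)
        a = rng.choice(n_actions, p=p)
        r = step(s, a)
        advantage = r - baseline
        grad = -p
        grad[a] += 1.0                        # d log pi(a|s) / d theta[s]
        theta[s] += alpha * advantage * grad  # policy-gradient ascent
        baseline += 0.01 * (r - baseline)

    print("learned action probabilities per state:",
          np.round([policy(0), policy(1)], 2))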
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
2018-01-01
Current retinal prostheses can only generate low-resolution visual percepts, composed of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With such visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification and object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive-iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) The results, verified by psychophysical experiments, showed that under simulated prosthetic vision both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and these image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density array. Copyright © 2017 Elsevier B.V. All rights reserved.
Rahm, Stefan; Wieser, Karl; Bauer, David E; Waibel, Felix Wa; Meyer, Dominik C; Gerber, Christian; Fucentese, Sandro F
2018-05-16
Most studies have demonstrated that training on a virtual-reality-based arthroscopy simulator leads to an improvement of technical skills in orthopaedic surgery. However, how long and what kind of training is optimal for young residents is unknown. In this study we tested the efficacy of a standardized, competency-based training protocol on a validated virtual-reality-based knee and shoulder arthroscopy simulator. Twenty residents and five experts in arthroscopy were included. All participants performed a test including knee and shoulder arthroscopy tasks on a virtual-reality knee and shoulder arthroscopy simulator. The residents had to complete a competency-based training program. Thereafter, the previously completed test was retaken. We evaluated the metric data of the simulator using a z-score and the Arthroscopic Surgery Skill Evaluation Tool (ASSET) to assess training effects in residents and performance levels in experts. The residents improved significantly from pre- to post-training in the overall z-score: -9.82 (range, -20.35 to -1.64) to -2.61 (range, -6.25 to 1.5); p < 0.001. The overall ASSET score improved from 55 (27 to 84) percent to 75 (48 to 92) percent; p < 0.001. The experts, however, achieved a significantly higher z-score in the shoulder tasks (p < 0.001) and a statistically insignificantly higher z-score in the knee tasks (p = 0.921). The experts' mean overall ASSET score (knee and shoulder) was significantly higher in the therapeutic tasks (p < 0.001) compared to the residents' post-training result. The use of competency-based simulator training with this specific device for 3-5 h is an effective tool to advance the basic arthroscopic skills of residents in training (0 to 5 years), based on simulator measures and simulator-based ASSET testing. Therefore, we conclude that this sort of training method appears useful for learning camera handling, basic anatomy and triangulation with instruments.
Blast optimization for improved dragline productivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, M.; Baldwin, G.
1994-12-31
A project aimed at blast optimization for large open pit coal mines is utilizing blast monitoring and analysis techniques, advanced dragline monitoring equipment, and blast simulation software, to assess the major controlling factors affecting both blast performance and subsequent dragline productivity. This has involved collaborative work between the explosives supplier, mine operator, monitoring equipment manufacturer, and a mining research organization. The results from trial blasts and subsequently monitored dragline production have yielded promising results and continuing studies are being conducted as part of a blast optimization program. It should be stressed that the optimization of blasting practices for improved dragline productivity is a site specific task, achieved through controlled and closely monitored procedures. The benefits achieved at one location can not be simply transferred to another minesite unless similar improvement strategies are first implemented.
NASA Astrophysics Data System (ADS)
Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence
2017-06-01
We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations, but calibrating it is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between the calculated and experimental shock times of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC. LLNL-ABS-724898.
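The coupling of the global optimizer to gauge records can be sketched with scipy's differential_evolution minimizing the mismatch between computed and measured times of arrival; the one-parameter forward model below is a toy stand-in for an ALE3D run:

    import numpy as np
    from scipy.optimize import differential_evolution

    gauge_depths = np.array([2.0, 4.0, 6.0, 8.0])          # mm, hypothetical
    toa_measured = gauge_depths / 5.0 + np.array([0.01, -0.02, 0.015, 0.0])

    def forward_model(params):
        shock_speed, = params
        return gauge_depths / shock_speed                  # stand-in for a simulation

    def misfit(params):
        # sum of squared time-of-arrival errors over all embedded gauges
        return np.sum((forward_model(params) - toa_measured) ** 2)

    result = differential_evolution(misfit, bounds=[(1.0, 10.0)], seed=0)
    print("calibrated shock speed (mm/us):", round(result.x[0], 3))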
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Li, Lanfei; Sun, Hanxu
2017-12-01
Spherical flying robots can perform various tasks in complex and varied environments, reducing labor costs. However, it is difficult to guarantee the stability of a spherical flying robot under strong coupling and time-varying disturbances. In this paper, an artificial neural network controller (ANNC) based on an MPSO-BFGS hybrid optimization algorithm is proposed. The MPSO algorithm is used to optimize the initial weights of the controller to avoid local optimal solutions, and the BFGS algorithm is introduced to improve the convergence of the network. We use the Lyapunov method to analyze the stability of the ANNC. The controller is simulated under nonlinear coupling disturbance. The experimental results show that the proposed controller reaches the expected value in a shorter time than the other methods considered.
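The hybrid scheme — a particle swarm searching broadly for good initial weights, followed by BFGS refinement of the swarm's best solution — can be sketched on a toy weight-fitting landscape; the loss function is invented, and this is generic PSO rather than the paper's MPSO variant:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    def loss(w):
        # toy multimodal landscape standing in for controller-network training error
        return np.sum(w ** 2) + 3.0 * np.sum(np.sin(2.0 * w) ** 2)

    dim, n_particles = 4, 20
    x = rng.uniform(-3, 3, (n_particles, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_val = np.array([loss(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()

    for _ in range(100):                       # standard PSO velocity update
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()].copy()

    refined = minimize(loss, g_best, method="BFGS")   # local polish of swarm best
    print("PSO loss:", round(loss(g_best), 4), "-> BFGS loss:", round(refined.fun, 4))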
NASA Astrophysics Data System (ADS)
Zhang, Yunshen
2017-11-01
Guided by the Circular on the Construction of National Virtual Simulation Experimental Teaching Centers issued by the National Department of Education, and in line with its construction tasks and work content, this paper draws on the experience of the virtual chemical laboratory simulation experimental teaching center at Tianjin University. It first strengthens the understanding of the virtual simulation experimental teaching center from three aspects, and on this basis puts forward specific construction ideas, summarized as "four combinations, five in one, optimization of resources and school-enterprise cooperation", together with effective explorations of these ideas. Taking the synthesis and analysis of organic compounds as an example, it also demonstrates the capabilities of the virtual simulation experimental teaching platform in all its aspects.
Optimization and Comparison of Different Digital Mammographic Tomosynthesis Reconstruction Methods
2008-04-01
Excerpts: physical measurements of impulse response analysis, modulation transfer function (MTF) and noise power spectrum (NPS) (Months 5-12); projection images with simulated impulses and the 1/r² shading difference; point-by-point BP rather than traditional SAA should be considered as the basis of further deblurring.
Zhang, Jianhua; Yin, Zhong; Wang, Rubin
2017-01-01
This paper developed a cognitive task-load (CTL) classification algorithm and allocation strategy to sustain the optimal operator CTL levels over time in safety-critical human-machine integrated systems. An adaptive human-machine system is designed based on a non-linear dynamic CTL classifier, which maps a set of electroencephalogram (EEG) and electrocardiogram (ECG) related features to a few CTL classes. The least-squares support vector machine (LSSVM) is used as dynamic pattern classifier. A series of electrophysiological and performance data acquisition experiments were performed on seven volunteer participants under a simulated process control task environment. The participant-specific dynamic LSSVM model is constructed to classify the instantaneous CTL into five classes at each time instant. The initial feature set, comprising 56 EEG and ECG related features, is reduced to a set of 12 salient features (including 11 EEG-related features) by using the locality preserving projection (LPP) technique. An overall correct classification rate of about 80% is achieved for the 5-class CTL classification problem. Then the predicted CTL is used to adaptively allocate the number of process control tasks between operator and computer-based controller. Simulation results showed that the overall performance of the human-machine system can be improved by using the adaptive automation strategy proposed.
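A binary least-squares SVM reduces training to a single linear system, which is what makes it attractive for this kind of classifier; a minimal sketch with an RBF kernel on synthetic stand-ins for the 12 selected features (the five-class problem would be handled by combining such binary machines):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (30, 12)), rng.normal(1.5, 1, (30, 12))])
    y = np.concatenate([-np.ones(30), np.ones(30)])

    def rbf(A, B, gamma=0.1):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    # LSSVM dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
    K = rbf(X, X)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / 10.0          # regularization constant C = 10
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        return np.sign(rbf(Xnew, X) @ alpha + b)

    print("training accuracy:", (predict(X) == y).mean())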
Impact of police body armour and equipment on mobility.
Dempsey, Paddy C; Handcock, Phil J; Rehrer, Nancy J
2013-11-01
Body armour is used widely by law enforcement and other agencies but has received mixed reviews. This study examined the influence of stab resistant body armour (SRBA) and mandated accessories on physiological responses to, and the performance of, simulated mobility tasks. Fifty-two males (37 ± 9.2 yr, 180.7 ± 6.1 cm, 90.2 ± 11.6 kg, VO2max 50 ± 8.5 ml·kg⁻¹·min⁻¹, BMI 27.6 ± 3.1, mean ± SD) completed a running VO2max test and task familiarisation. Two experimental sessions were completed (≥4 days in between) in a randomised counterbalanced order, one while wearing SRBA and appointments (loaded) and one without additional load (unloaded). During each session participants performed five mobility tasks: a balance task, an acceleration task that simulated exiting a vehicle, chin-ups, a grappling task, and a manoeuvrability task. A 5-min treadmill run (zero-incline at 13 km·h⁻¹, running start) was then completed. One min after the run the five mobility tasks were repeated. There was a significant decrease in performance during all tasks with loading (p < 0.001). Participants were off-balance longer; slower to complete the acceleration, grapple and mobility tasks; completed fewer chin-ups; and had greater physiological cost (↑ %HRmax, ↑ %VO2max, ↑ RER) and perceptual effort (↑ RPE) during the 5-min run. Mean performance decreases ranged from 13 to 42% while loaded, with further decreases of 6-16% noted after the 5-min run. Unloaded task performance was no different between phases. Wearing SRBA and appointments significantly reduced mobility during key task elements and resulted in greater physiological effort. These findings could have consequences for optimal function in the working environment and therefore officer and public safety. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation
NASA Technical Reports Server (NTRS)
Birge, Brian
2012-01-01
The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of how to make its home in a global sense, but their interactions based strictly on local desires create a complex superstructure. Applying this Emergence concept to a simulation of cooperative agents (robots) will allow an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies while leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under swarm robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm robotics often draws on biologically inspired approaches; research comes from social insect observation as well as from data on herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration are the method under consideration for further study.
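A minimal sketch of the Emergence idea described above, using one invented local rule ("dig next to existing excavation") on a toy grid; none of the parameters come from the paper:

```python
import random

# Each agent follows a single local rule with no global blueprint; complex
# excavation patterns emerge from repeated local choices.
WIDTH, STEPS, N_AGENTS = 20, 200, 5
dug = {(0, 0)}                      # tunnel seed
agents = [(0, 0)] * N_AGENTS

def neighbours(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < WIDTH and 0 <= y + dy < WIDTH]

for _ in range(STEPS):
    for i, pos in enumerate(agents):
        options = neighbours(pos)
        # local desire: prefer cells adjacent to existing excavation
        frontier = [c for c in options if any(n in dug for n in neighbours(c))]
        agents[i] = random.choice(frontier or options)
        dug.add(agents[i])          # excavate the visited cell

print(f"{len(dug)} cells excavated by local rules alone")
```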
Gray, Rob
2013-08-01
Performance of a skill that involves acting on a goal object (e.g., a ball to be hit) can influence one's judgment of the size and speed of that object. The present study examined how these action-specific effects are affected when the goal of the actor is varied and they are free to choose between alternative actions. In Experiment 1, expert baseball players were asked to perform three different directional hitting tasks in a batting simulation and make interleaved perceptual judgments about three ball parameters (speed, plate crossing location, and size). Perceived ball size was largest (and perceived speed was slowest) when the ball crossing location was optimal for the particular hitting task the batter was performing (e.g., an "outside" pitch for opposite-field hitting). The magnitude of processing dependency between variables (speed vs. location and size vs. location) was positively correlated with batting performance. In Experiment 2, the action-specific effects observed in Experiment 1 were mimicked by systematically changing the ball diameter in the simulation as a function of plate crossing location. The number of swing initiations was greater when ball size was larger, and batters were more successful in the hitting task for which the larger pitches were optimal (e.g., greater number of pull hits than opposite-field hits when "inside" pitches were larger). These findings suggest attentional accentuation of goal-relevant targets underlies action-related changes in perception and are consistent with an action selection role for these effects. 2013 APA, all rights reserved
Learning-based stochastic object models for use in optimizing imaging systems
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer-simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
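The sampling step of such a learned SOM can be sketched as follows, assuming (purely for illustration) that each organ is summarized by an attribute vector and that new instances are drawn along the leading eigenvectors of the training covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: per-patient attribute vectors for one organ
# (e.g., centroid x, y, z and two shape descriptors), purely illustrative.
train = rng.normal(size=(40, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])

mean = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
vals, vecs = np.linalg.eigh(cov)            # principal attribute variations
order = np.argsort(vals)[::-1][:3]          # keep the leading modes

def sample_organ(n_sigma=2.0):
    """Draw a random organ instance constrained to +/- n_sigma per mode."""
    coeffs = rng.uniform(-n_sigma, n_sigma, size=len(order))
    return mean + (vecs[:, order] * np.sqrt(vals[order])) @ coeffs

phantom_attrs = [sample_organ() for _ in range(10)]   # ensemble of phantoms
print(phantom_attrs[0])
```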
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although very limitedly. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. The corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are then proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328
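A generic genetic algorithm over task-to-UAV assignment vectors conveys the overall search structure; a plain single-point crossover stands in for the paper's refined GA-RX operator, and the load-balance fitness is invented for illustration:

```python
import random

TASKS, UAVS = 8, 3

def fitness(assign):
    # Toy objective: balance task load across UAVs (lower spread is better).
    loads = [assign.count(u) for u in range(UAVS)]
    return -(max(loads) - min(loads))

def crossover(a, b):
    cut = random.randrange(1, TASKS)        # plain single-point crossover
    return a[:cut] + b[cut:]

def mutate(a, rate=0.1):
    return [random.randrange(UAVS) if random.random() < rate else g for g in a]

pop = [[random.randrange(UAVS) for _ in range(TASKS)] for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)     # elitist selection
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))
```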
A model for the submarine depthkeeping team
NASA Technical Reports Server (NTRS)
Ware, J. R.; Best, J. F.; Bozzi, P. J.; Kleinman, D. W.
1981-01-01
The most difficult task the depthkeeping team must face occurs during periscope-depth operations during which they may be required to maintain a submarine several hundred feet long within a foot of ordered depth and within one-half degree of ordered pitch. The difficulty is compounded by the facts that wave generated forces are extremely high, depth and pitch signals are very noisy and submarine speed is such that overall dynamics are slow. A mathematical simulation of the depthkeeping team based on the optimal control models is described. A solution of the optimal team control problem with an output control restriction (limited display to each controller) is presented.
Dr.LiTHO: a development and research lithography simulator
NASA Astrophysics Data System (ADS)
Fühner, Tim; Schnattinger, Thomas; Ardelean, Gheorghe; Erdmann, Andreas
2007-03-01
This paper introduces Dr.LiTHO, a research and development oriented lithography simulation environment developed at Fraunhofer IISB to flexibly integrate our simulation models into one coherent platform. We propose a light-weight approach to a lithography simulation environment: the use of a scripting (batch) language as an integration platform. Out of the great variety of different scripting languages, Python proved superior in many ways: it exhibits a gentle learning curve, it is efficient, available on virtually any platform, and provides sophisticated integration mechanisms for existing programs. In this paper, we will describe the steps required to provide Python bindings for existing programs and to finally generate an integrated simulation environment. In addition, we will give a short introduction into selected software design demands associated with the development of such a framework. We will especially focus on testing and (both technical and user-oriented) documentation issues. Dr.LiTHO Python files contain not only all simulation parameter settings but also the simulation flow, providing maximum flexibility. In addition to relatively simple batch jobs, repetitive tasks can be pooled in libraries. And as Python is a full-blown programming language, users can add virtually any functionality, which is especially useful in the scope of simulation studies or optimization tasks that often require large numbers of evaluations. Furthermore, we will give a short overview of the numerous existing Python packages. Several examples demonstrate the feasibility and productiveness of integrating Python packages into custom Dr.LiTHO scripts.
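In the spirit of the scripted batch runs described here, a Python parameter sweep might look like the following; note that the simulate callable and its parameters are hypothetical stand-ins, not the actual Dr.LiTHO API:

```python
import itertools

def run_batch(simulate, doses, focuses):
    """Sweep a dose/focus matrix and collect one result per setting."""
    results = {}
    for dose, focus in itertools.product(doses, focuses):
        results[(dose, focus)] = simulate(dose=dose, focus=focus)
    return results

def fake_simulate(dose, focus):             # stand-in for a real solver call
    return abs(dose - 30.0) + abs(focus)    # toy "CD error" metric

table = run_batch(fake_simulate, doses=[25, 30, 35], focuses=[-0.1, 0.0, 0.1])
best = min(table, key=table.get)
print("best (dose, focus):", best)
```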
Support vector machine firefly algorithm based optimization of lens system.
Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah
2015-01-01
Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed with the simulation results. The result of the proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
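A compact firefly-algorithm sketch minimizing a stand-in spot-radius merit function; the quadratic objective and all constants are illustrative, not a lens model:

```python
import random, math

def spot_radius(params):
    # Stand-in merit function for a lens design; not a real ray trace.
    return sum((p - 0.3) ** 2 for p in params)

def firefly(n=15, dim=4, steps=100, beta0=1.0, gamma=1.0, alpha=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if spot_radius(pop[j]) < spot_radius(pop[i]):   # j is "brighter"
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)        # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * random.uniform(-0.5, 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=spot_radius)

print("best spot radius:", spot_radius(firefly()))
```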
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, Aaron Simon; Chen, Jun; Rabiti, Cristian
Continued effort to design and build a modeling and simulation framework to assess the economic viability of Nuclear Hybrid Energy Systems (NHES) was undertaken in fiscal year (FY) 2016. The purpose of this report is to document the various tasks associated with the development of such a framework and to provide a status of their progress. Several tasks have been accomplished. First, a synthetic time history generator has been developed in RAVEN, which consists of a Fourier series and an autoregressive moving average model. The former is used to capture the seasonal trend in historical data, while the latter characterizes the autocorrelation in the residue time series (e.g., measurements with seasonal trends subtracted). As a demonstration, both synthetic wind speed and grid demand are generated, showing statistics that match the database. In order to build a design and operations optimizer in RAVEN, a new type of sampler has been developed with a highly object-oriented design. In particular, a simultaneous perturbation stochastic approximation algorithm is implemented. The optimizer is capable of driving the model to optimize a scalar objective function without constraints in the input space, while constraint handling is a work in progress and will be implemented to improve the optimization capability. Furthermore, a simplified cash flow model of the performance of an NHES in the electric market has been developed in Python and used as an external model in RAVEN to confirm expectations on the analysis capability of RAVEN to provide insight into system economics and to test the capability of RAVEN to identify limit surfaces. Finally, an example calculation is performed that shows the integration and proper data passing in RAVEN of the synthetic time history generator, the cash flow model and the optimizer. It has been shown that the developed Python models external to RAVEN are able to communicate with RAVEN and each other through the newly developed RAVEN capability called “EnsembleModel”.
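The Fourier-plus-autoregressive decomposition can be illustrated in a few lines; here the seasonal coefficients and an AR(1) residual (a simplification of a full ARMA model) are assumed values, not fitted RAVEN output:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24 * 365)                       # hourly samples for one year

# Seasonal trend captured by a short Fourier series (coefficients assumed).
trend = (8.0 + 2.0 * np.sin(2 * np.pi * t / (24 * 365))
             + 1.0 * np.sin(2 * np.pi * t / 24))

# Residual autocorrelation captured by an AR(1) model (phi assumed).
phi = 0.8
resid = np.zeros(len(t))
for k in range(1, len(t)):
    resid[k] = phi * resid[k - 1] + rng.normal(scale=0.5)

wind = np.clip(trend + resid, 0.0, None)      # synthetic wind-speed history
print(wind[:5])
```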
Increasing Optimism Protects Against Pain-Induced Impairment in Task-Shifting Performance.
Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L
2017-04-01
Persistent pain can lead to difficulties in executive task performance. Three core executive functions that are often postulated are inhibition, updating, and shifting. Optimism, the tendency to expect that good things happen in the future, has been shown to protect against pain-induced performance deterioration in executive function updating. This study tested whether this protective effect of a temporary optimistic state, induced by means of a writing and visualization exercise, extended to executive function shifting. A 2 (optimism: optimism vs no optimism) × 2 (pain: pain vs no pain) mixed factorial design was conducted. Participants (N = 61) completed a shifting task once with and once without concurrent painful heat stimulation after an optimism or neutral manipulation. Results showed that shifting performance was impaired when experimental heat pain was applied during task execution, and that optimism counteracted pain-induced deterioration in task-shifting performance. In sum, experimentally induced heat pain impaired shifting task performance, and induced optimism counteracted this pain-induced performance deterioration. Identifying psychological factors that may diminish the negative effect of persistent pain on the ability to function in daily life is imperative. Copyright © 2016 American Pain Society. Published by Elsevier Inc. All rights reserved.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
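A minimal KELM-style kernel regression with a parameter search is sketched below; a plain grid search stands in for the CSA plus Nelder-Mead scheme of the paper, and the sensor response is a toy function:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))               # (temperature, static pressure)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2         # toy sensor nonlinearity

def rbf_kernel(A, B, g):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d)

def kelm_fit_predict(Xtr, ytr, Xte, C, g):
    K = rbf_kernel(Xtr, Xtr, g)
    alpha = np.linalg.solve(K + np.eye(len(Xtr)) / C, ytr)   # regularized solve
    return rbf_kernel(Xte, Xtr, g) @ alpha

# Grid search over (regularization C, kernel parameter g) on a held-out split.
best = min(product([1, 10, 100], [0.5, 1, 2]),
           key=lambda p: np.mean((kelm_fit_predict(X[:150], y[:150], X[150:], *p)
                                  - y[150:]) ** 2))
print("best (C, gamma):", best)
```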
Handling qualities of large flexible control-configured aircraft
NASA Technical Reports Server (NTRS)
Swaim, R. L.
1980-01-01
The effects on handling qualities of low frequency symmetric elastic mode interaction with the rigid body dynamics of a large flexible aircraft were analyzed by use of a mathematical pilot modeling computer simulation. An extension of the optimal control model for a human pilot was made so that the mode interaction effects on the pilot's control task could be assessed. Pilot ratings were determined for a longitudinal tracking task with parametric variations in the undamped natural frequencies of the two lowest frequency symmetric elastic modes made to induce varying amounts of mode interaction. Relating numerical performance index values associated with the frequency variations used in several dynamic cases to a numerical Cooper-Harper pilot rating has proved successful in discriminating when the mathematical pilot can or cannot separate rigid from elastic response in the tracking task.
OGUPSA sensor scheduling architecture and algorithm
NASA Astrophysics Data System (ADS)
Zhang, Zhixiong; Hintz, Kenneth J.
1996-06-01
This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are used successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, as well as generate an optimal schedule, in the sense of minimum makespan, for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while being a polynomial time algorithm. Results of a simulation are presented for a simple sensor system.
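A stripped-down greedy dispatch in the spirit of MUF and ECF (the LVF tie-break and the preemption mechanism are omitted, and the task list is invented):

```python
# Tasks: (name, priority: lower = more urgent, duration in time units).
tasks = [
    ("track", 0, 3.0), ("search", 1, 2.0), ("confirm", 0, 1.0),
]
sensors = {"s1": 0.0, "s2": 0.0}     # time at which each sensor frees up

for name, prio, dur in sorted(tasks, key=lambda t: t[1]):   # most-urgent-first
    # earliest-completed-first: pick the sensor that finishes this task first
    sid = min(sensors, key=lambda s: sensors[s] + dur)
    start = sensors[sid]
    sensors[sid] = start + dur
    print(f"{name}: sensor {sid}, start {start:.1f}, end {sensors[sid]:.1f}")
```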
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI, located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case of imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
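The "measured d' map over a densely sampled parameter space" step can be sketched as a grid search; the analytic stand-in below replaces what would in practice be simulation- or measurement-derived detectability values:

```python
import numpy as np

def measure_dprime(p1, p2):
    # Stand-in for an experimentally measured detectability map d'_ij(P);
    # the real map comes from image simulations, not a closed-form formula.
    return np.exp(-((p1 - 0.4) ** 2 + (p2 - 0.7) ** 2) / 0.1)

grid = np.linspace(0, 1, 51)                  # densely sampled parameter space
dmap = np.array([[measure_dprime(a, b) for b in grid] for a in grid])
i, j = np.unravel_index(np.argmax(dmap), dmap.shape)
print(f"optimal parameters: ({grid[i]:.2f}, {grid[j]:.2f}), d' = {dmap[i, j]:.3f}")
```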
NASA Technical Reports Server (NTRS)
Azzano, Christopher P.
1992-01-01
Control of a large jet transport aircraft without the use of conventional control surfaces was studied. Engine commands were used to attempt to recreate the forces and moments typically provided by the elevator, ailerons, and rudder. Necessary conditions for aircraft controllability were developed pertaining to aircraft configuration such as the number of engines and engine placement. An optimal linear quadratic regulator controller was developed for the Boeing 707-720, in particular, for regulation of its natural dynamic modes. The design used a method of assigning relative weights to the natural modes, i.e., phugoid and dutch roll, for a more intuitive selection of the cost function. A prototype pilot command interface was then integrated into the loop based on pseudorate command of both pitch and roll. Closed loop dynamics were evaluated first with a batch linear simulation and then with a real time high fidelity piloted simulation. The NASA research pilots assisted in evaluation of closed loop handling qualities for typical cruise and landing tasks. Recommendations for improvement on this preliminary study of optimal propulsion only flight control are provided.
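A minimal LQR regulator of the kind described, computed from the continuous algebraic Riccati equation; the two-state model and weights are assumed toy values, not the 707-720 dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state longitudinal model (values assumed for illustration).
A = np.array([[0.0, 1.0], [-0.5, -0.3]])
B = np.array([[0.0], [0.8]])                 # thrust-induced pitch moment

# Heavier weight on the first state, loosely analogous to weighting a
# phugoid-like mode more strongly in the cost function.
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)              # optimal feedback u = -K x
print("LQR gain:", K)
```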
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klishin, G.S.; Seleznev, V.E.; Aleoshin, V.V.
1997-12-31
Gas industry enterprises such as main pipelines, compressor gas transfer stations, and gas extracting complexes belong to the energy intensive industry. Accidents there can result in catastrophes and great social, environmental and economic losses. Annually, according to official data, several dozen large accidents take place at pipelines in the USA and Russia. That is why prevention of accidents, analysis of the mechanisms of their development and prediction of their possible consequences are acute and important tasks nowadays. The causes of accidents are usually of a complicated character and can be presented as a complex combination of natural, technical and human factors. Mathematical and computer simulations are safe, rather effective and comparatively inexpensive methods of accident analysis. They make it possible to analyze different mechanisms of a failure occurrence and development, to assess its consequences and give recommendations to prevent it. Besides investigation of failure cases, numerical simulation techniques play an important role in the treatment of the diagnostics results of the objects and in further construction of mathematical prognostic simulations of the object behavior in the period of time between two inspections. While solving diagnostics tasks and in the analysis of failure cases, the techniques of theoretical mechanics, of qualitative theory of differential equations, of mechanics of a continuous medium, of chemical macro-kinetics and optimizing techniques are implemented in the Conversion Design Bureau #5 (DB#5). Both universal and special numerical techniques and software (SW) are being developed in DB#5 for the solution of such tasks. Almost all of them are calibrated on calculations of simulated and full-scale experiments performed at the VNIIEF and MINATOM testing sites. It is worth noting that in the long years of work there has been established a fruitful and effective collaboration of theoreticians, mathematicians and experimentalists of the institute to solve such tasks.
Optimization of the production process using virtual model of a workspace
NASA Astrophysics Data System (ADS)
Monica, Z.
2015-11-01
Optimization of the production process is an element of the design cycle consisting of: problem definition, modelling, simulation, optimization and implementation. Without the use of simulation techniques, the only thing which could be achieved is a larger or smaller improvement of the process, not its optimization (i.e., the best result it is possible to get for the conditions under which the process works). Optimization generally comprises management actions that ultimately bring savings in time, resources and raw materials and improve the performance of a specific process, whether it is a service or a manufacturing process. Optimization generates savings by improving and increasing the efficiency of processes. It consists primarily of organizational activities that require very little investment, or rely solely on changing the organization of work. Modern companies operating in a market economy show a significant increase in interest in modern methods of production management and services. This trend is due to the high competitiveness among companies, which, wanting to achieve success, are forced to continually modify the way they manage and to respond flexibly to changing demand. Modern methods of production management not only imply a stable position of the company in the sector, but also influence the improvement of health and safety within the company and contribute to the implementation of more efficient rules for work standardization in the company. This is why the paper presents the application of an environment such as Siemens NX to create a virtual model of a production system and to simulate as well as optimize its work. The analyzed system is a robotized workcell consisting of machine tools, industrial robots, conveyors, auxiliary equipment and buffers. The control program realizing the main task in the virtual workcell can be defined within the environment. Using this tool, it is possible to optimize both the object trajectory and the cooperation process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M.K.
1999-05-10
Using ORNL information on the characterization of the tank waste sludges, SRTC performed extensive bench-scale vitrification studies using simulants. Several glass systems were tested to ensure the optimum glass composition (based on the glass liquidus temperature, viscosity and durability) is determined. This optimum composition will balance waste loading, melt temperature, waste form performance and disposal requirements. By optimizing the glass composition, a cost savings can be realized during vitrification of the waste. The preferred glass formulation was selected from the bench-scale studies and recommended to ORNL for further testing with samples of actual OR waste tank sludges.
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves the performance of multiple objectives. The ANN method is effective in adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
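The AHP weighting step can be sketched via the principal eigenvector of a pairwise comparison matrix; the judgments below are illustrative, not the paper's:

```python
import numpy as np

# Pairwise comparison of three decision factors (e.g., bandwidth, delay,
# cost); matrix entries are illustrative AHP judgments.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                   # AHP weights from the principal eigenvector
print("weights:", np.round(w, 3))
```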
Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan
2016-01-01
Fixture plays an important part in constraining excessive sheet metal part deformation at machining, assembly, and measuring stages during the whole manufacturing process. However, it is still a difficult and nontrivial task to design and optimize the sheet metal fixture locating layout at present, because there is no direct and explicit expression describing the sheet metal fixture locating layout and the responding deformation. To that end, an RBF neural network prediction model is proposed in this paper to assist design and optimization of sheet metal fixture locating layout. The RBF neural network model is constructed from a training data set selected by uniform sampling and finite element simulation analysis. Finally, a case study is conducted to verify the proposed method. PMID:27127499
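A minimal RBF-network surrogate of the kind described, trained on invented stand-ins for finite element samples (two locator coordinates mapping to a deformation value):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-in for FE results: locator layout (2 coordinates) -> deformation.
X = rng.uniform(0, 1, size=(60, 2))
y = np.sin(4 * X[:, 0]) * np.cos(3 * X[:, 1])

centers, beta = X, 25.0                       # one Gaussian RBF per sample

def design(Xq):
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-beta * d2)

w = np.linalg.solve(design(X) + 1e-8 * np.eye(len(X)), y)   # interpolation weights

def predict_deformation(layout):
    return design(np.atleast_2d(layout)) @ w

print(predict_deformation([0.5, 0.5]))
```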
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
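The spline-parameterized minimum-torque idea can be illustrated on a one-degree-of-freedom toy arm, with a quasi-Newton optimizer (BFGS) standing in for the paper's solver; the inertia, knots, and cost are assumed values:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

t_knots = np.linspace(0, 1, 5)               # spline knots make the search space finite
t_dense = np.linspace(0, 1, 100)
I = 1.0                                       # link inertia (assumed)

def torque_cost(knots):
    # Joint trajectory from fixed endpoints (0 -> 1) plus free interior knots.
    q = CubicSpline(t_knots, np.concatenate(([0.0], knots, [1.0])))
    tau = I * q(t_dense, 2)                   # toy inverse dynamics: tau = I * qddot
    return float(np.mean(tau ** 2))           # mean squared torque as the cost

res = minimize(torque_cost, x0=np.linspace(0.2, 0.8, 3), method="BFGS")
print("optimal interior knots:", res.x)
```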
NASA Technical Reports Server (NTRS)
Biess, J. J.; Yu, Y.; Middlebrook, R. D.; Schoenfeld, A. D.
1974-01-01
A review is given of future power processing systems planned for the next 20 years, and the state-of-the-art of power processing design modeling and analysis techniques used to optimize power processing systems. A methodology of modeling and analysis of power processing equipment and systems has been formulated to fulfill future tradeoff studies and optimization requirements. Computer techniques were applied to simulate power processor performance and to optimize the design of power processing equipment. A program plan to systematically develop and apply the tools for power processing systems modeling and analysis is presented so that meaningful results can be obtained each year to aid the power processing system engineer and power processing equipment circuit designers in their conceptual and detail design and analysis tasks.
Energy aware swarm optimization with intercluster search for wireless sensor network.
Thilagavathi, Shanmugasundaram; Geetha, Bhavani Gnanasambandan
2015-01-01
Wireless sensor networks (WSNs) are emerging as a low-cost, popular solution for many real-world challenges. The low cost ensures deployment of large sensor arrays to perform military and civilian tasks. Generally, WSNs are power constrained due to their unique deployment method, which makes replacement of the battery source difficult. Challenges in WSNs include establishing a well-organized communication platform for the network with negligible power utilization. In this work, an improved binary particle swarm optimization (PSO) algorithm with a modified connected dominating set (CDS) based on residual energy is proposed for discovery of the optimal number of clusters and cluster heads (CHs). Simulations show that the proposed BPSO-T and BPSO-EADS perform better than LEACH- and PSO-based systems in terms of energy savings and QoS.
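A bare-bones binary PSO for cluster head selection with a sigmoid transfer function; the energy-based fitness is an invented stand-in for the paper's CDS-based objective:

```python
import random, math

N = 20                                                  # sensor nodes
energy = [random.uniform(0.2, 1.0) for _ in range(N)]   # residual energies

def fitness(bits):
    # Favour few cluster heads with high residual energy (toy objective).
    heads = [i for i, b in enumerate(bits) if b]
    if not heads:
        return -1e9
    return sum(energy[i] for i in heads) - 0.5 * len(heads)

def bpso(swarm=15, steps=100):
    pos = [[random.randint(0, 1) for _ in range(N)] for _ in range(swarm)]
    vel = [[0.0] * N for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=fitness)[:]
    for _ in range(steps):
        for k in range(swarm):
            for d in range(N):
                vel[k][d] += (random.random() * (pbest[k][d] - pos[k][d])
                              + random.random() * (gbest[d] - pos[k][d]))
                sig = 1 / (1 + math.exp(-vel[k][d]))     # sigmoid transfer
                pos[k][d] = 1 if random.random() < sig else 0
            if fitness(pos[k]) > fitness(pbest[k]):
                pbest[k] = pos[k][:]
        gbest = max(pbest + [gbest], key=fitness)[:]
    return gbest

print("cluster heads:", [i for i, b in enumerate(bpso()) if b])
```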
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big-data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
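A single-chain forward-backward iteration with an ℓ1 prior (plain ISTA) conveys the core proximal-splitting step; the parallel, randomized block structure of the proposed algorithms is not reproduced here, and the measurement operator is a toy random matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 128
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # toy measurement operator
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0  # sparse ground truth
y = Phi @ x_true

def soft(v, t):                               # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(300):                          # forward-backward iterations
    grad = Phi.T @ (Phi @ x - y)              # forward (gradient) step
    x = soft(x - grad / L, lam / L)           # backward (proximal) step

print("recovery error:", np.linalg.norm(x - x_true))
```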
Spectral optimization for micro-CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hupfer, Martin; Nowak, Tristan; Brauweiler, Robert
2012-06-15
Purpose: To optimize micro-CT protocols with respect to x-ray spectra and thereby reduce radiation dose at unimpaired image quality. Methods: Simulations were performed to assess image contrast, noise, and radiation dose for different imaging tasks. The figure of merit used to determine the optimal spectrum was the dose-weighted contrast-to-noise ratio (CNRD). Both optimal photon energy and tube voltage were considered. Three different types of filtration were investigated for polychromatic x-ray spectra: 0.5 mm Al, 3.0 mm Al, and 0.2 mm Cu. Phantoms consisted of water cylinders of 20, 32, and 50 mm in diameter with a central insert of 9 mm which was filled with different contrast materials: an iodine-based contrast medium (CM) to mimic contrast-enhanced (CE) imaging, hydroxyapatite to mimic bone structures, and water with reduced density to mimic soft tissue contrast. Validation measurements were conducted on a commercially available micro-CT scanner using phantoms consisting of water-equivalent plastics. Measurements on a mouse cadaver were performed to assess potential artifacts like beam hardening and to further validate simulation results. Results: The optimal photon energy for CE imaging was found at 34 keV. For bone imaging, optimal energies were 17, 20, and 23 keV for the 20, 32, and 50 mm phantom, respectively. For density differences, optimal energies varied between 18 and 50 keV for the 20 and 50 mm phantom, respectively. For the 32 mm phantom and density differences, CNRD was found to be constant within 2.5% for the energy range of 21-60 keV. For polychromatic spectra and CMs, optimal settings were 50 kV with 0.2 mm Cu filtration, allowing for a dose reduction of 58% compared to the optimal setting for 0.5 mm Al filtration. For bone imaging, optimal tube voltages were below 35 kV. For soft tissue imaging, optimal tube settings strongly depended on phantom size. For 20 mm, low voltages were preferred. For 32 mm, CNRD was found to be almost independent of tube voltage. For 50 mm, voltages larger than 50 kV were preferred. For all three phantom sizes stronger filtration led to notable dose reduction for soft tissue imaging. Validation measurements were found to match simulations well, with deviations being less than 10%. Mouse measurements confirmed simulation results. Conclusions: Optimal photon energies and tube settings strongly depend on both phantom size and imaging task at hand. For in vivo CE imaging and density differences, strong filtration and voltages of 50-65 kV showed good overall results. For soft tissue imaging of animals the size of a rat or larger, voltages higher than 65 kV allow to greatly reduce scan times while maintaining dose efficiency. For imaging of bone structures, usage of only minimum filtration and low tube voltages of 40 kV and below allow exploiting the high contrast of bone at very low energies. Therefore, a combination of two filtrations could prove beneficial for micro-CT: a soft filtration allowing for bone imaging at low voltages, and a variable stronger filtration (e.g., 0.2 mm Cu) for soft tissue and contrast-enhanced imaging.
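The structure of a CNRD figure-of-merit calculation can be illustrated for a monochromatic toy model; all attenuation values and the dose proxy below are rough assumed numbers tuned for illustration, so only the shape of the calculation, not the reported optima, should be read from it:

```python
import numpy as np

energies = np.array([20., 30., 34., 40., 50.])          # keV
mu_water = np.array([0.81, 0.38, 0.31, 0.27, 0.23])     # 1/cm, assumed values
# Iodine mixture attenuation relative to water; the jump at 34 keV loosely
# mimics the iodine K-edge (ratios are illustrative, not tabulated data).
mu_iodine_mix = mu_water * np.array([3.0, 2.4, 4.0, 2.8, 2.2])

diameter, insert = 3.2, 0.9                              # cm (32 mm phantom)
contrast = (mu_iodine_mix - mu_water) * insert
noise = np.exp(0.5 * mu_water * diameter)                # ~ 1/sqrt(transmitted photons)
dose = energies * (1 - np.exp(-mu_water * diameter))     # deposited-energy proxy

cnrd = contrast / noise / np.sqrt(dose)                  # dose-weighted CNR
print("toy optimal energy:", energies[np.argmax(cnrd)], "keV")
```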
Testing the Limits of Optimizing Dual-Task Performance in Younger and Older Adults
Strobach, Tilo; Frensch, Peter; Müller, Herrmann Josef; Schubert, Torsten
2012-01-01
Impaired dual-task performance in younger and older adults can be improved with practice. Optimal conditions even allow for a (near) elimination of this impairment in younger adults. However, it is unknown whether such (near) elimination is the limit of performance improvements in older adults. The present study tests this limit in older adults under conditions of (a) a high amount of dual-task training and (b) training with simplified component tasks in dual-task situations. The data showed that a high amount of dual-task training in older adults provided no evidence for an improvement of dual-task performance to the optimal dual-task performance level achieved by younger adults. However, training with simplified component tasks in dual-task situations exclusively in older adults provided a similar level of optimal dual-task performance in both age groups. Therefore through applying a testing the limits approach, we demonstrated that older adults improved dual-task performance to the same level as younger adults at the end of training under very specific conditions. PMID:22408613
Park, Chanhun; Nam, Hee-Geun; Lee, Ki Bong; Mun, Sungyong
2014-10-24
The economically-efficient separation of formic acid from acetic acid and succinic acid has been a key issue in the production of formic acid with the Actinobacillus bacteria fermentation. To address this issue, an optimal three-zone simulated moving bed (SMB) chromatography for continuous separation of formic acid from acetic acid and succinic acid was developed in this study. As a first step for this task, the adsorption isotherm and mass-transfer parameters of each organic acid on the qualified adsorbent (Amberchrom-CG300C) were determined through a series of multiple frontal experiments. The determined parameters were then used in optimizing the SMB process for the considered separation. During such optimization, an additional investigation for selecting a proper SMB port configuration, which could be more advantageous for attaining better process performances, was carried out between two possible configurations. It was found that if the properly selected port configuration was adopted in the SMB of interest, the throughput and the formic-acid product concentration could be increased by 82% and 181%, respectively. Finally, the optimized SMB process based on the properly selected port configuration was tested experimentally using a self-assembled SMB unit with three zones. The SMB experimental results and the relevant computer simulation verified that the developed process in this study was successful in continuous recovery of formic acid from a ternary organic-acid mixture of interest with high throughput, high purity, high yield, and high product concentration. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nasrabadi, M. N., E-mail: mnnasrabadi@ast.ui.ac.ir; Sepiani, M.
2015-03-30
Production of medical radioisotopes is one of the most important tasks in the field of nuclear technology. These radioactive isotopes are mainly produced through a variety of nuclear processes. In this research, excitation functions and nuclear reaction mechanisms are studied for simulation of the production of these radioisotopes in the TALYS, EMPIRE and LISE++ reaction codes; then parameters and different models of nuclear level density, as one of the most important components in statistical reaction models, are adjusted for optimum production of the desired radioactive yields.
Emergency Medicine Resident Physicians’ Perceptions of Electronic Documentation and Workflow
Neri, P.M.; Redden, L.; Poole, S.; Pozner, C.N.; Horsky, J.; Raja, A.S.; Poon, E.; Schiff, G.
2015-01-01
Objective: To understand emergency department (ED) physicians’ use of electronic documentation in order to identify usability and workflow considerations for the design of future ED information system (EDIS) physician documentation modules. Methods: We invited emergency medicine resident physicians to participate in a mixed methods study using task analysis and qualitative interviews. Participants completed a simulated, standardized patient encounter in a medical simulation center while documenting in the test environment of a currently used EDIS. We recorded the time on task, type and sequence of tasks performed by the participants (including tasks performed in parallel). We then conducted semi-structured interviews with each participant. We analyzed these qualitative data using the constant comparative method to generate themes. Results: Eight resident physicians participated. The simulation session averaged 17 minutes and participants spent 11 minutes on average on tasks that included electronic documentation. Participants performed tasks in parallel, such as history taking and electronic documentation. Five of the 8 participants performed a similar workflow sequence during the first part of the session while the remaining three used different workflows. Three themes characterize electronic documentation: (1) physicians report that location and timing of documentation varies based on patient acuity and workload, (2) physicians report a need for features that support improved efficiency; and (3) physicians like viewing available patient data but struggle with integration of the EDIS with other information sources. Conclusion: We confirmed that physicians spend much of their time on documentation (65%) during an ED patient visit. Further, we found that resident physicians did not all use the same workflow and approach even when presented with an identical standardized patient scenario. Future EHR design should consider these varied workflows while trying to optimize efficiency, such as improving integration of clinical data. These findings should be tested quantitatively in a larger, representative study. PMID:25848411
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built. The processor with the minimum cumulative earliest finish time (EFT) was selected as the target of the first task assignment. The task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm has strong optimization ability, is simple and feasible, converges fast, and can be applied to task scheduling optimization for other heterogeneous and distributed environments.
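The EFT-based mapping step that such a priority list feeds can be sketched with a HEFT-like greedy pass (this is not the CQPSO search itself; the DAG and costs are invented):

```python
# DAG dependencies and per-core execution costs for two heterogeneous cores.
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
cost = {"A": [2, 3], "B": [3, 1], "C": [2, 2], "D": [4, 2]}
core_free = [0.0, 0.0]
finish = {}

for task in ["A", "B", "C", "D"]:            # topological order of the DAG
    ready = max((finish[d] for d in deps[task]), default=0.0)
    # earliest finish time over the cores decides the mapping
    eft = [(max(core_free[c], ready) + cost[task][c], c) for c in (0, 1)]
    t_end, c = min(eft)
    core_free[c], finish[task] = t_end, t_end
    print(f"{task} -> core {c}, finishes at {t_end}")
```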
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full factorial in simulator experiment of a single axis, multiloop, compensatory pitch tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed loop model of a real time digital simulation facility. The results of the experiment encompassing various simulation fidelity factors, such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than were predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
The economic production of alcohol fuels from coal-derived synthesis gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kugler, E.L.; Dadyburjor, D.B.; Yang, R.Y.K.
1995-12-31
The objectives of this project are to: (1) discover, study and evaluate novel heterogeneous catalytic systems for the production of oxygenated fuel enhancers from synthesis gas; specifically, alternative methods of preparing catalysts are to be investigated, and novel catalysts, including sulfur-tolerant ones, are to be pursued (Task 1); (2) explore, analytically and on the bench scale, novel reactor and process concepts for use in converting syngas to liquid fuel products (Task 1); (3) simulate by computer the most energy efficient and economically efficient process for converting coal to energy, with primary focus on converting syngas to fuel alcohols (Task 2); (4) develop on the bench scale the best holistic combination of chemistry, catalyst, reactor and total process configuration integrated with the overall coal conversion process to achieve economic optimization for the conversion of syngas to liquid products within the framework of achieving the maximum cost effective transformation of coal to energy equivalents (Tasks 1 and 2); and (5) evaluate the combustion, emission and performance characteristics of fuel alcohols and blends of alcohols with petroleum-based fuels (Task 2).
Economical Unsteady High-Fidelity Aerodynamics for Structural Optimization with a Flutter Constraint
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Stanford, Bret K.
2017-01-01
Structural optimization with a flutter constraint for a vehicle designed to fly in the transonic regime is a particularly difficult task. In this speed range, the flutter boundary is very sensitive to aerodynamic nonlinearities, typically requiring high-fidelity Navier-Stokes simulations. However, the repeated application of unsteady computational fluid dynamics to guide an aeroelastic optimization process is very computationally expensive. This expense has motivated the development of methods that incorporate aspects of the aerodynamic nonlinearity, classical tools of flutter analysis, and more recent methods of optimization. While it is possible to use doublet lattice method aerodynamics, this paper focuses on the use of an unsteady high-fidelity aerodynamic reduced order model combined with successive transformations that allows for an economical way of utilizing high-fidelity aerodynamics in the optimization process. This approach is applied to the common research model wing structural design. As might be expected, the high-fidelity aerodynamics produces a heavier wing than that optimized with doublet lattice aerodynamics. It is found that the optimized lower skin of the wing using high-fidelity aerodynamics differs significantly from that using doublet lattice aerodynamics.
Stavrinos, Despina; Heaton, Karen; Welburn, Sharon C; McManus, Benjamin; Griffin, Russell; Fine, Philip R
2016-08-01
Reducing distracters detrimental to commercial truck driving is a critical component of improving the safety performance of commercial drivers, and makes the highways safer for all drivers. This study used a driving simulator to examine effects of cell phone, texting, and email distractions as well as self-reported driver optimism bias on the driving performance of commercial truck drivers. Results revealed that more visually demanding tasks were related to poorer driving performance. However, the cell phone task resulted in fewer off-the-road eye glances. Drivers reporting being "very skilled" displayed poorer driving performance than those reporting being "skilled." Onboard communication devices provide a practical, yet visually and manually demanding, solution for connecting drivers and dispatchers. Trucking company policies should minimize interaction between dispatchers and drivers when the truck is in motion. Training facilities should integrate driving simulators into the instruction of commercial drivers, targeting over-confident drivers. © 2016 The Author(s).
Hashem, Joseph; Schneider, Erich; Pryor, Mitch; ...
2017-01-01
Our paper describes how to use MCNP to evaluate the rate of material damage in a robot incurred by exposure to a neutron flux. The example used in this work is that of a robotic manipulator installed in a high intensity, fast, and collimated neutron radiography beam port at the University of Texas at Austin's TRIGA Mark II research reactor. Our effort includes taking robotic technologies and using them to automate non-destructive imaging tasks in nuclear facilities where the robotic manipulator acts as the motion control system for neutron imaging tasks. Simulated radiation tests are used to analyze the radiation damage to the robot. Once the neutron damage is calculated using MCNP, several possible shielding materials are analyzed to determine the most effective way of minimizing the neutron damage. Furthermore, neutron damage predictions provide users the means to simulate geometrical and material changes, thus saving time, money, and energy in determining the optimal setup for a robotic system installed in a radiation environment.
Seol, Ye-In; Kim, Young-Kuk
2014-01-01
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model, but in this paper we show that dynamic priority scheduling results in power-aware scheduling that can also be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10-80% over the existing algorithms. PMID:25121126
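For readers who want the flavor of slack-based DVS under EDF, the following is a minimal, deliberately naive sketch (not the paper's O(n) algorithm): it runs jobs non-preemptively in deadline order and slows each one to the lowest speed that would still meet its deadline in isolation. The function name, the single-job slack rule, and the speed floor are illustrative assumptions.

```python
import heapq

def edf_dvs(jobs, f_min=0.3):
    """jobs: list of (release, deadline, wcet), wcet measured at full speed.
    Non-preemptive EDF on one CPU; before each dispatch the clock speed is
    lowered so the job alone would still meet its deadline (naive rule)."""
    t, i = 0.0, 0
    ready, pending, schedule = [], sorted(jobs), []
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            r, d, c = pending[i]
            heapq.heappush(ready, (d, c))        # earliest deadline first
            i += 1
        if not ready:
            t = pending[i][0]                    # idle until next release
            continue
        d, c = heapq.heappop(ready)
        slack = max(d - t, c)                    # time available before deadline
        f = max(f_min, c / slack)                # slowest speed that still fits
        t += c / f                               # execution stretches at speed f
        schedule.append((t, d, round(f, 2)))
    return schedule                              # (finish, deadline, speed) per job

print(edf_dvs([(0, 10, 3), (0, 12, 2), (4, 20, 4)]))
```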
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant form of dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication as posed in the DARPA Robotics Challenge, or robot operations on spacecraft a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. This framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage to do any number of high level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
Telemanipulator design and optimization software
NASA Astrophysics Data System (ADS)
Cote, Jean; Pelletier, Michel
1995-12-01
For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only has to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the initial position of the teleoperator is usually chosen empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted to move the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment on which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
The effects of experimental pain and induced optimism on working memory task performance.
Boselie, Jantine J L M; Vancleef, Linda M G; Peters, Madelon L
2016-07-01
Pain can interrupt and deteriorate executive task performance. We have previously shown that experimentally induced optimism can diminish the deteriorating effect of cold pressor pain on a subsequent working memory task (i.e., operation span task). In two successive experiments we sought further evidence for the protective role of optimism on pain-induced working memory impairments. We used another working memory task (i.e., 2-back task) that was performed either after or during pain induction. Study 1 employed a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain)×2 (pre-score vs. post-score) mixed factorial design. In half of the participants optimism was induced by the Best Possible Self (BPS) manipulation, which required them to write about and visualize a life in the future where everything turned out for the best. In the control condition, participants wrote about and visualized a typical day in their life (TD). Next, participants completed either the cold pressor task (CPT) or a warm water control task (WWCT). Before (baseline) and after the CPT or WWCT, participants' working memory performance was measured with the 2-back task. The 2-back task measures the ability to monitor and update working memory representations by asking participants to indicate whether the current stimulus corresponds to the stimulus that was presented 2 stimuli ago. Study 2 had a 2 (optimism vs. no-optimism)×2 (pain vs. no-pain) mixed factorial design. After receiving the BPS or control manipulation, participants completed the 2-back task twice: once with painful heat stimulation, and once without any stimulation (counter-balanced order). Continuous heat stimulation was used with temperatures oscillating around 1°C above and 1°C below the individual pain threshold. In study 1, the results did not show an effect of cold pressor pain on subsequent 2-back task performance. Results of study 2 indicated that heat pain impaired concurrent 2-back task performance. However, no evidence was found that optimism protected against this pain-induced performance deterioration. Experimentally induced pain impairs concurrent but not subsequent working memory task performance. Manipulated optimism did not counteract pain-induced deterioration of 2-back performance. It is important to explore factors that may diminish the negative impact of pain on the ability to function in daily life, as pain itself often cannot be remediated. We are planning to conduct future studies that should shed further light on the conditions, contexts and executive operations for which optimism can act as a protective factor. Copyright © 2016 Scandinavian Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Open-Loop Pitch Table Optimization for the Maximum Dynamic Pressure Orion Abort Flight Test
NASA Technical Reports Server (NTRS)
Stillwater, Ryan A.
2009-01-01
NASA has scheduled the retirement of the space shuttle orbiter fleet at the end of 2010. The Constellation program was created to develop the next generation of human spaceflight vehicles and launch vehicles, known as Orion and Ares, respectively. The Orion vehicle is a return to the capsule configuration that was used in the Mercury, Gemini, and Apollo programs. This configuration allows for the inclusion of an abort system that safely removes the capsule from the booster in the event of a failure on launch. The Flight Test Office at NASA's Dryden Flight Research Center has been tasked with the flight testing of the abort system to ensure proper functionality and safety. The abort system will be tested in various scenarios to approximate the conditions encountered during an actual Orion launch. Every abort will have a closed-loop controller with an open-loop backup that will direct the vehicle during the abort. In order to provide the best fit for the desired total angle of attack profile with the open-loop pitch table, the table is tuned using simulated abort trajectories. A pitch table optimization program was created to tune the trajectories in an automated fashion. The program development was divided into three phases. Phase 1 used only the simulated nominal run to tune the open-loop pitch table. Phase 2 used the simulated nominal and three simulated off-nominal runs to tune the open-loop pitch table. Phase 3 used the simulated nominal and sixteen simulated off-nominal runs to tune the open-loop pitch table. The optimization program allowed for a quicker and more accurate fit to the desired profile as well as allowing for expanded resolution of the pitch table.
Fault tolerance of artificial neural networks with applications in critical systems
NASA Technical Reports Server (NTRS)
Protzel, Peter W.; Palumbo, Daniel L.; Arras, Michael K.
1992-01-01
This paper investigates the fault tolerance characteristics of time continuous recurrent artificial neural networks (ANN) that can be used to solve optimization problems. The principles of operation and performance of these networks are first illustrated by using well-known model problems like the traveling salesman problem and the assignment problem. The ANNs are then subjected to 13 simultaneous 'stuck at 1' or 'stuck at 0' faults for network sizes of up to 900 'neurons'. The effects of these faults are demonstrated and the cause for the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN under the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault tolerant network are discussed.
Automation effects in a multiloop manual control system
NASA Technical Reports Server (NTRS)
Hess, R. A.; Mcnally, B. D.
1986-01-01
An experimental and analytical study was undertaken to investigate human interaction with a simple multiloop manual control system in which the human's activity was systematically varied by changing the level of automation. The system simulated was the longitudinal dynamics of a hovering helicopter. The automation systems stabilized vehicle responses from attitude to velocity to position, and also provided for display automation in the form of a flight director. The control-loop structure resulting from the task definition can be considered a simple stereotype of a hierarchical control system. The experimental study was complemented by an analytical modeling effort which utilized simple crossover models of the human operator. It was shown that such models can be extended to the description of multiloop tasks involving preview and precognitive human operator behavior. The existence of time optimal manual control behavior was established for these tasks and the role which internal models may play in establishing human-machine performance was discussed.
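For reference, the crossover model invoked here has the classical McRuer form: near the crossover frequency, the human operator adapts so that the combined operator/vehicle open-loop dynamics reduce to an integrator with an effective time delay:

```latex
% McRuer crossover model: Y_p = human operator, Y_c = controlled element,
% \omega_c = crossover frequency, \tau_e = effective time delay
Y_p(s)\,Y_c(s) \approx \frac{\omega_c \, e^{-\tau_e s}}{s}
```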
Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren
2017-11-01
Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of 15 1-min pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
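To make the model architecture concrete, below is a minimal sketch of a PCT-style tracking loop with two per-individual parameters (loop gain k and internal reference r), fitted by least squares; the specific loop equations, noise levels, and the use of scipy's optimizer are assumptions for illustration, not the authors' exact model or fitting algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_pct(params, target, dt=0.01):
    k, r = params                      # loop gain and internal reference value
    cursor = np.zeros_like(target)
    for i in range(1, len(target)):
        perception = target[i - 1] - cursor[i - 1]   # perceived separation
        error = r - perception                        # reference minus perception
        cursor[i] = cursor[i - 1] - k * error * dt    # output opposes the error
    return cursor

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)
target = np.cumsum(rng.normal(0, 0.05, t.size))       # pseudorandom 1-D target
observed = simulate_pct((8.0, 0.0), target) + rng.normal(0, 0.02, t.size)

fit = least_squares(lambda p: simulate_pct(p, target) - observed, x0=[1.0, 0.1])
print(fit.x)   # fitted (gain, reference) for this simulated individual
```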
Data-oriented scheduling for PROOF
NASA Astrophysics Data System (ADS)
Xu, Neng; Guan, Wen; Wu, Sau Lan; Ganis, Gerardo
2011-12-01
The Parallel ROOT Facility - PROOF - is a distributed analysis system optimized for I/O intensive analysis tasks of HEP data. With LHC entering the analysis phase, PROOF has become a natural ingredient for computing farms at Tier3 level. These analysis facilities will typically be used by a few tens of users, and can also be federated into a sort of analysis cloud corresponding to the Virtual Organization of the experiment. Proper scheduling is required to guarantee fair resource usage, to enforce priority policies and to optimize the throughput. In this paper we discuss an advanced priority system that we are developing for PROOF. The system has been designed to automatically adapt to unknown lengths of the tasks, to take into account the data location and availability (including distribution across geographically separated sites), and the {group, user} default priorities. In this system, every element - user, group, dataset, job slot and storage - gets its priority, and those priorities are dynamically linked with each other. In order to tune the interplay between the various components, we have designed and started implementing a simulation application that can model various types and sizes of PROOF clusters. In this application a monitoring package records all such priority changes so that we can easily understand and tune the performance. We will discuss the status of our simulation and show examples of the results we are expecting from it.
High-speed prediction of crystal structures for organic molecules
NASA Astrophysics Data System (ADS)
Obata, Shigeaki; Goto, Hitoshi
2015-02-01
We developed a master-worker type parallel algorithm for allocating tasks of crystal structure optimizations to distributed compute nodes, in order to improve the performance of simulations for crystal structure predictions. The performance experiments were demonstrated on the TUT-ADSIM supercomputer system (HITACHI HA8000-tc/HT210). The experimental results show that our parallel algorithm could achieve speed-ups of 214 and 179 times using 256 processor cores on crystal structure optimizations in predictions of crystal structures for 3-aza-bicyclo(3.3.1)nonane-2,4-dione and 2-diazo-3,5-cyclohexadiene-1-one, respectively. We expect that this parallel algorithm can reduce the computational cost of any crystal structure prediction.
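A master-worker allocation of independent structure optimizations can be sketched with a process pool; `optimize_structure` below is a hypothetical stand-in for the real energy minimizer, and the unordered result collection mirrors the load-balancing motivation (tasks of widely varying run time).

```python
from multiprocessing import Pool

def optimize_structure(seed):
    # placeholder: a real worker would relax one trial crystal packing
    import random
    random.seed(seed)
    return seed, random.random()       # (task id, toy "lattice energy")

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        # imap_unordered hands back results as workers finish, which keeps
        # all cores busy when individual optimizations vary widely in cost
        results = sorted(pool.imap_unordered(optimize_structure, range(256)))
    best = min(results, key=lambda r: r[1])
    print("lowest-energy candidate:", best)
```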
Brain-computer interface analysis of a dynamic visuo-motor task.
Logar, Vito; Belič, Aleš
2011-01-01
The area of brain-computer interfaces (BCIs) represents one of the more interesting fields in neurophysiological research, since it investigates the development of the machines that perform different transformations of the brain's "thoughts" to certain pre-defined actions. Experimental studies have reported some successful implementations of BCIs; however, much of the field still remains unexplored. According to some recent reports the phase coding of informational content is an important mechanism in the brain's function and cognition, and has the potential to explain various mechanisms of the brain's data transfer, but it has yet to be scrutinized in the context of a brain-computer interface. Therefore, if the mechanism of phase coding is plausible, one should be able to extract the phase-coded content, carried by brain signals, using appropriate signal-processing methods. In our previous studies we have shown that by using a phase-demodulation-based signal-processing approach it is possible to decode some relevant information on the current motor action in the brain from electroencephalographic (EEG) data. In this paper the authors would like to present a continuation of their previous work on the brain-information-decoding analysis of visuo-motor (VM) tasks. The present study shows that EEG data measured during more complex, dynamic visuo-motor (dVM) tasks carry enough information about the currently performed motor action to be successfully extracted by using the appropriate signal-processing and identification methods. The aim of this paper is therefore to present a mathematical model, which by means of the EEG measurements as its inputs predicts the course of the wrist movements as applied by each subject during the task in simulated or real time (BCI analysis). However, several modifications to the existing methodology are needed to achieve optimal decoding results and a real-time, data-processing ability. The information extracted from the EEG could, therefore, be further used for the development of a closed-loop, non-invasive, brain-computer interface. In this study, two types of measurements were performed: the electroencephalographic (EEG) signals and the wrist movements were measured simultaneously during the subject's performance of a dynamic visuo-motor task. Wrist-movement predictions were computed by using the EEG data-processing methodology of double brain-rhythm filtering, double phase demodulation and double principal component analyses (PCA), each with a separate set of parameters. For the movement-prediction model a fuzzy inference system was used. The results have shown that the EEG signals measured during the dVM tasks carry enough information about the subjects' wrist movements for them to be successfully decoded using the presented methodology. Reasonably high values of the correlation coefficients suggest that the validation of the proposed approach is satisfactory. Moreover, since the causality of the rhythm filtering and the PCA transformation has been achieved, we have shown that these methods can also be used in a real-time, brain-computer interface. The study revealed that using non-causal, optimized methods yields better prediction results in comparison with the causal, non-optimized methodology; however, taking into account that the causality of these methods allows real-time processing, the minor decrease in prediction quality is acceptable.
The study suggests that the methodology that was proposed in our previous studies is also valid for identifying the EEG-coded content during dVM tasks, albeit with various modifications, which allow better prediction results and real-time data processing. The results have shown that wrist movements can be predicted in simulated or real time; however, the results of the non-causal, optimized methodology (simulated) are slightly better. Nevertheless, the study has revealed that these methods should be suitable for use in the development of a non-invasive, brain-computer interface. Copyright © 2010 Elsevier B.V. All rights reserved.
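As an illustration of the phase-demodulation step described above, the sketch below band-pass filters a synthetic signal around one brain rhythm and extracts its instantaneous phase via the analytic signal; the band edges, carrier frequency, and zero-phase (hence non-causal) filtering are assumptions for illustration, and the article's double-demodulation and causal variants are not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
# toy "EEG": a 10 Hz rhythm whose phase is slowly modulated, plus noise
eeg = np.sin(2 * np.pi * 10 * t + 0.5 * np.sin(2 * np.pi * 0.3 * t)) \
      + 0.3 * np.random.default_rng(1).normal(size=t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")  # alpha band
rhythm = filtfilt(b, a, eeg)                 # zero-phase (non-causal) filtering
phase = np.unwrap(np.angle(hilbert(rhythm))) # instantaneous phase
demod = phase - 2 * np.pi * 10 * t           # remove the 10 Hz carrier
print(demod[:5])                             # slow phase-coded component
```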
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
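A minimal channelized-Hotelling sketch may help fix ideas: given channelized samples from signal-absent and signal-present classes, the Hotelling template is the intra-class scatter inverse applied to the class mean difference. The random channel matrix and toy ensembles below are stand-ins for the paper's KLFDA embedding and imaging data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ch, n = 64 * 64, 10, 200
U = rng.normal(size=(n_pix, n_ch))             # stand-in channels (e.g. KLFDA)

absent = rng.normal(0, 1, (n, n_pix))          # toy signal-absent images
present = absent + 0.2                         # weak uniform "signal" added

va, vp = absent @ U, present @ U               # channelized data
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))        # intra-class channel scatter
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template
t_a, t_p = va @ w, vp @ w                      # decision statistics
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
print("observer SNR:", snr)
```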
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
Optimal Micropatterns in 2D Transport Networks and Their Relation to Image Inpainting
NASA Astrophysics Data System (ADS)
Brancolini, Alessio; Rossmanith, Carolin; Wirth, Benedikt
2018-04-01
We consider two different variational models of transport networks: the so-called branched transport problem and the urban planning problem. Based on a novel relation to Mumford-Shah image inpainting and techniques developed in that field, we show for a two-dimensional situation that both highly non-convex network optimization tasks can be transformed into a convex variational problem, which may be very useful from analytical and numerical perspectives. As applications of the convex formulation, we use it to perform numerical simulations (to our knowledge this is the first numerical treatment of urban planning), and we prove a lower bound for the network cost that matches a known upper bound (in terms of how the cost scales in the model parameters) which helps better understand optimal networks and their minimal costs.
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs the composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions while the update of propensities is skipped and performed only when necessary. It therefore provides a favorable scaling for the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
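The composition-rejection selection over propensity bounds can be sketched as follows: reactions are grouped by power-of-two bound, a group is chosen by composition, and a uniformly chosen member is accepted against the group ceiling, so true propensities are touched only inside the rejection test. Grouping by powers of two and the group weights are illustrative choices, not necessarily the paper's exact bookkeeping.

```python
import math, random

def select_reaction(bounds, actual):
    """Return index i with probability proportional to actual(i), given
    bounds[i] >= actual(i) at all times."""
    groups = {}
    for i, b in enumerate(bounds):              # group by power-of-two bound
        groups.setdefault(math.ceil(math.log2(b)), []).append(i)
    # weight len(group) * 2^g makes uniform-pick + ceiling-rejection exact
    weights = {g: len(m) * 2.0 ** g for g, m in groups.items()}
    total = sum(weights.values())
    while True:
        r = random.random() * total             # composition step over groups
        for g, w in weights.items():
            if r < w:
                break
            r -= w
        i = random.choice(groups[g])            # uniform candidate in group
        if random.random() * 2.0 ** g <= actual(i):   # rejection vs ceiling
            return i

print(select_reaction([0.9, 1.5, 3.0, 2.5], lambda i: [0.8, 1.2, 2.9, 1.0][i]))
```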
Constructive Engineering of Simulations
NASA Technical Reports Server (NTRS)
Snyder, Daniel R.; Barsness, Brendan
2011-01-01
Joint experimentation that investigates sensor optimization, re-tasking and management has far reaching implications for Department of Defense, Interagency and multinational partners. An adaptation of traditional human in the loop (HITL) Modeling and Simulation (M&S) was one approach used to generate the findings necessary to derive and support these implications. Here an entity-based simulation was re-engineered to run on USJFCOM's High Performance Computer (HPC). The HPC was used to support the vast number of constructive runs necessary to produce statistically significant data in a timely manner. Then, from the resulting sensitivity analysis, event designers blended the necessary visualization and decision making components into a synthetic environment for the HITL simulation trials. These trials focused on areas where human decision making had the greatest impact on the sensor investigations. Thus, this paper discusses how re-engineering existing M&S for constructive applications can positively influence the design of an associated HITL experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time is dependent on the machine to which it is assigned and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of inter-dependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory and needs fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
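The randomized task ordering is essentially Kahn's topological sort with random tie-breaking, as in this sketch; the graph encoding and the example DAG are illustrative, and the heuristic order-to-schedule mapping is only noted, not implemented.

```python
import random

def random_topological_order(succ, indeg):
    """succ: task -> list of successors; indeg: task -> in-degree.
    Returns one precedence-respecting order, chosen at random."""
    indeg = dict(indeg)                              # keep caller's copy intact
    ready = [v for v, d in indeg.items() if d == 0]
    order = []
    while ready:
        v = ready.pop(random.randrange(len(ready)))  # random eligible task
        order.append(v)
        for w in succ.get(v, []):
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order

succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
indeg = {"a": 0, "b": 1, "c": 1, "d": 2}
print(random_topological_order(succ, indeg))   # e.g. ['a', 'c', 'b', 'd']
```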
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
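A toy rendering of the claimed control law, with hypothetical signal names: a physical attribute of the task environment shrinks as the trainee's measured signal drifts from the target signal. The heart-rate example, gain, and clipping are illustrative assumptions, not the patent's embodiment.

```python
def task_stability(target_hr, measured_hr, base=1.0, k=0.02):
    """Hypothetical physical attribute (e.g. a simulated vehicle's
    stability) driven down as the physiological difference grows."""
    diff = abs(target_hr - measured_hr)     # physiological difference value
    return max(0.0, base - k * diff)        # attribute varies inversely with diff

for hr in (60, 70, 85):
    print(hr, task_stability(60, hr))
```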
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns, which consist of periodic memory cell features and edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational lithography such as source mask optimization (SMO) to find the optimal off-axis illumination and optical proximity correction (OPC) combined with model based SRAF placement are applied to print patterns on target. For 20nm memory cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, so it allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of the two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to the non-periodic periphery, so it combines the most critical pitch and the highest susceptibility to defocus. The above challenge turns layout correction into a complex optimization task demanding a layout optimization that finds a solution with optimal process stability, taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD-at-bottom characterization by scanning electron microscope (SEM) measurements. The high resist impact and difficult model data acquisition demand a simulation model that is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need of a rigorous mask optimization process for DRAM contact cell layout, yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step-by-step process from analytical illumination source derivation, through an NTD and application tailored model calibration, to layout optimization such as OPC and SRAF placement. Finally, the work has been verified with simulation and experimental results on wafer.
Part-Task Simulation of Synthetic and Enhanced Vision Concepts for Lunar Landing
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Williams, Steven P.; Kramer, Lynda J.; Barnes, James R.
2010-01-01
During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were a major problem. Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eye glasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.
Part-task simulation of synthetic and enhanced vision concepts for lunar landing
NASA Astrophysics Data System (ADS)
Arthur, Jarvis J., III; Bailey, Randall E.; Jackson, E. Bruce; Barnes, James R.; Williams, Steven P.; Kramer, Lynda J.
2010-04-01
During Apollo, the constraints placed by the design of the Lunar Module (LM) window for crew visibility and landing trajectory were "a major problem." Lunar landing trajectories were tailored to provide crew visibility using nearly 70 degrees look-down angle from the canted LM windows. Apollo landings were scheduled only at specific times and locations to provide optimal sunlight on the landing site. The complications of trajectory design and crew visibility are still a problem today. Practical vehicle designs for lunar lander missions using optimal or near-optimal fuel trajectories render the natural vision of the crew from windows inadequate for the approach and landing task. Further, the sun angles for the desirable landing areas in the lunar polar regions create visually powerful, season-long shadow effects. Fortunately, Synthetic and Enhanced Vision (S/EV) technologies, conceived and developed in the aviation domain, may provide solutions to this visibility problem and enable additional benefits for safer, more efficient lunar operations. Piloted simulation evaluations have been conducted to assess the handling qualities of the various lunar landing concepts, including the influence of cockpit displays and the informational data and formats. Evaluation pilots flew various landing scenarios with S/EV displays. For some of the evaluation trials, an eye glasses-mounted, monochrome monocular display, coupled with head tracking, was worn. The head-worn display scene consisted of S/EV fusion concepts. The results of this experiment showed that a head-worn system did not increase the pilot's workload when compared to using just the head-down displays. As expected, the head-worn system did not provide an increase in performance measures. Some pilots commented that the head-worn system provided greater situational awareness compared to just head-down displays.
NASA Astrophysics Data System (ADS)
Ryckaert, Jana; Correia, António; Smet, Kevin; Tessier, Mickael D.; Dupont, Dorian; Hens, Zeger; Hanselaer, Peter; Meuret, Youri
2017-09-01
Combining traditional phosphors with a broad emission spectrum and non-scattering quantum dots with a narrow emission spectrum can have multiple advantages for white LEDs. It makes it possible to reduce the amount of scattering in the wavelength conversion element, increasing the efficiency of the complete system. Furthermore, the unique possibility to tune the emission spectrum of quantum dots makes it possible to optimize the resulting LED spectrum in order to achieve optimal color rendering properties for the light source. However, finding the optimal quantum dot properties to achieve optimal efficacy and color rendering is a non-trivial task. Instead of simply summing up the emission spectra of the blue LED, phosphor and quantum dots, we propose a complete simulation tool that allows an accurate analysis of the final performance for a range of different quantum dot synthesis parameters. The recycling of the reflected light from the wavelength conversion element by the LED package is taken into account, as well as the re-absorption and the associated red-shift. This simulation tool is used to vary two synthesis parameters (core size and cadmium fraction) of InP/CdxZn1-xSe quantum dots. We find general trends for the ideal quantum dot that should be combined with a specific YAG:Ce broad band phosphor to obtain optimal efficiency and color rendering for a white LED with a specific pumping LED and recycling cavity, with a desired CCT of 3500K.
An investigation of bleed configurations and their effect on shock wave/boundary layer interactions
NASA Technical Reports Server (NTRS)
Hamed, Awatef
1995-01-01
The design of high efficiency supersonic inlets is a complex task involving the optimization of a number of performance parameters such as pressure recovery, spillage, drag, and exit distortion profile, over the flight Mach number range. Computational techniques must be capable of accurately simulating the physics of shock/boundary layer interactions, secondary corner flows, flow separation, and bleed if they are to be useful in the design. In particular, bleed and flow separation play an important role in inlet unstart and the associated pressure oscillations. Numerical simulations were conducted to investigate some of the basic physical phenomena associated with bleed in oblique shock wave boundary layer interactions that affect the inlet performance.
NASA Astrophysics Data System (ADS)
Cao, Haotian; Song, Xiaolin; Zhao, Song; Bao, Shan; Huang, Zhi
2017-08-01
Automated driving has received broad attention from academia and industry, since it can greatly reduce the severity of potential traffic accidents and improve automobile safety and comfort. This paper presents an optimal model-based trajectory following architecture for a highly automated vehicle in driving tasks such as automated guidance or lane keeping, which includes a velocity-planning module, a steering controller and a velocity-tracking controller. The velocity-planning module, which considers time efficiency and passenger comfort simultaneously, generates a smooth velocity profile. The robust sliding mode control (SMC) steering controller with an adaptive preview time strategy not only tracks the target path well, but, owing to the fuzzy-adaptive preview time mechanism introduced, also avoids large lateral accelerations during path tracking. In addition, an SMC controller with an input-output linearisation method for velocity tracking is built and validated. Simulation results, compared with driver-in-the-loop simulations performed by an experienced driver and a novice driver, show that this trajectory following architecture is effective and feasible for a highly automated driving vehicle. The simulation results demonstrate that the present trajectory following architecture can plan a satisfying longitudinal speed profile and track the target path well and safely when dealing with different road geometry structures, ensuring good time efficiency and driving comfort simultaneously.
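The core of a sliding-mode steering law of the kind described can be sketched in a few lines: a sliding surface combines lateral error and its rate, and a smoothed switching term drives it to zero. The gains and the tanh boundary layer are illustrative assumptions, not the authors' tuned design.

```python
import numpy as np

def smc_steering(e_lat, e_lat_dot, lam=1.5, k=0.8, phi=0.1):
    """Steering correction from lateral error e_lat and its rate."""
    s = e_lat_dot + lam * e_lat            # sliding surface
    return -k * np.tanh(s / phi)           # smoothed switching law (anti-chatter)

# one simulated correction step
print(smc_steering(e_lat=0.5, e_lat_dot=0.1))
```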
NASA Astrophysics Data System (ADS)
Utama, D. N.; Triana, Y. S.; Iqbal, M. M.; Iksal, M.; Fikri, I.; Dharmawan, T.
2018-03-01
A mosque, for Muslims, is not only a place for daily worship but also a center of culture. It is an important and valuable building to be well managed. For the responsible department or institution (such as the Religion or Planning Department in Indonesia), managing a large number of mosques in practice is not a simple task. The challenge relates to the number and characteristics of the data involved. Specifically, for renovating and rehabilitating damaged mosques, deciding which damaged mosque should have first priority for renovation and rehabilitation is problematic. Through two types of optimization method, simulated-annealing and hill-climbing, a decision support model for mosque renovation and rehabilitation was systematically constructed. The fuzzy-logic method was also used to establish the priority of eleven selected parameters. The constructed model is able to simulate an efficiency comparison between the two optimization methods used and to suggest the most objective decision from among 196 generated alternatives.
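The two searches compared in the paper differ only in their acceptance rule, as this toy sketch shows; the objective is a stand-in for the model's fuzzy-weighted eleven-parameter score, and all constants are illustrative.

```python
import math, random

def score(x):                                   # toy multimodal objective
    return -sum((xi - 3) ** 2 for xi in x) + 2 * math.sin(sum(x))

def neighbor(x):
    y = list(x)
    y[random.randrange(len(y))] += random.uniform(-1, 1)
    return y

def hill_climb(x, iters=2000):
    for _ in range(iters):
        y = neighbor(x)
        if score(y) > score(x):                 # accepts improvements only
            x = y
    return x

def anneal(x, iters=2000, t0=2.0):
    for k in range(iters):
        y, t = neighbor(x), t0 * (1 - k / iters) + 1e-9
        if score(y) > score(x) or random.random() < math.exp((score(y) - score(x)) / t):
            x = y                               # sometimes accepts worse moves
    return x

x0 = [0.0] * 3
print("hill climbing:", score(hill_climb(x0)))
print("annealing:   ", score(anneal(x0)))
```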
Optimizing Monitoring Designs under Alternative Objectives
Gastelum, Jason A.; Porter, Ellen A.; ...
2014-12-31
This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
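The contrast between the two objective functions reduces to two different reductions over a realizations-by-designs matrix of detection times, as in this toy sketch (numbers fabricated purely for illustration): the risk-neutral and conservative objectives can prefer different designs.

```python
import numpy as np

# det_time[r, d]: simulated time to first detection for realization r
# under candidate monitoring design d (fabricated values)
det_time = np.array([[10., 22., 30.],    # realization 1
                     [10., 22., 15.],    # realization 2
                     [40., 22., 25.]])   # realization 3

expected = det_time.mean(axis=0)          # objective 1: expected time
worst    = det_time.max(axis=0)           # objective 2: max over realizations

print("min-expected design:", expected.argmin())   # risk-neutral choice (0)
print("min-max design:     ", worst.argmin())      # conservative choice (1)
```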
The use of vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.
1989-01-01
Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessments of motion fidelity, by means of a rating scale designed for this purpose, were sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models are used to evaluate the motion fidelity of transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective rating of simulator fidelity were relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.
Multi-A(ge)nt Graph Patrolling and Partitioning
NASA Astrophysics Data System (ADS)
Elor, Y.; Bruckstein, A. M.
2012-12-01
We introduce a novel multi agent patrolling algorithm inspired by the behavior of gas filled balloons. Very low capability ant-like agents are considered with the task of patrolling an unknown area modeled as a graph. While executing the proposed algorithm, the agents dynamically partition the graph between them using simple local interactions, every agent assuming responsibility for patrolling its subgraph. Balanced graph partition is an emergent behavior due to the local interactions between the agents in the swarm. Extensive simulations on various graphs (environments) showed that the average time to reach a balanced partition is linear with the graph size. The simulations yielded a convincing argument for conjecturing that if the graph being patrolled contains a balanced partition, the agents will find it. However, we could not prove this. Nevertheless, we have proved that if a balanced partition is reached, the maximum time lag between two successive visits to any vertex using the proposed strategy is at most twice the optimal, so the patrol quality is at least half the optimal. In the case of weighted graphs the patrol quality is at least (1/2)(l_min/l_max) of the optimal, where l_max (l_min) is the length of the longest (shortest) edge in the graph.
Research on optimal path planning algorithm of task-oriented optical remote sensing satellites
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Xu, Shengli; Liu, Fengjing; Yuan, Jingpeng
2015-08-01
The GEO task-oriented optical remote sensing satellite is very suitable for long-term continuous monitoring and quick-access imaging. With the development of high-resolution optical payload technology and satellite attitude control technology, GEO optical remote sensing satellites will become an important development trend for aerospace remote sensing satellites in the near future. In this paper, we focus on the plane-array staring imaging characteristics of a GEO optical remote sensing satellite and its real-time earth observation mission mode, aim to satisfy the needs of the user at the minimum maneuver cost, and put forward an optimal path planning algorithm centered on the transformation from geographic coordinate space to the field-of-view plane, thereby reducing the burden on the control system. In this algorithm, a bounded irregular closed area on the ground is transformed, based on coordinate transformation relations, into the reference plane of the satellite payload's field of view; feasible solutions are then searched with the branch and bound method, non-feasible solutions are cut off from the solution space by a pruning strategy, and suboptimal feasible solutions are trimmed according to the optimization index until a globally optimal feasible solution is obtained. Simulation and visualization software testing results verified the feasibility and effectiveness of the strategy.
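The branch-and-bound-with-pruning core can be sketched on a toy version of the problem: ordering field-of-view targets to minimize total slew cost, pruning any partial order whose cost already meets the best known. The Euclidean slew cost and the textbook bound are assumptions; the paper's optimization index is not reproduced.

```python
import math

targets = [(0, 0), (2, 1), (1, 3), (4, 2)]      # toy field-of-view points

def slew(a, b):
    return math.dist(a, b)                      # stand-in maneuver cost

best = {"cost": float("inf"), "order": None}

def branch(order, remaining, cost):
    if cost >= best["cost"]:                    # prune: bound already beaten
        return
    if not remaining:                           # complete tour: record it
        best["cost"], best["order"] = cost, order
        return
    for nxt in remaining:                       # branch on the next target
        branch(order + [nxt], remaining - {nxt},
               cost + slew(order[-1], nxt))

for start in targets:
    branch([start], set(targets) - {start}, 0.0)
print(best)
```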
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, scientific applications scheduling techniques using the Global League Championship Algorithm (GBLCA) optimization technique are first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
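For context, the MinMin baseline that GBLCA is compared against is simple to state: repeatedly commit the task whose best achievable completion time is smallest. A sketch with a fabricated expected-time-to-compute (ETC) matrix:

```python
def minmin(etc):
    """etc[t][m]: estimated time of task t on machine m."""
    ready = {m: 0.0 for m in range(len(etc[0]))}      # machine ready times
    unscheduled, plan = set(range(len(etc))), []
    while unscheduled:
        # for each task, its best machine and completion time right now
        best = {t: min((ready[m] + etc[t][m], m) for m in ready)
                for t in unscheduled}
        t = min(best, key=lambda u: best[u][0])       # task with min of mins
        ct, m = best[t]
        ready[m] = ct                                  # commit the assignment
        unscheduled.remove(t)
        plan.append((t, m, ct))
    return plan, max(ready.values())                   # schedule and makespan

plan, makespan = minmin([[3, 5], [2, 4], [6, 1], [4, 4]])
print(plan, makespan)
```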
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging task scheduling problem for a high-altitude airship under emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, that is, task ranking, value task detecting, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their solving results correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which can rationally adjust the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
Video game practice optimizes executive control skills in dual-task and task switching situations.
Strobach, Tilo; Frensch, Peter A; Schubert, Torsten
2012-05-01
We examined the relation of action video game practice and the optimization of executive control skills that are needed to coordinate two different tasks. As action video games are similar to real life situations and complex in nature, and include numerous concurrent actions, they may generate an ideal environment for practicing these skills (Green & Bavelier, 2008). For two types of experimental paradigms, dual-task and task switching respectively, we obtained performance advantages for experienced video gamers compared to non-gamers in situations in which two different tasks were processed simultaneously or sequentially. This advantage was absent in single-task situations. These findings indicate optimized executive control skills in video gamers. Similar findings in non-gamers after 15 h of action video game practice when compared to non-gamers with practice on a puzzle game clarified the causal relation between video game practice and the optimization of executive control skills. Copyright © 2012 Elsevier B.V. All rights reserved.
Optimal Modality Selection for Cooperative Human-Robot Task Completion.
Jacob, Mithun George; Wachs, Juan P
2016-12-01
Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels in which robots assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons that maximize multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < 0.05) in all metrics, validating the predictability of the methodology. The methodology is validated in two scenarios (with and without modeling the risk of a human-robot collision) and the differences in the lexicons are analyzed.
Sub-half-micron contact window design with 3D photolithography simulator
NASA Astrophysics Data System (ADS)
Brainerd, Steve K.; Bernard, Douglas A.; Rey, Juan C.; Li, Jiangwei; Granik, Yuri; Boksha, Victor V.
1997-07-01
In state-of-the-art IC design and manufacturing, certain lithography layers have unique requirements: latitudes and tolerances that apply to contacts and polysilicon gates are tight for such critical layers, and industry experts are discussing the most cost-effective ways to use feature-oriented equipment and materials already developed for these layers. Such requirements introduce new dimensions into the traditionally challenging task facing the photolithography engineer, who must consider various combinations of multiple factors to optimize and control the process while facing a rapidly increasing cost of experiments, limited time, and scarce access to equipment. All of these reasons support simulation as an ideal method to satisfy these demands. However, lithography engineers may be easily dissatisfied with a simulation tool upon discovering disagreement between simulation and experimental data; the problem is that several parameters used in photolithography simulation are very process specific. Calibration, i.e., matching experimental and simulation data using a specific set of procedures, allows one to use the simulation tool effectively. We present results of a simulation-based approach to optimize photolithography processes for sub-0.5 micron contact windows. Our approach consists of: (1) 3D simulation to explore different lithographic options, and (2) calibration to a range of process conditions with extensive use of specifically developed optimization techniques. The choice of a 3D simulator is essential because of the 3D nature of the contact window design problem. We use DEPICT 4.1, which performs fast aerial image simulation as presented before. For 3D exposure, the program uses a three-dimensional extension of the high numerical aperture model combined with Fast Fourier Transforms for maximum performance and accuracy. We use the Kim (U.C. Berkeley) model and the fast marching Level Set method, respectively, for the calculation of resist development rates and resist surface movement during development. Calibration efforts were aimed at matching experimental results on contact windows obtained after exposure of a binary mask. Additionally, simulation was applied to conduct quantitative analysis of PSM design capabilities, optical proximity correction, and stepper parameter optimization. Extensive experiments covered exposure (ASML 5500/100D stepper), pre- and post-exposure bake, and development (2.38% TMAH, puddle process) of JSR IX725D2G and TOK iP3500 photoresist films on 200 mm test wafers. 'Aquatar' was used as the top antireflective coating. SEM pictures of developed patterns were analyzed and compared with simulation results for different values of defocus, exposure energy, numerical aperture, and partial coherence.
NASA Astrophysics Data System (ADS)
Grigoras, Costin; Carminati, Federico; Vladimirovna Datskova, Olga; Schreiner, Steffen; Lee, Sehoon; Zhu, Jianlin; Gheata, Mihaela; Gheata, Andrei; Saiz, Pablo; Betev, Latchezar; Furano, Fabrizio; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Bagnasco, Stefano; Peters, Andreas Joachim; Saiz Santos, Maria Dolores
2011-12-01
With the LHC and ALICE entering full operation and production modes, the amount of simulation, RAW data processing, and end-user analysis computational tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data, and methods of analyzing the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of large-scale simulation and raw data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). The LPM automatically submits jobs to the Grid based on triggers and conditions, for example after completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. This framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow management of end-user jobs.
Weiss, Patrice L.; Keshner, Emily A.
2015-01-01
The primary focus of rehabilitation for individuals with loss of upper limb movement as a result of acquired brain injury is the relearning of specific motor skills and daily tasks. This relearning is essential because the loss of upper limb movement often results in a reduced quality of life. Although rehabilitation strives to take advantage of neuroplastic processes during recovery, results of traditional approaches to upper limb rehabilitation have not entirely met this goal. In contrast, enriched training tasks, simulated with a wide range of low- to high-end virtual reality–based simulations, can be used to provide meaningful, repetitive practice together with salient feedback, thereby maximizing neuroplastic processes via motor learning and motor recovery. Such enriched virtual environments have the potential to optimize motor learning by manipulating practice conditions that explicitly engage motivational, cognitive, motor control, and sensory feedback–based learning mechanisms. The objectives of this article are to review motor control and motor learning principles, to discuss how they can be exploited by virtual reality training environments, and to provide evidence concerning current applications for upper limb motor recovery. The limitations of the current technologies with respect to their effectiveness and transfer of learning to daily life tasks also are discussed. PMID:25212522
Task driven optimal leg trajectories in insect-scale legged microrobots
NASA Astrophysics Data System (ADS)
Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert
Origami-inspired layered manufacturing techniques and 3D printing have enabled the development of highly articulated legged robots at the insect scale, including the 1.43 g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks; consequently, gait selection, body morphology, leg trajectory, foot design, etc., have become areas of active research. HAMR has two controlled degrees of freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level-ground running (5-10 BL/s). These tasks demonstrate the ability of a single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with uncontrolled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and the choice of objective function have contributed to the shape of these optimal leg trajectories.
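The abstract does not detail the optimizer, so the following sketch only illustrates the general pattern of trajectory optimization over a parameterized periodic leg path; the Fourier parameterization, cost terms, and scipy optimizer are stand-ins, not HAMR's models:

```python
import numpy as np
from scipy.optimize import minimize

T = np.linspace(0, 1, 100, endpoint=False)   # one normalized gait cycle

def foot_height(c):
    """Periodic foot height from two Fourier harmonics (placeholder parameterization)."""
    a1, b1, a2, b2 = c
    return (a1 * np.sin(2 * np.pi * T) + b1 * np.cos(2 * np.pi * T)
            + a2 * np.sin(4 * np.pi * T) + b2 * np.cos(4 * np.pi * T))

def cost(c):
    z = foot_height(c)
    effort = np.mean(np.gradient(z, T[1] - T[0]) ** 2)   # penalize rapid leg motion
    clearance_gap = max(0.0, 0.5 - z.max())              # want peak clearance >= 0.5
    return effort + 1e3 * clearance_gap ** 2

res = minimize(cost, x0=[0.1, 0.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x, cost(res.x))
```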
Surgical task analysis of simulated laparoscopic cholecystectomy with a navigation system.
Sugino, T; Kawahira, H; Nakamura, R
2014-09-01
Advanced surgical procedures, which have become complex and difficult, increase the burden on surgeons. Quantitative analysis of surgical procedures can improve training, reduce variability, and enable optimization of the procedures themselves. To this end, a surgical task analysis system was developed that uses only surgical navigation information and performs division of the surgical procedure, task progress analysis, and task efficiency analysis. First, the procedure was divided into five stages. Second, the operating time and progress rate were recorded to document task progress during specific stages, including the dissecting task. Third, the speed of surgical instrument motion (mean velocity and acceleration), as well as the size and overlap ratio of the approximate ellipse of the location log data distribution, were computed to estimate task efficiency during each stage. These analysis methods were evaluated experimentally with two groups of surgeons, i.e., skilled and "other" surgeons. The performance metrics and analytical parameters included incidents during the operation, the surgical environment, and the surgeons' skills or habits. Comparison of the groups revealed that skilled surgeons tended to perform the procedure in less time and within smaller regions, and manipulated the surgical instruments more gently. Surgical task analysis developed for quantitative assessment of surgical procedures and surgical performance may provide practical methods and metrics for objective evaluation of surgical expertise.
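A plausible realization of the motion metrics named above (mean velocity, mean acceleration, and the covariance ellipse of the instrument location log), with a synthetic log and an assumed 30 Hz navigation sampling rate:

```python
import numpy as np

def motion_metrics(xy, dt):
    """Mean speed, mean acceleration magnitude, and area of the 95% covariance
    ellipse of a 2-D instrument-tip log (rows = samples)."""
    v = np.gradient(xy, dt, axis=0)
    a = np.gradient(v, dt, axis=0)
    mean_speed = np.linalg.norm(v, axis=1).mean()
    mean_accel = np.linalg.norm(a, axis=1).mean()
    # Covariance ellipse area = pi * chi2 * sqrt(det(cov)); chi2(2 dof, 95%) ~ 5.991
    cov = np.cov(xy.T)
    area = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))
    return mean_speed, mean_accel, area

rng = np.random.default_rng(0)
log = np.cumsum(rng.normal(0, 0.5, size=(1000, 2)), axis=0)  # synthetic position log (mm)
print(motion_metrics(log, dt=1 / 30))                        # 30 Hz sampling assumed
```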
Metaheuristic Algorithms for Convolution Neural Network
Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni
2016-01-01
A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique that aims to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR classification datasets was evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy has also been improved (by up to 7.14 percent). PMID:27375738
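The core simulated-annealing loop, one of the three metaheuristics studied, reduced to a toy weight vector and a placeholder loss standing in for a CNN's validation error:

```python
import math, random

def simulated_annealing(loss, w, t0=1.0, cooling=0.95, steps=2000):
    """Minimize loss(w) by perturbing one weight at a time and accepting
    worse moves with probability exp(-delta/T) (Metropolis rule)."""
    best, best_loss, cur_loss, t = w[:], loss(w), loss(w), t0
    for _ in range(steps):
        cand = w[:]
        cand[random.randrange(len(w))] += random.gauss(0, 0.1)
        delta = loss(cand) - cur_loss
        if delta < 0 or random.random() < math.exp(-delta / t):
            w, cur_loss = cand, cur_loss + delta
            if cur_loss < best_loss:
                best, best_loss = w[:], cur_loss
        t *= cooling   # geometric cooling schedule
    return best, best_loss

# Placeholder loss standing in for a network's validation error.
loss = lambda w: sum((x - 1.0) ** 2 for x in w)
print(simulated_annealing(loss, [0.0] * 5)[1])
```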
The impact on midlevel vision of statistically optimal divisive normalization in V1.
Coen-Cagli, Ruben; Schwartz, Odelia
2013-07-15
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
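The canonical divisive normalization computation has a standard form, r_i = d_i^n / (sigma^n + sum_j w_ij d_j^n); a small sketch with illustrative parameters, not the values fit in the paper:

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1, n=2.0, weights=None):
    """Canonical model: r_i = drive_i^n / (sigma^n + sum_j w_ij * drive_j^n).
    'drive' holds rectified linear-filter outputs for one image patch."""
    d = np.asarray(drive, float) ** n
    if weights is None:                       # uniform surround pool by default
        weights = np.ones((len(d), len(d))) / len(d)
    return d / (sigma ** n + weights @ d)

drive = np.array([0.9, 0.5, 0.1, 0.05])      # hypothetical V1 filter drives
print(divisive_normalization(drive))
```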
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task and data parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective execution model. In this paper, we present an algorithm to compute the appropriate mix of task and data parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2010-12-01
A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grids. We develop three differentiated protection service provisioning strategies that provide security-level guarantees and network-resource optimization for workflow-based applications. Simulations demonstrate that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure-probability requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J; Sisniega, A; Zbijewski, W
Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point of care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of the x-ray scatter correction necessary for detection of low-contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10–25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of a system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ~0.6 mm was sufficient for the spatial resolution requirements of ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model quantified the tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved by incorporating model-based iterative reconstruction techniques, also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.
Strategic Adaptation to Task Characteristics, Incentives, and Individual Differences in Dual-Tasking
Janssen, Christian P.; Brumby, Duncan P.
2015-01-01
We investigate how good people are at multitasking by comparing behavior to a prediction of the optimal strategy for dividing attention between two concurrent tasks. In our experiment, 24 participants had to interleave entering digits on a keyboard with controlling a randomly moving cursor with a joystick. The difficulty of the tracking task was systematically varied as a within-subjects factor. Participants were also exposed to different explicit reward functions that varied the relative importance of the tracking task relative to the typing task (between-subjects). Results demonstrate that these changes in task characteristics and monetary incentives, together with individual differences in typing ability, influenced how participants choose to interleave tasks. This change in strategy then affected their performance on each task. A computational cognitive model was used to predict performance for a wide set of alternative strategies for how participants might have possibly interleaved tasks. This allowed for predictions of optimal performance to be derived, given the constraints placed on performance by the task and cognition. A comparison of human behavior with the predicted optimal strategy shows that participants behaved near optimally. Our findings have implications for the design and evaluation of technology for multitasking situations, as consideration should be given to the characteristics of the task, but also to how different users might use technology depending on their individual characteristics and their priorities. PMID:26161851
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machnes, S.; Institute for Theoretical Physics, University of Ulm, D-89069 Ulm; Sander, U.
2011-08-15
To pave the way toward novel applications in quantum simulation, computation, and technology, increasingly large quantum systems must be steered with high precision. Iteratively turning the time course of pulses, i.e., piecewise-constant control amplitudes, into an optimized shape is a typical task amenable to numerical optimal control. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods, which update all controls concurrently, and Krotov-type methods, which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient MATLAB-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods, as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
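DYNAMO itself is MATLAB-based; as a language-consistent illustration of the GRAPE idea, concurrent gradient updates of piecewise-constant controls to maximize gate fidelity, here is a minimal two-level example with finite-difference gradients (the Hamiltonian, step sizes, and iteration count are arbitrary choices, not DYNAMO's defaults):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = expm(-1j * np.pi / 4 * sx)          # target gate: rotation about x

def propagator(u, dt=0.1):
    """Evolution under piecewise-constant controls u (the GRAPE parameterization)."""
    U = np.eye(2, dtype=complex)
    for uk in u:
        U = expm(-1j * dt * (sz + uk * sx)) @ U   # drift sz plus controlled sx term
    return U

def fidelity(u):
    return abs(np.trace(U_target.conj().T @ propagator(u))) ** 2 / 4

rng = np.random.default_rng(0)
u = rng.normal(0, 0.5, 20)                     # 20 time slices, random start
for _ in range(300):                           # concurrent (GRAPE-style) updates
    grad = np.array([(fidelity(u + 1e-6 * e) - fidelity(u)) / 1e-6
                     for e in np.eye(len(u))])
    u += 1.0 * grad
print("fidelity:", round(fidelity(u), 4))
```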
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The rapid growth of data storage has become a strategic concern in the world of networking. Storage depends on sensor nodes called producers, on base stations, and on consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding optimal data storage positions in wireless sensor networks. Earlier work did not utilize swarm intelligence based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes. A hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved by the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers in finding optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
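The hybrid PSO and fuzzy C-means details are not given in the abstract; the sketch below shows plain continuous PSO placing a single storage node under a squared-distance transmission-cost placeholder, with made-up producer and consumer locations:

```python
import numpy as np

rng = np.random.default_rng(42)
producers = rng.uniform(0, 100, (20, 2))   # sensor (producer) positions, hypothetical
consumers = rng.uniform(0, 100, (5, 2))    # consumer positions, hypothetical

def energy_cost(p):
    """Placeholder: transmission energy grows with squared distance to the storage node."""
    return (np.sum(np.linalg.norm(producers - p, axis=1) ** 2)
            + np.sum(np.linalg.norm(consumers - p, axis=1) ** 2))

# Standard PSO with inertia plus cognitive/social terms.
x = rng.uniform(0, 100, (30, 2))
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([energy_cost(p) for p in x])
gbest = pbest[pcost.argmin()]
for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x += v
    c = np.array([energy_cost(p) for p in x])
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    gbest = pbest[pcost.argmin()]
print(gbest, energy_cost(gbest))
```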
DOE Office of Scientific and Technical Information (OSTI.GOV)
Advani, S.H.; Lee, T.S.; Moon, H.
1992-10-01
The analysis of pertinent energy components or affiliated characteristic times for hydraulic stimulation processes serves as an effective tool for fracture configuration design, optimization, and control. This evaluation, in conjunction with parametric sensitivity studies, provides a rational basis for quantifying dominant process mechanisms and the roles of specified reservoir properties relative to controllable hydraulic fracture variables for a wide spectrum of treatment scenarios. Results are detailed for the following multi-task effort: (a) application of the characteristic time concept and parametric sensitivity studies for specialized fracture geometries (rectangular, penny-shaped, elliptical) and three-layered elliptic crack models (in situ stress, elastic moduli, and fracture toughness contrasts); (b) incorporation of leak-off effects for the models investigated in (a); (c) simulation of generalized hydraulic fracture models and investigation of the role of controllable variables and uncontrollable system properties; (d) development of guidelines for hydraulic fracture design and optimization.
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.
2015-05-01
Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal is to find an optimal weight set for the network during training. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, combining the best features of two known algorithms, back-propagation (BP) and the Levenberg-Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.
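Cuckoo search generates new candidate solutions by Lévy flights; a common implementation draws the heavy-tailed step with Mantegna's algorithm, sketched below (the CSLM coupling to back-propagation and LM is not reproduced):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=np.random.default_rng(0)):
    """Mantegna's algorithm: draws a step from a heavy-tailed Levy distribution."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# One cuckoo-search move: perturb a nest (weight vector) toward the best nest.
best = np.zeros(10)                 # current best solution (e.g., network weights)
nest = np.ones(10)
new_nest = nest + 0.01 * levy_step(10) * (nest - best)
print(new_nest)
```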
Flight Control Development for the ARH-70 Armed Reconnaissance Helicopter Program
NASA Technical Reports Server (NTRS)
Christensen, Kevin T.; Campbell, Kip G.; Griffith, Carl D.; Ivler, Christina M.; Tischler, Mark B.; Harding, Jeffrey W.
2008-01-01
In July 2005, Bell Helicopter won the U.S. Army's Armed Reconnaissance Helicopter competition to produce a replacement for the OH-58 Kiowa Warrior capable of performing the armed reconnaissance mission. To meet the U.S. Army requirement that the ARH-70A have Level 1 handling qualities for the scout rotorcraft mission task elements defined by ADS-33E-PRF, Bell equipped the aircraft with its generic automatic flight control system (AFCS). Under the constraints of the tight ARH-70A schedule, the development team used modern parameter identification and control law optimization techniques to optimize the AFCS gains to simultaneously meet multiple handling qualities design criteria. This paper will show how linear modeling, control law optimization, and simulation have been used to produce a Level 1 scout rotorcraft for the U.S. Army, while minimizing the amount of flight testing required for AFCS development and handling qualities evaluation of the ARH-70A.
Display/control requirements for automated VTOL aircraft
NASA Technical Reports Server (NTRS)
Hoffman, W. C.; Kleinman, D. L.; Young, L. R.
1976-01-01
A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configuration evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required by the display design methodology include explicit optimization of control/monitoring attention, simultaneous monitoring and control performance predictions, and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined system performance at six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop reasonable plans for scheduling power generation. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a complex problem is therefore a very difficult task. This paper presents an interval programming model with a two-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem, searching for feasible, preliminary solutions to construct the Pareto set. The simulated annealing method is then used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663
Solving optimization problems by the public goods game
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2017-09-01
We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are each provided with a random solution to the given problem. Agents interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, and suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
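A minimal sketch of the imitation mechanism described above: agents hold random tours, fitness is the inverse tour length, and poorer agents copy richer ones with a small local mutation (the contribution and payoff accounting of the actual Public Goods Game is omitted):

```python
import random

def tour_length(tour, pts):
    """Closed-tour Euclidean length; tour[i-1] wraps to the last city at i=0."""
    return sum(((pts[tour[i]][0] - pts[tour[i - 1]][0]) ** 2
                + (pts[tour[i]][1] - pts[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(15)]   # 15 cities
agents = [random.sample(range(15), 15) for _ in range(50)]      # random tours

for _ in range(2000):
    a, b = random.sample(range(len(agents)), 2)
    fa, fb = 1 / tour_length(agents[a], pts), 1 / tour_length(agents[b], pts)
    poor, rich = (a, b) if fa < fb else (b, a)
    agents[poor] = agents[rich][:]                # imitate the richer agent's solution
    i, j = sorted(random.sample(range(15), 2))    # plus a small local mutation
    agents[poor][i:j] = reversed(agents[poor][i:j])

print(min(tour_length(t, pts) for t in agents))
```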
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce the dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially varying regularization strength (β): the former via an exhaustive search through discrete values, and the latter using an alternating optimization in which β was exhaustively optimized locally and interpolated to form a spatially varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work run counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
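The detectability index underlying the maxi-min objective has a standard Fourier-domain form; assuming the nonprewhitening observer, d'^2 = [sum (MTF*W)^2]^2 / sum NPS*(MTF*W)^2, it can be evaluated from sampled arrays (all three arrays below are synthetic stand-ins, not measured system data):

```python
import numpy as np

def d_prime_npw(mtf, nps, w_task, df):
    """Nonprewhitening-observer detectability from 2-D frequency-domain arrays."""
    s2 = (mtf * w_task) ** 2                      # squared signal spectrum
    num = (s2.sum() * df * df) ** 2
    den = (nps * s2).sum() * df * df
    return np.sqrt(num / den)

# Synthetic stand-ins on a 2-D spatial-frequency grid (cycles/mm).
f = np.fft.fftfreq(128, d=0.5)
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
mtf = np.sinc(rho)                                # toy system MTF
nps = 1e-6 * (0.1 + rho)                          # toy noise-power spectrum
w_task = np.exp(-rho**2 / 0.02) - np.exp(-rho**2 / 0.005)   # mid-frequency task
print(d_prime_npw(mtf, nps, w_task, df=abs(f[1] - f[0])))
```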
A Comparison of Three Determinants of an Engagement Index for Use in a Simulated Flight Environment
NASA Technical Reports Server (NTRS)
Hitt, James M., II
1995-01-01
The following report details a project design to be completed by the end of the year. Determining how engaged a person is in a task is difficult, and there are many ways to assess engagement; one method is to use psychophysiological measures. The current study focuses on three determinants of an engagement index proposed by researchers at NASA Langley (Pope, A. T., Bogart, E. H., and Bartolome, D. S., 1995). The index (20 Beta/(Alpha+Theta)) uses EEG power bands to determine a person's level of engagement while performing a compensatory tracking task. The tracking task switches between manual and automatic modes. Participants experience both positive and negative feedback within each of the three trials. The difficulty of the tracking task is altered depending on the participant's current engagement index. The rationale of this study is to determine the optimal level of engagement for peak performance. The three determinants are based on an absolute index, which differs from past research using a slope index.
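The cited index is explicit, so it can be estimated directly from an EEG channel via Welch band powers; the band limits below are the conventional ones and the signal is synthetic:

```python
import numpy as np
from scipy.signal import welch

def engagement_index(eeg, fs):
    """Pope et al. index: 20 * beta / (alpha + theta), from Welch PSD band powers."""
    f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band = lambda lo, hi: psd[(f >= lo) & (f < hi)].sum()
    theta, alpha, beta = band(4, 8), band(8, 13), band(13, 22)
    return 20 * beta / (alpha + theta)

fs = 256
t = np.arange(0, 40, 1 / fs)
rng = np.random.default_rng(7)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha component
       + 0.5 * np.sin(2 * np.pi * 18 * t)  # beta component
       + rng.normal(0, 0.5, t.size))       # broadband noise
print(engagement_index(eeg, fs))
```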
NASA Astrophysics Data System (ADS)
Halbrügge, Marc
2010-12-01
This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge, which compares computational cognitive models of human behavior during an open-ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models, while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings in the training data. In-depth analysis of the data set prior to development of the model led to the dismissal of correlations and other parametric statistics as goodness-of-fit indicators. A new statistical measure based on rank orders and sequence-matching techniques is proposed instead. This measure, when applied to the human sample, also identifies clusters of subjects who use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed...Cache Miss Minimization Technology To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended our...it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of the scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV, as a mobile robot, provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single-objective practices, is a complex and combinatorial process. In the main part of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after optimization proved the applicability of all three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, with the mean AGV operation efficiency found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model were performed by simulation via Flexsim software. PMID:28263994
Minimal perceptrons for memorizing complex patterns
NASA Astrophysics Data System (ADS)
Pastor, Marissa; Song, Juyong; Hoang, Danh-Tai; Jo, Junghyo
2016-11-01
Feedforward neural networks have been investigated to understand learning and memory, as well as applied to numerous practical problems in pattern classification. It is a rule of thumb that more complex tasks require larger networks. However, the design of optimal network architectures for specific tasks is still an unsolved fundamental problem. In this study, we consider three-layered neural networks for memorizing binary patterns. We developed a new complexity measure of binary patterns, and estimated the minimal network size for memorizing them as a function of their complexity. We formulated the minimal network size for regular, random, and complex patterns. In particular, the minimal size for complex patterns, which are neither ordered nor disordered, was predicted by measuring their Hamming distances from known ordered patterns. Our predictions agree with simulations based on the back-propagation algorithm.
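A companion sketch of the memorization setting: a three-layered (one hidden layer) network trained by plain backpropagation to memorize random binary patterns, with arbitrary sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, (20, 16)).astype(float)   # 20 random 16-bit patterns
y = rng.integers(0, 2, (20, 1)).astype(float)    # random binary labels to memorize

W1 = rng.normal(0, 0.5, (16, 12)); b1 = np.zeros(12)   # hidden layer of 12 units
W2 = rng.normal(0, 0.5, (12, 1));  b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):                             # vanilla backpropagation
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # squared-error output gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print("memorized:", np.all((out > 0.5) == (y > 0.5)))
```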
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided for operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperation for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task by each operator, since operators differ in their individual characteristics. The optimal viewing arrangement and system parameter configuration are determined and stored for each operator performing each of many types of tasks, in order to aid the automated setup of optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces, and torques are used, together with the operator's identity, to identify the type of task currently being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), resource usage is closely tied to the execution of tasks that consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks has been well studied in the past, its counterpart for WSNs remains largely unexplored; existing traditional high performance computing solutions cannot be directly implemented in WSNs due to limitations such as restricted resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA), with a well-designed particle position code and fitness function, is proposed. A mutation operator that effectively improves the algorithm's global search ability and population diversity is also introduced. Finally, simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971
Dynamic Task Performance, Cohesion, and Communications in Human Groups.
Giraldo, Luis Felipe; Passino, Kevin M
2016-10-01
In the study of the behavior of human groups, it has been observed that there is a strong interaction between the cohesiveness of a group, its performance when solving a task, and the patterns of communication between its members. Developing mathematical and computational tools for the analysis and design of task-solving groups that are both cohesive and high performing is important in the social sciences, organizational management, and engineering. In this paper, we model a human group as a dynamical system whose behavior is driven by a task optimization process and by the interactions, described as attractions and repulsions, between subsystems that represent the members of the group, interconnected according to a given communication network. We show that the dynamics characterized by the proposed mathematical model are qualitatively consistent with those observed in real human groups, where the key aspect is that the attraction patterns in the group and the commitment to solving the task are not static but change over time. Through a theoretical analysis of the system we provide conditions on the parameters that allow the group to behave cohesively, and Monte Carlo simulations are used to study group dynamics for different sets of parameters, communication topologies, and tasks to solve.
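The attraction/repulsion interactions plus a task-gradient term can be simulated directly; the sketch below uses a common swarm-aggregation form with placeholder parameters, not those identified in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, (12, 2))          # 12 members in a 2-D opinion/position space
task_min = np.array([2.0, -1.0])         # minimizer of a quadratic task landscape

for _ in range(500):                      # Euler integration of the group dynamics
    force = -0.1 * (x - task_min)         # gradient of the task term
    for i in range(len(x)):
        d = x[i] - x                      # vectors from every member to member i
        r = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        # linear attraction minus short-range exponential repulsion
        force[i] += (-0.05 * d + 0.5 * d / r * np.exp(-r)).sum(axis=0)
    x += 0.1 * force

print("spread:", x.std(axis=0), "center:", x.mean(axis=0))
```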
NASA Astrophysics Data System (ADS)
Brewer, Jeffrey David
The National Aeronautics and Space Administration is planning for long-duration manned missions to the Moon and Mars. For feasible long-duration space travel, improvements in exercise countermeasures are necessary to maintain cardiovascular fitness, bone mass throughout the body and the ability to perform coordinated movements in a constant gravitational environment that is six orders of magnitude higher than the "near weightlessness" condition experienced during transit to and/or orbit of the Moon, Mars, and Earth. In such gravitational transitions feedback and feedforward postural control strategies must be recalibrated to ensure optimal locomotion performance. In order to investigate methods of improving postural control adaptation during these gravitational transitions, a treadmill based precision stepping task was developed to reveal changes in neuromuscular control of locomotion following both simulated partial gravity exposure and post-simulation exercise countermeasures designed to speed lower extremity impedance adjustment mechanisms. The exercise countermeasures included a short period of running with or without backpack loads immediately after partial gravity running. A novel suspension type partial gravity simulator incorporating spring balancers and a motor-driven treadmill was developed to facilitate body weight off loading and various gait patterns in both simulated partial and full gravitational environments. Studies have provided evidence that suggests: the environmental simulator constructed for this thesis effort does induce locomotor adaptations following partial gravity running; the precision stepping task may be a helpful test for illuminating these adaptations; and musculoskeletal loading in the form of running with or without backpack loads may improve the locomotor adaptation process.
Gómez, Pablo; Patel, Rita R.; Alexiou, Christoph; Bohr, Christopher; Schützenberger, Anne
2017-01-01
Motivation: Human voice is generated in the larynx by the two oscillating vocal folds. Owing to the limited space and accessibility of the larynx, detailed endoscopic investigation of the actual phonatory process is challenging, and the biomechanics of human phonation are still not fully understood. We therefore adapt a mathematical model of the vocal folds to recorded vocal fold oscillations to quantify gender- and age-related differences expressed through the computed biomechanical model parameters. Methods: The vocal fold dynamics are visualized by laryngeal high-speed videoendoscopy (4000 fps). A total of 33 healthy young subjects (16 females, 17 males) and 11 elderly subjects (5 females, 6 males) were recorded. A numerical two-mass model is adapted to the recorded vocal fold oscillations by varying model masses, stiffness, and subglottal pressure. For adapting the model to the recorded dynamics, three different optimization algorithms (Nelder-Mead, Particle Swarm Optimization, and Simulated Bee Colony) in combination with three cost functions were considered for applicability. Gender differences and age-related kinematic differences reflected by the model parameters were analyzed. Results and conclusion: The biomechanical model in combination with numerical optimization techniques allowed phonatory behavior to be simulated and the laryngeal parameters involved to be quantified. All three optimization algorithms showed promising results; however, only one cost function proved suitable for this optimization task. The resulting model parameters reflect the phonatory biomechanics of men and women well and show quantitative age- and gender-specific differences: the younger female and male groups showed lower subglottal pressures, lower stiffness, and higher masses than the corresponding elderly groups, and females exhibited higher subglottal pressures, smaller oscillation masses, and larger stiffness than similarly aged males. Optimizing numerical models toward vocal fold oscillations is useful for identifying the underlying laryngeal components controlling the phonatory process. PMID:29121085
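The model adaptation reduces to minimizing a cost between recorded and simulated trajectories; the sketch below shows that optimization shell with scipy's Nelder-Mead, substituting a trivial damped oscillator for the true two-mass model and synthetic data for the high-speed video output:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 0.02, 200)             # 20 ms of motion, 4000 fps-like sampling

def model(params):
    """Surrogate for the two-mass model: a damped oscillation whose amplitude,
    frequency, and decay stand in for mass/stiffness/pressure effects."""
    amp, freq, decay = params
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

# "Recorded" trajectory: synthetic data playing the role of videoendoscopy output.
rng = np.random.default_rng(5)
observed = model([1.0, 220.0, 30.0]) + rng.normal(0, 0.02, t.size)

cost = lambda p: np.mean((model(p) - observed) ** 2)   # one simple cost function
fit = minimize(cost, x0=[0.8, 210.0, 25.0], method="Nelder-Mead")
print(fit.x)
```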
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, including both the regularization strength that controls overall smoothness and the directional weights that permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction; strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations: given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here, a large number of processors are assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
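For the simplest case of a linear pipeline of independent tasks, throughput is limited by the slowest stage, so a greedy heuristic that repeatedly grants the next processor to the current bottleneck works well; the response-time table below is hypothetical measured data, and this sketch is a simplification, not the paper's O(np^2) algorithm:

```python
# time[i][k-1] = measured response time of task i on k processors (hypothetical).
time = [
    [9.0, 5.0, 3.6, 3.0],    # task 0
    [6.0, 3.2, 2.4, 2.0],    # task 1
    [12.0, 6.5, 4.6, 4.0],   # task 2
]

def allocate(time, p):
    """Greedily hand each spare processor to the stage that currently limits throughput."""
    alloc = [1] * len(time)                      # every task needs >= 1 processor
    for _ in range(p - len(time)):               # distribute the remaining processors
        stage = max(range(len(time)),            # find the bottleneck stage
                    key=lambda i: time[i][alloc[i] - 1])
        if alloc[stage] < len(time[stage]):      # respect the measured table's range
            alloc[stage] += 1
    return alloc

alloc = allocate(time, p=8)
print(alloc, "bottleneck:", max(time[i][alloc[i] - 1] for i in range(len(time))))
```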
NASA Technical Reports Server (NTRS)
1976-01-01
In the conceptual design task, several feasible wind generator system (WGS) configurations were evaluated, and the concept offering the lowest energy cost potential and minimum technical risk for utility applications was selected. In the optimization task, the selected concept was optimized using a parametric computer program prepared for this purpose. In the preliminary design task, the optimized concept was designed and analyzed in detail. The utility requirements evaluation task examined the economic, operational, and institutional factors affecting the WGS in a utility environment, and provided additional guidance for the preliminary design effort. Results of the conceptual design task indicated that a rotor operating at constant speed and driving an AC generator through a gear transmission is the most cost-effective WGS configuration. The optimization task results led to the selection of a 500 kW rating for the low-power WGS and a 1500 kW rating for the high-power WGS.
ERIC Educational Resources Information Center
Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…
NASA Technical Reports Server (NTRS)
Carpenter, Michele; Jackson, Kimberly; Cohanim, Babak; Duda, Kevin R.; Rize, Jared; Dopart, Celena; Hoffman, Jeffrey; Curiel, Pedro; Studak, Joseph; Ponica, Dina;
2013-01-01
Looking ahead to the human exploration of Mars, NASA is planning for exploration of near-Earth asteroids and the Martian moons. Performing tasks near the surface of such low-gravity objects will likely require the use of an updated version of the Manned Maneuvering Unit (MMU) since the surface gravity is not high enough to allow astronauts to walk, or have sufficient resistance to counter reaction forces and torques during movements. The extravehicular activity (EVA) Jetpack device currently under development is based on the Simplified Aid for EVA Rescue (SAFER) unit and has maneuvering capabilities to assist EVA astronauts with their tasks. This maneuvering unit has gas thrusters for attitude control and translation. When EVA astronauts are performing tasks that require fine motor control such as sample collection and equipment placement, the current control system will fire thrusters to compensate for the resulting changes in center-of-mass location and moments of inertia, adversely affecting task performance. The proposed design of a next-generation maneuvering and stability system incorporates control concepts optimized to support astronaut tasks and adds control-moment gyroscopes (CMGs) to the current Jetpack system. This design aims to reduce fuel consumption, as well as improve task performance for astronauts by providing a stiffer work platform. The high-level control architecture for an EVA maneuvering system using both thrusters and CMGs considers an initial assessment of tasks to be performed by an astronaut and an evaluation of the corresponding human-system dynamics. For a scenario in which the astronaut orbits an asteroid, simulation results from the current EVA maneuvering system are compared to those from a simulation of the same system augmented with CMGs, demonstrating that the forces and torques on an astronaut can be significantly reduced with the new control system actuation while conserving onboard fuel.
Task-driven imaging in cone-beam computed tomography.
Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H
Conventional workflow in interventional imaging often ignores a wealth of prior information of the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefits for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in views contributing the most to the signal power associated with the imaging task. For example, detectability of a line pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
Joint optimization of fluence field modulation and regularization in task-driven computed tomography
NASA Astrophysics Data System (ADS)
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-03-01
Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
Potential roles for EVA and telerobotics in a unified worksite
NASA Astrophysics Data System (ADS)
Akin, David; Howard, Russel D.
1993-02-01
Although telerobotics and extravehicular activity (EVA) are often portrayed as competitive approaches to space operations, ongoing research in the Space Systems Laboratory (SSL) has demonstrated the utility of cooperative roles in an integrated EVA/telerobotic work site. Working in the neutral buoyancy simulation environment, tests were performed on interactive roles of EVA subjects and telerobots in structural assembly and satellite servicing tasks. In the most elaborate of these tests to date, EVA subjects were assisted by the SSL's Beam Assembly Teleoperator (BAT) in several servicing tasks planned for Hubble Space Telescope, using the high-fidelity crew training article in the NASA Marshall Neutral Buoyancy Simulator. These tests revealed several shortcomings in the design of BAT for satellite servicing and demonstrated the utility of a free-flying or RMS-mounted telerobot for providing EVA crew assistance. This paper documents the past tests, including the use of free-flying telerobots to effect the rescue of a simulated incapacitated EVA subject, and details planned future efforts in this area, including the testing of a new telerobotic system optimized for the satellite servicing role, the development of dedicated telerobotic devices designed specifically for assisting EVA crew, and conceptual approaches to advanced EVA/telerobotic operations such as the Astronaut Operations Vehicle.
Development of a task-level robot programming and simulation system
NASA Technical Reports Server (NTRS)
Liu, H.; Kawamura, K.; Narayanan, S.; Zhang, G.; Franke, H.; Ozkan, M.; Arima, H.; Liu, H.
1987-01-01
An ongoing project in developing a Task-Level Robot Programming and Simulation System (TARPS) is discussed. The objective of this approach is to design a generic TARPS that can be used in a variety of applications. Many robotic applications require off-line programming, and a TARPS is very useful in such applications. Task level programming is object centered in that the user specifies tasks to be performed instead of robot paths. Graphics simulation provides greater flexibility and also avoids costly machine setup and possible damage. A TARPS has three major modules: world model, task planner and task simulator. The system architecture, design issues and some preliminary results are given.
Assessment of mass detection performance in contrast enhanced digital mammography
NASA Astrophysics Data System (ADS)
Carton, Ann-Katherine; de Carvalho, Pablo M.; Li, Zhijin; Dromain, Clarisse; Muller, Serge
2015-03-01
Using simulated CESM images, we address the detectability of contrast-agent-enhancing masses in contrast-enhanced spectral mammography (CESM), a dual-energy technique providing functional projection images of breast tissue perfusion and vascularity. First, the realism of simulated CESM images from anthropomorphic breast software phantoms generated with a software X-ray imaging platform was validated. Breast texture was characterized by power-law coefficients calculated in data sets of real clinical and simulated images. We also performed a 2-alternative forced-choice (2-AFC) psychophysical experiment whereby simulated and real images were presented side-by-side to an experienced radiologist to test if real images could be distinguished from the simulated images. It was found that texture in our simulated CESM images has a fairly realistic appearance. Next, the relative performance of human readers and previously developed mathematical observers was assessed for the detection of iodine-enhancing mass lesions containing different contrast agent concentrations. A four-alternative forced-choice (4-AFC) task was designed; the task for the model and human observers was to detect which one of the four simulated dual-energy (DE) recombined images contained an iodine-enhancing mass. Our results showed that the NPW and NPWE models largely outperform the human observers. After introduction of an internal noise component, both model observers approached human performance. The CHO observer performs slightly worse than the average human observer. There is still work to be done in improving model observers as predictors of human-observer performance. Larger trials could also improve our test statistics. We hope that in the future, this framework of software breast phantoms, virtual image acquisition and processing, and mathematical observers can be beneficial for optimizing CESM imaging techniques.
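To make the observer models concrete, below is a minimal Monte Carlo sketch (not the study's imaging chain) of an NPW observer performing a 4-AFC detection task on synthetic white-noise images; the Gaussian signal, noise level, and internal-noise factor are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, sigma, internal = 32, 2000, 1.0, 0.5
y, x = np.mgrid[:n, :n]
signal = 0.4 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 18.0)

correct = 0
for _ in range(trials):
    imgs = rng.normal(0.0, sigma, (4, n, n))       # four noise-only alternatives
    loc = rng.integers(4)
    imgs[loc] += signal                            # one contains the mass
    # NPW decision variable: correlate each alternative with the signal template
    scores = np.tensordot(imgs, signal, axes=([1, 2], [0, 1]))
    scores += rng.normal(0.0, internal * scores.std(), 4)   # internal noise
    correct += int(np.argmax(scores) == loc)

print(f"4-AFC percent correct: {100 * correct / trials:.1f}%")
```

Raising the internal-noise factor degrades the model's percent correct toward human levels, mirroring the internal-noise adjustment described in the abstract.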
NASA Technical Reports Server (NTRS)
Smith, Jeffrey D.; Twombly, I. Alexander; Maese, A. Christopher; Cagle, Yvonne; Boyle, Richard
2003-01-01
The International Space Station demonstrates the greatest capabilities of human ingenuity, international cooperation and technology development. The complexity of this space structure is unprecedented; and training astronaut crews to maintain all its systems, as well as perform a multitude of research experiments, requires the most advanced training tools and techniques. Computer simulation and virtual environments are currently used by astronauts to train for robotic arm manipulations and extravehicular activities; but now, with the latest computer technologies and recent successes in areas of medical simulation, the capability exists to train astronauts for more hands-on research tasks using immersive virtual environments. We have developed a new technology, the Virtual Glovebox (VGX), for simulation of experimental tasks that astronauts will perform aboard the Space Station. The VGX may also be used by crew support teams for design of experiments, testing equipment integration capability and optimizing the procedures astronauts will use. This is done through the 3D, desk-top sized, reach-in virtual environment that can simulate the microgravity environment in space. Additional features of the VGX allow for networking multiple users over the internet and operation of tele-robotic devices through an intuitive user interface. Although the system was developed for astronaut training and assisting support crews, Earth-bound applications, many emphasizing homeland security, have also been identified. Examples include training experts to handle hazardous biological and/or chemical agents in a safe simulation, operation of tele-robotic systems for assessing and defusing threats such as bombs, and providing remote medical assistance to field personnel through a collaborative virtual environment. Thus, the emerging VGX simulation technology, while developed for space-based applications, can serve a dual use facilitating homeland security here on Earth.
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. Unexpected positive FTA ratings in the presence of functional deficits suggest that further revision of the survey method is required.
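Lawshe's method reduces expert "essential / not essential" ratings to a content validity ratio (CVR) per checklist item; a minimal sketch, with invented ratings:

```python
def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2), in [-1, 1]."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# e.g. 9 of 11 panelists rate an item "essential"
print(round(cvr(9, 11), 2))   # 0.64, above Lawshe's critical value (~0.59) for N=11
```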
On scheduling task systems with variable service times
NASA Astrophysics Data System (ADS)
Maset, Richard G.; Banawan, Sayed A.
1993-08-01
Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
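As background for the strategies discussed, here is a minimal sketch of plain critical-path list scheduling (the baseline being extended), on an invented DAG with deterministic service times: tasks are ranked by the longest remaining path, then greedily dispatched to the earliest-free processor.

```python
import heapq
from functools import lru_cache

tasks = {"a": 3, "b": 2, "c": 4, "d": 2, "e": 1}          # service times (invented)
succ = {"a": ["c", "d"], "b": ["d"], "c": ["e"], "d": ["e"], "e": []}

@lru_cache(maxsize=None)
def rank(t):
    """Length of the longest path from t to the exit (critical-path rank)."""
    return tasks[t] + max((rank(s) for s in succ[t]), default=0)

def schedule(n_procs):
    preds = {t: sum(t in s for s in succ.values()) for t in tasks}
    free = [(0.0, i) for i in range(n_procs)]              # (free time, cpu)
    heapq.heapify(free)
    ready = [t for t in tasks if preds[t] == 0]
    finish = {}
    while ready:
        ready.sort(key=rank, reverse=True)                 # highest rank first
        t = ready.pop(0)
        earliest = max((finish[p] for p in tasks if t in succ[p]), default=0.0)
        now, cpu = heapq.heappop(free)
        finish[t] = max(now, earliest) + tasks[t]
        heapq.heappush(free, (finish[t], cpu))
        for s in succ[t]:
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)
    return max(finish.values())

print("makespan on 2 processors:", schedule(2))
```

With highly variable service times, the fixed ranks computed here become stale, which is exactly the weakness the abstract's adaptive extension addresses.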
A Comparison of Two Methods Used for Ranking Task Exposure Levels Using Simulated Multi-Task Data
1999-12-17
Artificial intelligence for the CTA Observatory scheduler
NASA Astrophysics Data System (ADS)
Colomé, Josep; Colomer, Pau; Campreciós, Jordi; Coiffard, Thierry; de Oña, Emma; Pedaletti, Giovanna; Torres, Diego F.; Garcia-Piquer, Alvaro
2014-08-01
The Cherenkov Telescope Array (CTA) project will be the next generation ground-based very high energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure that is included in the roadmap of the ESFRI projects since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory. Two sites are foreseen in the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with a highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of the observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real-time, the selected task to the varying execution constraints (i.e., Targets of Opportunity, health or status of the system components, environment conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints to CTA task scheduling that helped classify it as a Flexible Job-Shop Problem and find its optimal solution based on Artificial Intelligence techniques. We describe the scheduler prototype that uses a Guarded Discrete Stochastic Neural Network (GDSN), for an easy representation of the possible long- and short-term planning solutions, and Constraint Propagation techniques. A simulation platform, an analysis tool and different test case scenarios for CTA were developed to test the performance of the scheduler and are also described.
Effects of Motion Cues on the Training of Multi-Axis Manual Control Skills
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Mobertz, Xander R. I.
2017-01-01
The study described in this paper investigated the effects of two different hexapod motion configurations on the training and transfer of training of a simultaneous roll and pitch control task. Pilots were divided between two groups which trained either under a baseline hexapod motion condition, with motion typically provided by current training simulators, or an optimized hexapod motion condition, with increased fidelity of the motion cues most relevant for the task. All pilots transferred to the same full-motion condition, representing motion experienced in flight. A cybernetic approach was used that gave insights into the development of pilots' use of visual and motion cues over the course of training and after transfer. Based on the current results, neither of the hexapod motion conditions can unambiguously be chosen as providing the best motion for training and transfer of training of the used multi-axis control task. However, the optimized hexapod motion condition did allow pilots to generate less visual lead, control with higher gains, and have better disturbance-rejection performance at the end of the training session compared to the baseline hexapod motion condition. Significant adaptations in control behavior still occurred in the transfer phase under the full-motion condition for both groups. Pilots behaved less linearly compared to previous single-axis control-task experiments; however, this did not result in smaller motion or learning effects. Motion and learning effects were more pronounced in pitch compared to roll. Finally, valuable lessons were learned that allow us to improve the adopted approach for future transfer-of-training studies.
Method for Household Refrigerators Efficiency Increasing
NASA Astrophysics Data System (ADS)
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing the working-process parameters of air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The parameter optimization criteria are considered, the target functions are analyzed, and the key factors of technical and economic optimization are discussed. The optimal solution of the multi-objective system optimization is found by minimizing a two-component objective vector, constructed by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The research results show that the technical and economic parameters of air conditioning systems deviate considerably from the minimum values outside the neighborhood of the optimal solutions. At the same time, the deviations grow significantly as the technical parameters move away from the values that are optimal for both capital investment and operating costs. The production and operation of conditioners with parameters that deviate considerably from the optimal values will lead to increased material and power costs. The research makes it possible to establish the boundaries of the region of optimal values for the technical and economic parameters in air conditioning system design.
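A minimal sketch of the linear weighted compromise described above: a single design variable trades an increasing capital cost against a decreasing operating cost, and scanning the weight traces out Pareto-optimal designs. Both cost curves are invented for illustration (the study works in MathCAD; Python is used here).

```python
import numpy as np

x = np.linspace(0.5, 5.0, 500)     # design variable, arbitrary units
capital = 20.0 + 15.0 * x          # grows with equipment size (invented)
operating = 120.0 / x              # falls as efficiency improves (invented)

for w in (0.2, 0.5, 0.8):          # weight of the linear compromise
    total = w * capital + (1.0 - w) * operating
    i = int(np.argmin(total))
    print(f"w={w}: x*={x[i]:.2f}, capital={capital[i]:.1f}, operating={operating[i]:.1f}")
```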
Wireless Channel Characterization in the Airport Surface Environment
NASA Technical Reports Server (NTRS)
Neville, Joshua T.
2004-01-01
Given the anticipated increase in air traffic in the coming years, modernization of the National Airspace System (NAS) is a necessity. Part of this modernization effort will include updating current communication, navigation, and surveillance (CNS) systems to deal with the increased traffic as well as developing advanced CNS technologies for the systems. An example of such technology is the integrated CNS (ICNS) network being developed by the Advanced CNS Architecture and Systems Technology (ACAST) group for use in the airport surface environment. The ICNS network would be used to convey voice/data between users in a secure and reliable manner. The current surface system only supports voice and does so through an obsolete physical infrastructure. The old system is vulnerable to outages and costly to maintain. The proposed ICNS network will include a wireless radio link. To ensure optimal performance, a thorough and accurate characterization of the channel across which the link would operate is necessary. The channel is the path the signal takes from the transmitter to the receiver and is prone to various forms of interference. Channel characterization involves a combination of analysis, simulation, and measurement. My work this summer was divided into four tasks. The first task required compiling and reviewing reference material that dealt with the characterization and modeling of aeronautical channels. The second task involved developing a systematic approach that could be used to group airports into classes, e.g. small airfields, medium airports, large open airports, large cluttered airports, etc. The third task consisted of implementing computer simulations of existing channel models. The fourth task entailed measuring possible interference sources in the airport surface environment via a spectrum analyzer.
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
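One way to make "optimal data selection" concrete is expected information gain: choose the test whose outcome is expected to reduce posterior uncertainty about the competing hypotheses the most. A minimal two-hypothesis sketch with invented likelihoods (not the paper's selection-task model):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.array([0.5, 0.5])            # two competing hypotheses
# likelihoods[test] = P(positive outcome | each hypothesis), invented values
likelihoods = {"test_A": np.array([0.9, 0.5]),
               "test_B": np.array([0.6, 0.4])}

for name, lik in likelihoods.items():
    gain = 0.0
    for outcome_lik in (lik, 1 - lik):          # positive / negative outcome
        p_outcome = np.sum(prior * outcome_lik)
        posterior = prior * outcome_lik / p_outcome
        gain += p_outcome * (entropy(prior) - entropy(posterior))
    print(f"{name}: expected information gain = {gain:.3f} bits")
```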
An Efficient Ray-Tracing Method for Determining Terrain Intercepts in EDL Simulations
NASA Technical Reports Server (NTRS)
Shidner, Jeremy D.
2016-01-01
The calculation of a ray's intercept from an arbitrary point in space to a prescribed surface is a common task in computer simulations. The arbitrary point often represents an object that is moving according to the simulation, while the prescribed surface is fixed in a defined frame. For detailed simulations, this surface becomes complex, taking the form of real-world objects such as mountains, craters or valleys which require more advanced methods to accurately calculate a ray's intercept location. Incorporation of these complex surfaces has commonly been implemented in graphics systems that utilize highly optimized graphics processing units to analyze such features. This paper proposes a simplified method that does not require computationally intensive graphics solutions, but rather an optimized ray-tracing method for an assumed terrain dataset. This approach was developed for the Mars Science Laboratory mission which landed on the complex terrain of Gale Crater. First, this paper begins with a discussion of the simulation used to implement the model and the applicability of finding surface intercepts with respect to atmosphere modeling, altitude determination, radar modeling, and contact forces influencing vehicle dynamics. Next, the derivation and assumptions of the intercept finding method are presented. Key assumptions are noted, making the routines specific to only certain types of surface data sets that are equidistantly spaced in longitude and latitude. The derivation of the method relies on ray-tracing, requiring discussion on the formulation of the ray with respect to the terrain datasets. Further discussion includes techniques for ray initialization in order to optimize the intercept search. Then, the model implementation for various new applications in the simulation is demonstrated. Finally, a validation of the accuracy is presented along with the corresponding data sets used in the validation. A performance summary of the method will be shown using the analysis from the Mars Science Laboratory's terminal descent sensing model. Alternate uses will also be shown for determining horizon maps and orbiter set times.
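A heavily simplified, flat-grid sketch of the core idea (the paper's method additionally handles planetary geometry, terrain interpolation, and smarter ray initialization): march along the ray across an equidistantly spaced heightfield and return the first sample that falls below the terrain. The terrain data are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
terrain = rng.uniform(0.0, 5.0, (100, 100))   # invented heights on a unit grid

def intercept(origin, direction, step=0.25, max_dist=200.0):
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(origin, float)
    for _ in range(int(max_dist / step)):
        p = p + step * d
        i, j = int(p[0]), int(p[1])           # nearest-cell height lookup
        if not (0 <= i < terrain.shape[0] and 0 <= j < terrain.shape[1]):
            return None                       # ray left the dataset
        if p[2] <= terrain[i, j]:
            return p                          # first sample below the surface
    return None

print(intercept(origin=(0.0, 0.0, 50.0), direction=(1.0, 1.0, -0.6)))
```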
Simulation verification techniques study
NASA Technical Reports Server (NTRS)
Schoonmaker, P. B.; Wenglinski, T. H.
1975-01-01
Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.
The Application of SNiPER to the JUNO Simulation
NASA Astrophysics Data System (ADS)
Lin, Tao; Zou, Jiaheng; Li, Weidong; Deng, Ziyan; Fang, Xiao; Cao, Guofu; Huang, Xingtao; You, Zhengyun; JUNO Collaboration
2017-10-01
The JUNO (Jiangmen Underground Neutrino Observatory) is a multipurpose neutrino experiment which is designed to determine neutrino mass hierarchy and precisely measure oscillation parameters. As one of the important systems, the JUNO offline software is being developed using the SNiPER software. In this proceeding, we focus on the requirements of JUNO simulation and present the working solution based on the SNiPER. The JUNO simulation framework is in charge of managing event data, detector geometries and materials, physics processes, simulation truth information etc. It glues physics generator, detector simulation and electronics simulation modules together to achieve a full simulation chain. In the implementation of the framework, many attractive characteristics of the SNiPER have been used, such as dynamic loading, flexible flow control, multiple event management and Python binding. Furthermore, additional efforts have been made to make both detector and electronics simulation flexible enough to accommodate and optimize different detector designs. For the Geant4-based detector simulation, each sub-detector component is implemented as a SNiPER tool which is a dynamically loadable and configurable plugin. So it is possible to select the detector configuration at runtime. The framework provides the event loop to drive the detector simulation and interacts with the Geant4 which is implemented as a passive service. All levels of user actions are wrapped into different customizable tools, so that user functions can be easily extended by just adding new tools. The electronics simulation has been implemented by following an event driven scheme. The SNiPER task component is used to simulate data processing steps in the electronics modules. The electronics and trigger are synchronized by triggered events containing possible physics signals. The JUNO simulation software has been released and is being used by the JUNO collaboration to do detector design optimization, event reconstruction algorithm development and physics sensitivity studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour
Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising choices of material for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have substantial influence on the performance of this type of material. Therefore the optimization of the compression molding process is of great importance to the manufacturing practice. An efficient way to achieve the optimized design of this process would be through conducting finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such simulation remains a challenging task for FE as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of woven fabric structure. In the present study, the FE simulation of the compression molding process of resin based woven fabric composites at continuum level is conducted, which is enabled by the implementation of an integrated material modeling methodology in LS-Dyna. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models, i.e., one thermal model for temperature history in the resin, one mechanical model to update the curing-dependent properties of the resin and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-Dyna are presented as a demonstration, while validations and models with real part geometry are planned in the future work.
Application of ant colony algorithm in path planning of the data center room robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Ma, Jianming; Wang, Ying
2017-05-01
Taking an Internet Data Center (IDC) room patrol robot as the background, this work addresses the robot's autonomous obstacle avoidance and path planning along its search path, so that the robot's patrol missions can be planned in advance. The simulation results show that the improved ant colony algorithm, applied to obstacle avoidance planning for the IDC room patrol robot, guides the robot along an optimal or suboptimal, safe obstacle-avoiding path to the target point to complete its task, demonstrating the feasibility of the method.
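A compact sketch of ant colony optimization for grid path planning in this spirit (the paper's improved ACO variant and its parameters are not reproduced; the occupancy grid and constants below are invented):

```python
import random

random.seed(0)
grid = ["S..#....",
        ".#.#.##.",
        ".#...#..",
        ".###.#.#",
        "........",
        "###.##.#",
        "....#..G"]
R, C = len(grid), len(grid[0])
start, goal = (0, 0), (6, 7)
tau = {}                                   # pheromone laid on grid cells
alpha, beta, rho, Q = 1.0, 2.0, 0.3, 10.0  # assumed ACO constants

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < R and 0 <= nc < C and grid[nr][nc] != "#":
            yield nr, nc

def walk():
    path, seen = [start], {start}
    while path[-1] != goal:
        opts = [n for n in neighbors(*path[-1]) if n not in seen]
        if not opts:
            return None                    # dead end: this ant gives up
        heur = [1.0 / (abs(goal[0] - r) + abs(goal[1] - c) + 1) for r, c in opts]
        wts = [tau.get(n, 1.0) ** alpha * h ** beta for n, h in zip(opts, heur)]
        nxt = random.choices(opts, weights=wts)[0]
        path.append(nxt)
        seen.add(nxt)
    return path

best = None
for _ in range(60):                        # colony iterations
    tours = [p for p in (walk() for _ in range(20)) if p]   # 20 ants
    for cell in list(tau):
        tau[cell] *= 1 - rho               # pheromone evaporation
    for p in tours:
        for cell in p:                     # deposit: shorter tours deposit more
            tau[cell] = tau.get(cell, 1.0) + Q / len(p)
        if best is None or len(p) < len(best):
            best = p

if best:
    print(len(best) - 1, "steps:", best)
```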
Management of health care expenditure by soft computing methodology
NASA Astrophysics Data System (ADS)
Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad
2017-01-01
In this study, health care expenditure was managed by a soft computing methodology. The main goal was to predict gross domestic product (GDP) from several health care expenditure factors. Soft computing methodologies were applied since GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies. The soft computing methods benefit from global optimization capabilities that avoid local-minimum issues.
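A minimal sketch of SVR-based prediction in this spirit, using scikit-learn on synthetic stand-in features (the study's actual health-expenditure inputs and model settings are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# invented features standing in for health-expenditure factors
X = rng.uniform(0, 1, (200, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] + rng.normal(0, 0.05, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:150], y[:150])
print("R^2 on held-out data:", round(model.score(X[150:], y[150:]), 3))
```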
NASA Technical Reports Server (NTRS)
Simon, William E.; Li, Ku-Yen; Yaws, Carl L.; Mei, Harry T.; Nguyen, Vinh D.; Chu, Hsing-Wei
1994-01-01
A methyl acetate reactor was developed to perform a subscale kinetic investigation in the design and optimization of a full-scale metabolic simulator for long term testing of life support systems. Other tasks in support of the closed ecological life support system test program included: (1) heating, ventilation and air conditioning analysis of a variable pressure growth chamber, (2) experimental design for statistical analysis of plant crops, (3) resource recovery for closed life support systems, and (4) development of data acquisition software for automating an environmental growth chamber.
Molecular nanomagnets with switchable coupling for quantum simulation
Chiesa, Alessandro; Whitehead, George F. S.; Carretta, Stefano; ...
2014-12-11
Molecular nanomagnets are attractive candidate qubits because of their wide inter- and intra-molecular tunability. Uniform magnetic pulses could be exploited to implement one- and two-qubit gates in the presence of a properly engineered pattern of interactions, but the synthesis of suitable and potentially scalable supramolecular complexes has proven a very hard task. Indeed, no quantum algorithms have ever been implemented, not even a proof-of-principle two-qubit gate. In this paper we show that the magnetic couplings in two supramolecular {Cr7Ni}-Ni-{Cr7Ni} assemblies can be chemically engineered to fit the above requisites for conditional gates with no need of local control. Microscopic parameters are determined by a recently developed many-body ab-initio approach and used to simulate quantum gates. We find that these systems are optimal for proof-of-principle two-qubit experiments and can be exploited as building blocks of scalable architectures for quantum simulation.
Construct validation of a novel hybrid surgical simulator.
Broe, D; Ridgway, P F; Johnson, S; Tierney, S; Conlon, K C
2006-06-01
Simulated minimal access surgery has improved recently as both a learning and assessment tool. The construct validation of a novel simulator, ProMis, is described for use by residents in training. ProMis is a surgical simulator that can design tasks in both virtual and actual reality. A pilot group of surgical residents ranging from novice to expert completed three standardized tasks: orientation, dissection, and basic suturing. The tasks were tested for construct validity. Two experienced surgeons examined the recorded tasks in a blinded fashion using an objective structured assessment of technical skills format (OSATS: task-specific checklist and global rating score) as well as metrics delivered by the simulator. The findings showed excellent interrater reliability (Cronbach's alpha of 0.88 for the checklist and 0.93 for the global rating). The median scores in the experience groups were statistically different in both the global rating and the task-specific checklists (p < 0.05). The scores for the orientation task alone did not reach significance (p = 0.1), suggesting that modification is required before ProMis could be used in isolation as an assessment tool. The three simulated tasks in combination are construct valid for differentiating experience levels among surgeons in training. This hybrid simulator has potential added benefits of marrying the virtual with actual, and of combining simple box traits and advanced virtual reality simulation.
CATO: a CAD tool for intelligent design of optical networks and interconnects
NASA Astrophysics Data System (ADS)
Chlamtac, Imrich; Ciesielski, Maciej; Fumagalli, Andrea F.; Ruszczyk, Chester; Wedzinga, Gosse
1997-10-01
Increasing communication speed requirements have created a great interest in very high speed optical and all-optical networks and interconnects. The design of these optical systems is a highly complex task, requiring the simultaneous optimization of various parts of the system, ranging from optical components' characteristics to access protocol techniques. Currently there are no computer aided design (CAD) tools on the market to support the interrelated design of all parts of optical communication systems, thus the designer has to rely on costly and time consuming testbed evaluations. The objective of the CATO (CAD tool for optical networks and interconnects) project is to develop a prototype of an intelligent CAD tool for the specification, design, simulation and optimization of optical communication networks. CATO allows the user to build an abstract, possibly incomplete, model of the system, and determine its expected performance. Based on design constraints provided by the user, CATO will automatically complete an optimum design, using mathematical programming techniques, intelligent search methods and artificial intelligence (AI). Initial design and testing of a CATO prototype (CATO-1) have been completed recently. The objective was to prove the feasibility of combining AI techniques, simulation techniques, an optical device library and a graphical user interface into a flexible CAD tool for obtaining optimal communication network designs in terms of system cost and performance. CATO-1 is an experimental tool for designing packet-switching wavelength division multiplexing all-optical communication systems using a LAN/MAN ring topology as the underlying network. The two specific AI algorithms incorporated are simulated annealing and a genetic algorithm. CATO-1 finds the optimal number of transceivers for each network node, using an objective function that includes the cost of the devices and the overall system performance.
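A minimal sketch of the simulated-annealing ingredient: search over per-node transceiver counts, trading device cost against a toy performance penalty. The objective function and all constants are invented for illustration, not CATO-1's actual cost model.

```python
import math
import random

random.seed(0)
NODES, MAX_T = 8, 4
DEVICE_COST, LOAD = 1.0, 10.0

def objective(counts):
    cost = DEVICE_COST * sum(counts)
    delay = sum(LOAD / k for k in counts)      # toy queueing-like penalty
    return cost + delay

state = [1] * NODES
best = state[:]
temp = 10.0
for _ in range(5000):
    cand = state[:]
    i = random.randrange(NODES)
    cand[i] = min(MAX_T, max(1, cand[i] + random.choice((-1, 1))))
    delta = objective(cand) - objective(state)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = cand                           # accept downhill, or uphill w.p.
        if objective(state) < objective(best):
            best = state[:]
    temp *= 0.999                              # geometric cooling schedule

print(best, round(objective(best), 2))
```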
Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W
2016-03-01
Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.
A control-theory model for human decision-making
NASA Technical Reports Server (NTRS)
Levison, W. H.; Tanner, R. B.
1971-01-01
A model for human decision making is an adaptation of an optimal control model for pilot/vehicle systems. The models for decision and control both contain concepts of time delay, observation noise, optimal prediction, and optimal estimation. The decision making model was intended for situations in which the human bases his decision on his estimate of the state of a linear plant. Experiments are described for the following task situations: (a) single decision tasks, (b) two-decision tasks, and (c) simultaneous manual control and decision making. Using fixed values for model parameters, single-task and two-task decision performance can be predicted to within an accuracy of 10 percent. Agreement is less good for the simultaneous decision and control situation.
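The ingredients named above (observation noise, optimal prediction, optimal estimation of a linear plant's state) are the ingredients of Kalman filtering; a minimal scalar sketch under assumed plant and noise parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 0.98, 0.05, 0.5      # assumed plant gain, process and observation noise
x, xhat, P = 0.0, 0.0, 1.0     # true state, estimate, estimate variance
for _ in range(100):
    x = a * x + rng.normal(0, np.sqrt(q))         # linear plant evolves
    y = x + rng.normal(0, np.sqrt(r))             # noisy observation
    xhat, P = a * xhat, a * a * P + q             # optimal prediction
    K = P / (P + r)                               # optimal (Kalman) gain
    xhat, P = xhat + K * (y - xhat), (1 - K) * P  # optimal estimation
print(f"estimate {xhat:+.3f} vs true state {x:+.3f} (variance {P:.3f})")
```

In the decision-making adaptation, a threshold test on such a state estimate would drive the single- and two-decision tasks described above.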
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. V.; Yerazunis, S. W.
1973-01-01
Problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars are reported. Problem areas include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis, terrain modeling and path selection; and chemical analysis of specimens. These tasks are summarized: vehicle model design, mathematical model of vehicle dynamics, experimental vehicle dynamics, obstacle negotiation, electrochemical controls, remote control, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, and chromatograph model evaluation and improvement.
A framework for optimizing micro-CT in dual-modality micro-CT/XFCT small-animal imaging system
NASA Astrophysics Data System (ADS)
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Cho, Sang Hyun
2017-09-01
Dual-modality Computed Tomography (CT)/X-ray Fluorescence Computed Tomography (XFCT) can be a valuable tool for imaging and quantifying the organ and tissue distribution of small concentrations of high-atomic-number materials in a small-animal system. In this work, the framework for optimizing the micro-CT imaging component of the dual-modality system is described for two cases: when the micro-CT images are acquired concurrently with XFCT, using the x-ray spectral conditions of XFCT, and when the micro-CT images are acquired sequentially and independently of XFCT. This framework utilizes cascaded systems analysis for task-specific determination of the detectability index using numerical observer models at a given radiation dose, where the radiation dose is determined using Monte Carlo simulations.
Rad-hard Dual-threshold High-count-rate Silicon Pixel-array Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Adam
In this program, a Voxtel-led team demonstrates a full-format (192 x 192, 100-µm pitch, VX-810) high-dynamic-range x-ray photon-counting sensor—the Dual Photon Resolved Energy Acquisition (DUPREA) sensor. Within the Phase II program the following tasks were completed: 1) system analysis and definition of the DUPREA sensor requirements; 2) design, simulation, and fabrication of the full-format VX-810 ROIC design; 3) design, optimization, and fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array in the creation of the optically sensitive focal-plane array; 5) development of an evaluation camera; and 6) electrical and optical characterization of the sensor.
Impulse-induced optimum signal amplification in scale-free networks.
Martínez, Pedro J; Chacón, Ricardo
2016-04-01
Optimizing information transmission across a network is an essential task for controlling and manipulating generic information-processing systems. Here, we show how topological amplification effects in scale-free networks of signaling devices are optimally enhanced when the impulse transmitted by periodic external signals (time integral over two consecutive zeros) is maximum. This is demonstrated theoretically by means of a star-like network of overdamped bistable systems subjected to generic zero-mean periodic signals and confirmed numerically by simulations of scale-free networks of such systems. Our results show that the enhancer effect of increasing values of the signal's impulse is due to a correlative increase of the energy transmitted by the periodic signals, while it is found to be resonant-like with respect to the topology-induced amplification mechanism.
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method considers BB centers in the phantom as an optimized parameter in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and are then directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm⁻¹. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
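A minimal particle-swarm sketch of the optimization step (in the paper the objective evaluates geometry artifacts via an evaluation contrast index; here a quadratic stand-in plays that role, and the PSO constants are conventional defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_particles = 6, 20                 # e.g. 6 BB-coordinate offsets
true_offsets = rng.uniform(-1, 1, dim)   # hidden "correct" calibration

def artifact_index(x):                   # stand-in for the contrast-index objective
    return np.sum((x - true_offsets) ** 2)

pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([artifact_index(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration constants
for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([artifact_index(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best artifact index:", artifact_index(gbest))
print("recovered offsets:", np.round(gbest, 3))
```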
Methods for compressible fluid simulation on GPUs using high-order finite differences
NASA Astrophysics Data System (ADS)
Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer
2017-08-01
We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors.
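For reference, the 7-point, sixth-order centered first-derivative stencil that such solvers apply along each axis, sketched in numpy on a periodic 1-D field (the GPU-specific cache blocking and kernel decomposition are not shown):

```python
import numpy as np

# standard sixth-order centered coefficients for offsets -3..+3
c = np.array([-1 / 60, 3 / 20, -3 / 4, 0.0, 3 / 4, -3 / 20, 1 / 60])

def ddx(f, h):
    """Sixth-order first derivative of a periodic 1-D field."""
    return sum(ck * np.roll(f, 3 - k) for k, ck in enumerate(c)) / h

n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
err = np.max(np.abs(ddx(np.sin(x), x[1] - x[0]) - np.cos(x)))
print(f"max error vs analytic derivative: {err:.2e}")   # shrinks as h**6
```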
Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.
2017-01-01
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
Berglund, Johan; Johansson, Henrik; Lundqvist, Mats; Cederström, Björn; Fredenberg, Erik
2014-01-01
In x-ray imaging, contrast information content varies with photon energy. It is, therefore, possible to improve image quality by weighting photons according to energy. We have implemented and evaluated so-called energy weighting on a commercially available spectral photon-counting mammography system. The technique was evaluated using computer simulations, phantom experiments, and analysis of screening mammograms. The CNR benefit of energy weighting for a number of relevant target-background combinations measured by the three methods fell in the range of 2.2 to 5.2% when using optimal weight factors. This translates to a potential dose reduction at constant CNR in the range of 4.5 to 11%. We expect the choice of weight factor in practical implementations to be straightforward because (1) the CNR improvement was not very sensitive to weight, (2) the optimal weight was similar for all investigated target-background combinations, (3) aluminum/PMMA phantoms were found to represent clinically relevant tasks well, and (4) the optimal weight could be calculated directly from pixel values in phantom images. Reasonable agreement was found between the simulations and phantom measurements. Manual measurements on microcalcifications and automatic image analysis confirmed that the CNR improvement was detectable in energy-weighted screening mammograms. PMID:26158045
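A minimal sketch of projection-based energy weighting for a two-bin photon-counting system: the CNR-maximizing weight is proportional to each bin's contrast-to-variance ratio. The bin statistics below are invented, so the CNR gain shown is larger than the 2.2 to 5.2% reported above.

```python
import numpy as np

contrast = np.array([4.0, 1.5])   # signal difference per energy bin (invented)
sigma = np.array([6.0, 9.0])      # noise per bin (invented)

def cnr(w):
    return np.dot(w, contrast) / np.sqrt(np.dot(w ** 2, sigma ** 2))

w_flat = np.array([1.0, 1.0])     # plain photon counting
w_opt = contrast / sigma ** 2     # CNR-optimal weighting
print(f"CNR flat: {cnr(w_flat):.3f}  CNR optimal: {cnr(w_opt):.3f}")
```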
Nonlinear dynamic analysis and optimal trajectory planning of a high-speed macro-micro manipulator
NASA Astrophysics Data System (ADS)
Yang, Yi-ling; Wei, Yan-ding; Lou, Jun-qiang; Fu, Lei; Zhao, Xiao-wei
2017-09-01
This paper reports the nonlinear dynamic modeling and optimal trajectory planning of a flexure-based macro-micro manipulator dedicated to large-scale, high-speed tasks. In particular, a macro-micro manipulator composed of a servo motor, a rigid arm and a compliant microgripper is focused on. Moreover, both flexure hinges and flexible beams are considered. By combining the pseudo-rigid-body-model method, the assumed mode method and the Lagrange equation, the overall dynamic model is derived. Then, the rigid-flexible coupling characteristics are analyzed by numerical simulations. After that, the microscopic-scale vibration excited by the large-scale motion is reduced through a trajectory planning approach. In particular, a fitness function based on the comprehensive excitation torque of the compliant microgripper is proposed. A reference curve and an interpolation curve using quintic polynomial trajectories are adopted. Afterwards, an improved genetic algorithm is used to identify the optimal trajectory by minimizing the fitness function. Finally, numerical simulations and experiments validate the feasibility and effectiveness of the established dynamic model and the trajectory planning approach. The amplitude of the residual vibration is reduced by approximately 54.9%, and the settling time decreases by 57.1%. Therefore, the operation efficiency and manipulation stability are significantly improved.
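A minimal sketch of the quintic point-to-point trajectory underlying the reference and interpolation curves, with zero boundary velocity and acceleration (the improved-GA optimization itself is not reproduced):

```python
import numpy as np

def quintic(q0, qf, T, t):
    """Quintic trajectory from q0 to qf over duration T."""
    s = np.clip(t / T, 0.0, 1.0)
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5   # zero vel/acc at both ends
    return q0 + (qf - q0) * shape

t = np.linspace(0.0, 1.0, 11)
print(np.round(quintic(0.0, np.pi / 2, 1.0, t), 3))   # smooth 0 -> pi/2 motion
```

Because velocity and acceleration vanish at both endpoints, this profile limits the excitation torque transmitted to the compliant microgripper, which is what the fitness function above penalizes.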
Manipulation and handling processes off-line programming and optimization with use of K-Roset
NASA Astrophysics Data System (ADS)
Gołda, G.; Kampa, A.
2017-08-01
Contemporary trends in the development of efficient, flexible manufacturing systems require practical implementation of modern "Lean production" concepts for maximizing customer value by minimizing all wastes in manufacturing and logistics processes. Every FMS is built on the basis of automated and robotized production cells. Besides flexible CNC machine tools and other equipment, industrial robots are the primary elements of the system. In these studies, the authors look for wastes of time and cost in real robot tasks during manipulation processes. For the optimization of handling and manipulation processes performed by robots, the application of modern off-line programming methods and computer simulation is the best solution, and the only way to minimize unnecessary movements and other instructions. The modelling process of a robotized production cell and the off-line programming of Kawasaki robots in AS-Language are described. The simulation of the robotized workstation is realized with the use of the virtual reality software K-Roset. The authors show the process of improving and optimizing industrial robot programs by minimizing the number of useless manipulator movements and unnecessary instructions, in order to shorten production cycle times. This also reduces the costs of handling, manipulation and the technological process.
The impact on midlevel vision of statistically optimal divisive normalization in V1
Coen-Cagli, Ruben; Schwartz, Odelia
2013-01-01
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
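A minimal numerical sketch of canonical divisive normalization as used for model V1 units; the filter drives, pool weights, and semi-saturation constant are placeholders, and the statistically optimal, image-dependent variant described in the abstract is not shown.

```python
import numpy as np

# Canonical divisive normalization: each unit's squared linear drive is
# divided by a pooled measure of surround activity.
rng = np.random.default_rng(0)
drive = rng.rayleigh(1.0, size=16)        # rectified linear filter outputs
w_surround = np.full(16, 1.0 / 16)        # uniform normalization pool
sigma = 0.5                               # semi-saturation constant

normalized = drive**2 / (sigma**2 + w_surround @ drive**2)
print(normalized.round(3))
```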
Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour; ...
2016-01-06
Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising material choices for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have a substantial influence on the performance of this type of material. Therefore, the optimization of the compression molding process is of great importance to manufacturing practice. An efficient way to achieve an optimized design of this process would be through finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such a simulation remains a challenging task for FE, as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of the woven fabric structure. In the present study, the FE simulation of the compression molding process of resin-based woven fabric composites at the continuum level is conducted, enabled by the implementation of an integrated material modeling methodology in LS-Dyna. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models: a thermal model for the temperature history in the resin, a mechanical model to update the curing-dependent properties of the resin, and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-Dyna are presented as a demonstration, while validations and models with real part geometry are planned in future work.
Image quality comparison between single energy and dual energy CT protocols for hepatic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yuan, E-mail: yuanyao@stanford.edu; Pelc, Nor
Purpose: Multi-detector computed tomography (MDCT) enables volumetric scans in a single breath hold and is clinically useful for hepatic imaging. For simple tasks, conventional single energy (SE) computed tomography (CT) images acquired at the optimal tube potential are known to have better quality than dual energy (DE) blended images. However, liver imaging is complex and often requires imaging of both structures containing iodinated contrast media, where atomic number differences are the primary contrast mechanism, and other structures, where density differences are the primary contrast mechanism. Hence it is conceivable that the broad spectrum used in a dual energy acquisition may be an advantage. In this work we are interested in comparing these two imaging strategies at equal dose and in more complex settings. Methods: We developed numerical anthropomorphic phantoms to mimic realistic clinical CT scans for medium size and large size patients. MDCT images based on the defined phantoms were simulated using various SE and DE protocols at pre- and post-contrast stages. For SE CT, images from 60 kVp through 140 kVp with 10 kVp steps were considered; for DE CT, both 80/140 and 100/140 kVp scans were simulated and linearly blended at the optimal weights. To make a fair comparison, the mAs of each scan was adjusted to match the reference radiation dose (120 kVp, 200 mAs for medium size patients and 140 kVp, 400 mAs for large size patients). Contrast-to-noise ratio (CNR) of liver against other soft tissues was used to evaluate and compare the SE and DE protocols, and multiple pre- and post-contrast liver-tissue pairs were used to define a composite CNR. To help validate the simulation results, we conducted a small clinical study. Eighty-five 120 kVp images and 81 blended 80/140 kVp images were collected and compared through both quantitative image quality analysis and an observer study. Results: In the simulation study, we found that the CNR of the pre-contrast SE image mostly increased with increasing kVp, while for post-contrast imaging 90 kVp or lower yielded higher CNR images, depending on the differential iodine concentration of each tissue. Similar trends were seen in DE blended CNR and those from SE protocols. In the presence of differential iodine concentration (i.e., post-contrast), the CNR curves maximize at lower kVps (80–120), with the peak shifted rightward for larger patients. The combined pre- and post-contrast composite CNR study demonstrated that an optimal SE protocol has better performance than blended DE images, and the optimal tube potential for an SE scan is around 90 kVp for medium size patients and between 90 and 120 kVp for large size patients (although low kVp imaging requires high x-ray tube power to avoid photon starvation). Also, a tin filter added to the high kVp beam is not only beneficial for material decomposition but improves the CNR of the DE blended images as well. The dose-adjusted CNR of the clinical images also showed the same trend, and radiologists favored the SE scans over blended DE images. Conclusions: Our simulation showed that an optimized SE protocol produces up to 5% higher CNR for a range of clinical tasks. The clinical study also suggested 120 kVp SE scans have better image quality than blended DE images. Hence, blended DE images do not have a fundamental CNR advantage over optimized SE images.
Flight Tasks and Metrics to Evaluate Laser Eye Protection in Flight Simulators
2017-07-07
Report No. AFRL-RH-FS-TR-2017-0026. Authors: Thomas K. Kuyk, Peter A. Smith, Solangia… Contract No. FA8650-14-D-6519.
Optimization: Old Dogs and New Tasks
ERIC Educational Resources Information Center
Kaplan, Jennifer J.; Otten, Samuel
2012-01-01
This article introduces an optimization task with a ready-made motivating question that may be paraphrased as follows: "Are you smarter than a Welsh corgi?" The authors present the task along with descriptions of the ways in which two groups of students approached it. These group vignettes reveal as much about the nature of calculus students'…
Self-Efficacy and Interest: Experimental Studies of Optimal Incompetence.
ERIC Educational Resources Information Center
Silvia, Paul J.
2003-01-01
To test the optimal incompetence hypothesis (high self-efficacy lowers task interest), 30 subjects rated interest, perceived difficulty, and confidence of success in different tasks. In study 2, 33 subjects completed a dart-game task in easy, moderate, and difficult conditions. In both, interest was a quadratic function of self-efficacy,…
A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. Funding: 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forchheim, Germany).
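For orientation, the detectability index for the non-prewhitening observer named above can be computed from the system's resolution (MTF) and noise (NPS) descriptions together with the task function; the sketch below uses toy one-dimensional transfer functions, not the cascaded-systems model of the abstract.

```python
import numpy as np

# NPW detectability index on a discrete spatial-frequency grid:
#   d'^2 = [sum (MTF*W)^2]^2 / sum NPS*(MTF*W)^2
f = np.linspace(0.01, 1.0, 200)            # spatial frequency (cycles/mm)
W = np.exp(-(f / 0.3)**2)                  # task function: Gaussian lesion
MTF = np.sinc(f / 2.0)                     # toy system transfer function
NPS = 0.05 * (0.2 + f)                     # toy noise-power spectrum

TW = MTF * W
d_prime = np.sqrt(np.sum(TW**2)**2 / np.sum(NPS * TW**2))
print(f"NPW detectability index d' = {d_prime:.2f}")
```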
Dziuda, Lukasz; Biernacki, Marcin P; Baran, Paulina M; Truszczyński, Olaf E
2014-05-01
In this study, we checked: 1) how the simulator test conditions affect the severity of simulator sickness symptoms; 2) how the severity of simulator sickness symptoms changes over time; and 3) whether the conditions of the simulator test affect the severity of these symptoms in different ways, depending on the time that has elapsed since performing the task in the simulator. We studied 12 men aged 24-33 years (M = 28.8, SD = 3.26) using a truck simulator. The SSQ questionnaire was used to assess the severity of the symptoms of simulator sickness. Each of the subjects performed three 30-minute tasks running along the same route in a driving simulator. Each of these tasks was carried out in a different simulator configuration: A) fixed-base platform with poor visibility; B) fixed-base platform with good visibility; and C) motion-base platform with good visibility. The severity of the simulator sickness symptoms was measured in five consecutive intervals. The analysis showed that the simulator test conditions affect the severity of the simulator sickness symptoms in different ways, depending on the time that has elapsed since performing the task on the simulator. The simulator sickness symptoms persisted at the highest level for the test conditions involving the motion-base platform. Also, when performing the tasks on the motion-base platform, the severity of the simulator sickness symptoms varied depending on the time that had elapsed since performing the task. Specifically, the addition of motion to the simulation increased the oculomotor and disorientation symptoms reported as well as the duration of the after-effects. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (reaction-time task), nor when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
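As a reference point, the statistically optimal fixed-duration cue combination against which such behavior is usually judged can be written in a few lines; the single-cue thresholds below are hypothetical, and the sketch ignores the reaction-time and time-varying-reliability aspects that the paper adds.

```python
import numpy as np

# Reliability-weighted (inverse-variance) cue combination.
sigma_visual, sigma_vestibular = 2.0, 3.5     # single-cue thresholds (deg)

w_vis = sigma_vestibular**2 / (sigma_visual**2 + sigma_vestibular**2)
sigma_combined = np.sqrt(1.0 / (1 / sigma_visual**2 + 1 / sigma_vestibular**2))

print(f"visual weight = {w_vis:.2f}")
print(f"predicted combined threshold = {sigma_combined:.2f} deg "
      f"(better than either cue alone)")
```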
NASA Astrophysics Data System (ADS)
Tran, T.
With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real-time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation in the last 15 years. This framework was applied successfully to address the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to address the SSN sensor tasking domain. We also discuss our validation effort as well as present the result of the AFSCN resource allocation domain using a prototype based on this optimization framework.
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly involves the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find optimal data storage positions in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy-C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182
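A compact sketch of the fuzzy-C-means step behind the clustering-based storage placement: group node locations so each cluster can host one storage node near its membership-weighted centroid. Node coordinates, cluster count, and the fuzzifier are invented, and the energy-cost PSO stage is not shown.

```python
import numpy as np

# Fuzzy C-means: alternate membership and centroid updates.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(40, 2))    # node positions in a 100x100 field
c, m, n = 4, 2.0, pts.shape[0]             # clusters, fuzzifier, points
U = rng.dirichlet(np.ones(c), size=n)      # initial membership matrix (n x c)

for _ in range(50):
    Um = U**m
    centers = (Um.T @ pts) / Um.sum(axis=0)[:, None]
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    U = 1.0 / (d**(2 / (m - 1)) * np.sum(d**(-2 / (m - 1)), axis=1)[:, None])

print("candidate storage positions:\n", centers.round(1))
```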
Watson, Robert A
2014-08-01
To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
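A toy version of the classification stage, assuming symbol-histogram features derived from the symbolic time series; the synthetic sequences, feature choice, and kernel settings are stand-ins, not the study's preprocessing.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Summarize each symbolic hand-motion sequence by its symbol histogram and
# classify expert vs novice with an SVM. Sequences here are synthetic.
rng = np.random.default_rng(0)

def histogram_features(seq, n_symbols=8):
    return np.bincount(seq, minlength=n_symbols) / len(seq)

expert = [rng.choice(8, 200, p=np.r_[[0.3], [0.1] * 7]) for _ in range(24)]
novice = [rng.choice(8, 200) for _ in range(24)]
X = np.array([histogram_features(s) for s in expert + novice])
y = np.array([1] * 24 + [0] * 24)

clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```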
Cunningham, C E; Siegel, L S
1987-06-01
Groups of 30 ADD-H boys and 90 normal boys were divided into 30 mixed dyads composed of a normal and an ADD-H boy, and 30 normal dyads composed of 2 normal boys. Dyads were videotaped interacting in 15-minute free-play, 15-minute cooperative task, and 15-minute simulated classroom settings. Mixed dyads engaged in more controlling interaction than normal dyads in both free-play and simulated classroom settings. In the simulated classroom, mixed dyads completed fewer math problems and were less compliant with the commands of peers. ADD-H children spent less simulated classroom time on task and scored lower on drawing tasks than normal peers. Older dyads proved less controlling, more compliant with peer commands, more inclined to play and work independently, less active, and more likely to remain on task during the cooperative task and simulated classroom settings. Results suggest that the ADD-H child prompts a more controlling, less cooperative pattern of responses from normal peers.
Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode
NASA Astrophysics Data System (ADS)
Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.
2012-12-01
Satellite-based Earth observation is now a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on space technology because of satellites' wide coverage, timeliness, and freedom from area and national boundaries. At the same time, the wide use of satellites, sensors, relay satellites and ground receiving stations presents the ground control system with great challenges, so making the best use of satellite resources has become an important problem for the ground control system. Satellite scheduling distributes resources to tasks without conflict so as to complete as many tasks as possible and meet user requirements, subject to the constraints of the satellites, sensors and ground receiving stations. Tasks can be divided by size into point tasks and area tasks; this paper considers only point targets. We first describe the satellite scheduling problem, briefly introduce the underlying theory, analyze the resource and task constraints, and outline the input and output flow of the scheduling process. On this basis, we put forward a scheduling model, a multi-variable optimization model for the multi-satellite, point-target task on swinging mode, which transforms the scheduling problem into a parametric optimization problem in which the parameter to be optimized is the swinging angle of every time window. With a view to efficiency and accuracy, several issues in satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of the activity sequence map, which separates the choice of an activity from its start time, together with three neighborhood operators to search the solution space; the front and back movement remaining times are used to analyze the feasibility of generating solutions from the neighborhood operators. Finally, a genetic algorithm for the problem is presented, with population initialization, crossover, mutation, individual evaluation, collision-decrease, selection and collision-elimination operators designed in the paper. The scheduling results and a simulation of a practical example with 5 satellites and 100 point targets on swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25. The results show that the model and the algorithm are more effective than their counterparts without swinging mode.
NASA Astrophysics Data System (ADS)
Leśko, Michał; Bujalski, Wojciech
2017-12-01
The aim of this document is to present the topic of modeling district heating systems in order to enable optimization of their operation, with special focus on thermal energy storage in the pipelines. Two mathematical models for simulation of transient behavior of district heating networks have been described, and their results have been compared in a case study. The operational optimization in a DH system, especially if this system is supplied from a combined heat and power plant, is a difficult and complicated task. Finding a global financial optimum requires considering long periods of time and including thermal energy storage possibilities into consideration. One of the most interesting options for thermal energy storage is utilization of thermal inertia of the network itself. This approach requires no additional investment, while providing significant possibilities for heat load shifting. It is not feasible to use full topological models of the networks, comprising thousands of substations and network sections, for the purpose of operational optimization with thermal energy storage, because such models require long calculation times. In order to optimize planned thermal energy storage actions, it is necessary to model the transient behavior of the network in a very simple way - allowing for fast and reliable calculations. Two approaches to building such models have been presented. Both have been tested by comparing the results of simulation of the behavior of the same network. The characteristic features, advantages and disadvantages of both kinds of models have been identified. The results can prove useful for district heating system operators in the near future.
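A minimal plug-flow sketch of the transient behavior such simplified models capture: a supply-temperature pulse travels along the pipe with the flow, so the pipeline itself stores heat. Pipe data and the charging schedule below are invented, not taken from the case study.

```python
import numpy as np

length, n_cells = 5000.0, 50              # pipe length (m), discretization
cell_len = length / n_cells               # 100 m per cell
velocity, dt = 2.0, 50.0                  # flow speed (m/s), time step (s)
T_pipe = np.full(n_cells, 80.0)           # initial water temperature (C)

outlet = []
for step in range(300):
    T_supply = 90.0 if 60 <= step < 132 else 80.0   # one-hour charging pulse
    shift = int(round(velocity * dt / cell_len))    # cells advected per step
    T_pipe = np.concatenate([np.full(shift, T_supply), T_pipe[:-shift]])
    outlet.append(T_pipe[-1])

delay = (outlet.index(90.0) - 60) * dt
print(f"front transit time ~{delay:.0f} s "
      f"(length/velocity = {length / velocity:.0f} s)")
```

The delay between charging the supply and seeing the warm front at the outlet is exactly the storage effect that heat-load shifting exploits.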
Simulation for learning and teaching procedural skills: the state of the science.
Nestel, Debra; Groom, Jeffrey; Eikeland-Husebø, Sissel; O'Donnell, John M
2011-08-01
Simulation is increasingly used to support learning of procedural skills. Our panel was tasked with summarizing the "best evidence." We addressed the following question: To what extent does simulation support learning and teaching in procedural skills? We conducted a literature search from 2000 to 2010 using Medline, CINAHL, ERIC, and PSYCHINFO databases. Inclusion criteria were established and then data extracted from abstracts according to several categories. Although secondary sources of literature were sourced from key informants and participants at the "Research Consensus Summit: State of the Science," they were not included in the data extraction process but were used to inform discussion. Eighty-one of 1,575 abstracts met inclusion criteria. The uses of simulation for learning and teaching procedural skills were diverse. The most commonly reported simulator type was manikins (n = 17), followed by simulated patients (n = 14), anatomic simulators (eg, part-task) (n = 12), and others. For research design, most abstracts (n = 52) were at Level IV of the National Health and Medical Research Council classification (ie, case series, posttest, or pretest/posttest, with no control group, narrative reviews, and editorials). The most frequent Best Evidence Medical Education ranking was for conclusions probable (n = 37). Using the modified Kirkpatrick scale for impact of educational intervention, the most frequent classification was for modification of knowledge and/or skills (Level 2b) (n = 52). Abstracts assessed skills (n = 47), knowledge (n = 32), and attitude (n = 15) with the majority demonstrating improvements after simulation-based interventions. Studies focused on immediate gains and skills assessments were usually conducted in simulation. The current state of the science finds that simulation usually leads to improved knowledge and skills. Learners and instructors express high levels of satisfaction with the method. While most studies focus on short-term gains attained in the simulation setting, a small number support the transfer of simulation learning to clinical practice. Further study is needed to optimize the alignment of learner, instructor, simulator, setting, and simulation for learning and teaching procedural skills. Instructional design and educational theory, contextualization, transferability, accessibility, and scalability must all be considered in simulation-based education programs. More consistently, robust research designs are required to strengthen the evidence.
Davis, Bradley; Welch, Katherine; Walsh-Hart, Sharon; Hanseman, Dennis; Petro, Michael; Gerlach, Travis; Dorlac, Warren; Collins, Jocelyn; Pritts, Timothy
2014-08-01
Critical Care Air Transport Teams (CCATTs) are a critical component of the United States Air Force evacuation paradigm. This study was conducted to assess the incidence of task saturation in simulated CCATT missions and to determine if there are predictable performance domains. Sixteen CCATTs were studied over a 6-month period. Performance was scored using a tool assessing eight domains of performance. Teams were also assessed during critical events to determine the presence or absence of task saturation and its impact on patient care. Sixteen simulated missions were reviewed and 45 crisis events identified. Task saturation was present in 22/45 (49%) of crisis events. Scoring demonstrated that task saturation was associated with poor performance in teamwork (odds ratio [OR] = 1.96), communication (OR = 2.08), and mutual performance monitoring (OR = 1.9), but not maintenance of guidelines, task management, procedural skill, and equipment management. We analyzed the effect of task saturation on adverse patient outcomes during crisis events. Adverse outcomes occurred more often when teams were task saturated as compared to non-task-saturated teams (91% vs. 23%; RR 4.1, p < 0.0001). Task saturation is observed in simulated CCATT missions. Nontechnical skills correlate with task saturation. Task saturation is associated with worsening physiologic derangements in simulated patients. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Reconfigurable Software for Controlling Formation Flying
NASA Technical Reports Server (NTRS)
Mueller, Joseph B.
2006-01-01
Software for a system to control the trajectories of multiple spacecraft flying in formation is being developed to reflect underlying concepts of (1) a decentralized approach to guidance and control and (2) reconfigurability of the control system, including reconfigurability of the software and of control laws. The software is organized as a modular network of software tasks. The computational load for both determining relative trajectories and planning maneuvers is shared equally among all spacecraft in a cluster. The flexibility and robustness of the software are apparent in the fact that tasks can be added, removed, or replaced during flight. In a computational simulation of a representative formation-flying scenario, it was demonstrated that the following are among the services performed by the software: Uploading of commands from a ground station and distribution of the commands among the spacecraft, Autonomous initiation and reconfiguration of formations, Autonomous formation of teams through negotiations among the spacecraft, Working out details of high-level commands (e.g., shapes and sizes of geometrically complex formations), Implementation of a distributed guidance law providing autonomous optimization and assignment of target states, and Implementation of a decentralized, fuel-optimal, impulsive control law for planning maneuvers.
Weller, Jennifer M; Janssen, Anna L; Merry, Alan F; Robinson, Brian
2008-04-01
We placed anaesthesia teams into a stressful environment in order to explore interactions between members of different professional groups and to investigate their perspectives on the impact of these interactions on team performance. Ten anaesthetists, 5 nurses and 5 trained anaesthetic assistants each participated in 2 full-immersion simulations of critical events using a high-fidelity computerised patient simulator. Their perceptions of team interactions were explored through questionnaires and semi-structured interviews. Written questionnaire data and interview transcriptions were entered into N6 qualitative software. Data were analysed by 2 investigators for emerging themes and coded to produce reports on each theme. We found evidence of limited understanding of the roles and capabilities of team members across professional boundaries, different perceptions of appropriate roles and responsibilities for different members of the team, limited sharing of information between team members and limited team input into decision making. There was a perceived impact on task distribution and the optimal utilisation of resources within the team. Effective management of medical emergencies depends on optimal team function. We have identified important factors affecting interactions between different health professionals in the anaesthesia team, and their perceived influences on team function. This provides evidence on which to build appropriate and specific strategies for interdisciplinary team training in operating theatre staff.
Stock price change rate prediction by utilizing social network activities.
Deng, Shangkun; Mitsubuchi, Takashi; Sakurai, Akito
2014-01-01
Predicting stock price change rates for providing valuable information to investors is a challenging task. Individual participants may express their opinions in social network service (SNS) before or after their transactions in the market; we hypothesize that stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of just stock market activities. The hypothesis is tested by accuracy of predictions as well as performance of simulated trading because success or failure of prediction is better measured by profits or losses the investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models that are expressed in a multiple kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and values of three well-known overbought and oversold technical indicators. Accumulated return and Sharpe ratio were used to test the goodness of performance of the simulated trading. Experimental results show that our proposed model performed better than other models including ones using state of the art techniques.
Stock Price Change Rate Prediction by Utilizing Social Network Activities
Mitsubuchi, Takashi; Sakurai, Akito
2014-01-01
Predicting stock price change rates for providing valuable information to investors is a challenging task. Individual participants may express their opinions in social network service (SNS) before or after their transactions in the market; we hypothesize that stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of just stock market activities. The hypothesis is tested by accuracy of predictions as well as performance of simulated trading because success or failure of prediction is better measured by profits or losses the investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models that are expressed in a multiple kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and values of three well-known overbought and oversold technical indicators. Accumulated return and Sharpe ratio were used to test the goodness of performance of the simulated trading. Experimental results show that our proposed model performed better than other models including ones using state of the art techniques. PMID:24790586
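A toy rendering of the rule-fusion idea on synthetic data: act only when a (here, noisy) return prediction and an overbought/oversold indicator agree, with the thresholds playing the role of the GA-tuned parameters. This is not the paper's MKL pipeline, and the indicator is a simplified RSI.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)               # synthetic daily returns
prices = 100 * np.exp(np.cumsum(returns))

def rsi(p, n=14):
    # Simplified moving-average RSI; enough for the illustration.
    diff = np.diff(p, prepend=p[0])
    up = np.convolve(np.clip(diff, 0, None), np.ones(n) / n, mode="same")
    dn = np.convolve(np.clip(-diff, 0, None), np.ones(n) / n, mode="same")
    return 100 * up / (up + dn + 1e-9)

pred = returns + rng.normal(0.0, 0.01, 500)        # noisy return "prediction"
buy = (pred > 0.002) & (rsi(prices) < 40)          # thresholds a GA would tune
strat = np.where(buy[:-1], returns[1:], 0.0)       # trade on the next day
sharpe = strat.mean() / (strat.std() + 1e-9) * np.sqrt(252)
print(f"accumulated return: {strat.sum():+.3f}, annualized Sharpe: {sharpe:.2f}")
```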
2009-09-01
Wireless Sensor Network (WSN) Simulator. Research personnel: Dr. Ali Abu-El Humos. Task 1: literature review and problem definition. Reference: S. Dulman, P. Havinga, "A Simulation Template for Wireless Sensor Networks," Supplement of the Sixth International Symposium on Autonomous…
Measuring Pilot Workload in a Moving-base Simulator. Part 2: Building Levels of Workload
NASA Technical Reports Server (NTRS)
Kantowitz, B. H.; Hart, S. G.; Bortolussi, M. R.; Shively, R. J.; Kantowitz, S. C.
1984-01-01
Studies of pilot behavior in flight simulators often use a secondary task as an index of workload. It is routine to regard flying as the primary task and some less complex task as the secondary task. While this assumption is quite reasonable for most secondary tasks used to study mental workload in aircraft, treating the flight of a simulator through some carefully crafted flight scenario as a unitary task is less justified. The present research acknowledges that total mental workload depends upon the specific nature of the sub-tasks that a pilot must complete. As a first approximation, flight tasks were divided into three levels of complexity. The simplest level (called the Base Level) requires elementary maneuvers that do not utilize all the degrees of freedom of which an aircraft, or a moving-base simulator, is capable. The second level (called the Paired Level) requires the pilot to simultaneously execute two Base Level tasks. The third level (called the Complex Level) imposes three simultaneous constraints upon the pilot.
Design and multi-physics optimization of rotary MRF brakes
NASA Astrophysics Data System (ADS)
Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan
2018-03-01
Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase, and the execution speed becomes too slow to reach an optimized solution. This paper therefore proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference from conventional PSO is that the original single population is split into several subpopulations according to a division of labor; the distribution of tasks and the transfer of information between groups are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method showed better performance than conventional PSO and provided small, lightweight, high-impedance rotary MRF brake designs.
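A sketch of the division-of-labor idea on a generic test function standing in for the multi-physics evaluation; the subpopulation count, coefficients, and migration rule are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                         # stand-in for the expensive evaluation
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

n_sub, n_part, dim = 4, 10, 6        # subpopulations, particles, dimensions
pos = rng.uniform(-5, 5, (n_sub, n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), cost(pos)

for it in range(200):
    gbest = pbest[np.arange(n_sub), pbest_f.argmin(axis=1)]  # per-subpop best
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest[:, None] - pos)
    pos = np.clip(pos + vel, -5, 5)
    f = cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    if it % 50 == 49:                # migration: broadcast the overall best
        i, j = np.unravel_index(pbest_f.argmin(), pbest_f.shape)
        pos[:, 0] = pbest[i, j]

print("best cost found:", pbest_f.min().round(3))
```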
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
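The report's graphical example translates directly into a small linear program; the sketch below uses invented benefit, time, and budget coefficients.

```python
from scipy.optimize import linprog

# Choose how many "regular" (x1) and "severe" (x2) patients to treat to
# maximize health benefit under clinic-time and budget constraints:
#   maximize  2*x1 + 7*x2              (benefit per patient treated)
#   s.t.      1*x1 +   3*x2 <= 40      (clinician hours available)
#           100*x1 + 400*x2 <= 5000    (budget)
#             x1, x2 >= 0
res = linprog(c=[-2, -7],                      # linprog minimizes, so negate
              A_ub=[[1, 3], [100, 400]],
              b_ub=[40, 5000],
              bounds=[(0, None), (0, None)])
print(res.x, "benefit =", -res.fun)            # mixed optimum: treat both
```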
Ma, Ning; Yu, Angela J
2016-01-01
Inhibitory control, the ability to stop or modify preplanned actions under changing task conditions, is an important component of cognitive function. Two lines of models of inhibitory control have previously been proposed for human response in the classical stop-signal task, in which subjects must inhibit a default go response upon presentation of an infrequent stop signal: (1) the race model, which posits two independent go and stop processes that race to determine the behavioral outcome, go or stop; and (2) an optimal decision-making model, which posits that the observer decides whether and when to go based on continually (Bayesian) updated information about both the go and stop stimuli. In this work, we probe the relationship between go and stop processing by explicitly manipulating the discrimination difficulty of the go stimulus. While the race model assumes the go and stop processes are independent, so that go stimulus discriminability should not affect stop stimulus processing, we simulate the optimal model to show that it predicts that harder go discrimination should result in longer go reaction time (RT), a lower stop error rate, and a faster stop-signal RT. We then present novel behavioral data that validate these model predictions. The results thus favor a fundamentally inseparable account of go and stop processing, in a manner consistent with the optimal model and contradicting the independence assumption of the race model. More broadly, our findings contribute to the growing evidence that the computations underlying inhibitory control are systematically modulated by cognitive influences in a Bayes-optimal manner, thus opening new avenues for interpreting neural responses underlying inhibitory control.
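For contrast, the independence assumption of the race model can be simulated in a few lines: go-stimulus difficulty would shift the go RT distribution but, by assumption, leave the stop latency untouched. The distributions and parameters below are arbitrary.

```python
import numpy as np

# Independent race: a go finish time races a stop finish time that starts
# at the stop-signal delay (SSD); stopping fails if the go process wins.
rng = np.random.default_rng(0)
n = 100_000
go_rt = rng.lognormal(mean=np.log(0.45), sigma=0.2, size=n)   # seconds
ssrt = 0.20                                                   # stop latency

for ssd in (0.10, 0.20, 0.30):
    p_fail = np.mean(go_rt < ssd + ssrt)
    print(f"SSD={ssd:.2f}s -> P(failed stop)={p_fail:.2f}")
```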
Caffeine dosing strategies to optimize alertness during sleep loss.
Vital-Lopez, Francisco G; Ramakrishnan, Sridhar; Doty, Tracy J; Balkin, Thomas J; Reifman, Jaques
2018-05-28
Sleep loss, which affects about one-third of the US population, can severely impair physical and neurobehavioural performance. Although caffeine, the most widely used stimulant in the world, can mitigate these effects, currently there are no tools to guide the timing and amount of caffeine consumption to optimize its benefits. In this work, we provide an optimization algorithm, suited for mobile computing platforms, to determine when and how much caffeine to consume, so as to safely maximize neurobehavioural performance at the desired time of the day, under any sleep-loss condition. The algorithm is based on our previously validated Unified Model of Performance, which predicts the effect of caffeine consumption on a psychomotor vigilance task. We assessed the algorithm by comparing the caffeine-dosing strategies (timing and amount) it identified with the dosing strategies used in four experimental studies, involving total and partial sleep loss. Through computer simulations, we showed that the algorithm yielded caffeine-dosing strategies that enhanced performance of the predicted psychomotor vigilance task by up to 64% while using the same total amount of caffeine as in the original studies. In addition, the algorithm identified strategies that resulted in equivalent performance to that in the experimental studies while reducing caffeine consumption by up to 65%. Our work provides the first quantitative caffeine optimization tool for designing effective strategies to maximize neurobehavioural performance and to avoid excessive caffeine consumption during any arbitrary sleep-loss condition. © 2018 The Authors. Journal of Sleep Research published by John Wiley & Sons Ltd on behalf of European Sleep Research Society.
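In the same spirit (but not using the authors' validated Unified Model of Performance), a brute-force search over dose times illustrates the optimization: maximize the worst predicted alertness in a work window. The alertness and caffeine-effect curves below are invented.

```python
import itertools
import numpy as np

# Toy alertness model: linear decline under sleep loss plus a decaying
# boost from each 200 mg caffeine dose (at most two doses).
hours = np.arange(0, 24)
baseline = 70 - 1.5 * hours                    # decline under sleep loss

def caffeine_boost(t, dose_t):
    dt = t - dose_t
    return np.where(dt >= 0, 25 * np.exp(-dt / 5.0), 0.0)

work = (hours >= 8) & (hours <= 20)            # window to protect
best = max(itertools.combinations(range(0, 21, 2), 2),
           key=lambda ds: (baseline + sum(caffeine_boost(hours, d)
                                          for d in ds))[work].min())
print("best dose times (h):", best)
```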
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
Using Ada Tasks to Simulate Operating Equipment
NASA Technical Reports Server (NTRS)
DeAcetis, Louis A.; Schmidt, Oron; Krishen, Kumar
1990-01-01
A method of simulating equipment using Ada tasks is discussed. Individual units of equipment are coded as concurrently running tasks that monitor and respond to input signals. This technique has been used in a simulation of the space-to-ground Communications and Tracking subsystem of Space Station Freedom.
Using Ada tasks to simulate operating equipment
NASA Technical Reports Server (NTRS)
Deacetis, Louis A.; Schmidt, Oron; Krishen, Kumar
1990-01-01
A method of simulating equipment using Ada tasks is discussed. Individual units of equipment are coded as concurrently running tasks that monitor and respond to input signals. This technique has been used in a simulation of the space-to-ground Communications and Tracking subsystem of Space Station Freedom.
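A rough Python analogue of the technique (threads in place of Ada tasks): each equipment unit runs concurrently, monitoring an input queue and responding to signals. The unit and signal names are invented for illustration.

```python
import queue
import threading

# Each equipment unit is a concurrently running worker that monitors an
# input queue and responds to signals, mirroring an Ada task.
class EquipmentUnit(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name, self.inbox = name, queue.Queue()

    def run(self):                       # loop forever, reacting to inputs
        while True:
            signal = self.inbox.get()
            if signal == "shutdown":
                break
            print(f"{self.name}: handling {signal}")

transmitter = EquipmentUnit("Ku-band transmitter")
transmitter.start()
transmitter.inbox.put("carrier_on")
transmitter.inbox.put("shutdown")
transmitter.join()
```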
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Ghiasi, Mohammad Sadegh; Arjmand, Navid; Boroushaki, Mehrdad; Farahmand, Farzam
2016-03-01
A six-degree-of-freedom musculoskeletal model of the lumbar spine was developed to predict the activity of trunk muscles during light, moderate and heavy lifting tasks in standing posture. The model was formulated into a multi-objective optimization problem, minimizing the sum of the cubed muscle stresses and maximizing the spinal stability index. Two intelligent optimization algorithms, i.e., the vector evaluated particle swarm optimization (VEPSO) and nondominated sorting genetic algorithm (NSGA), were employed to solve the optimization problem. The optimal solution for each task was then found in the way that the corresponding in vivo intradiscal pressure could be reproduced. Results indicated that both algorithms predicted co-activity in the antagonistic abdominal muscles, as well as an increase in the stability index when going from the light to the heavy task. For all of the light, moderate and heavy tasks, the muscles' activities predictions of the VEPSO and the NSGA were generally consistent and in the same order of the in vivo electromyography data. The proposed methodology is thought to provide improved estimations for muscle activities by considering the spinal stability and incorporating the in vivo intradiscal pressure data.
NASA Astrophysics Data System (ADS)
Osei, Richard
There are many problems associated with operating a data center, including data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and rising energy costs. Energy cost differs by location and, at most locations, fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers gain longer-lasting servers and equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Approaches that data centers have used to reduce energy costs include dynamically switching servers on and off based on the number of users and predefined conditions, environmental monitoring sensors, and dynamic voltage and frequency scaling (DVFS), which lets processors run at different combinations of frequencies and voltages. This thesis presents another method by which energy cost at data centers could be reduced: the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. Front portals, which handle users' requests, are used as ants to find cost-effective assignments of user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and that its performance increases as the input data grow. In a simulation with 3 geo-distributed data centers and users' resource requests ranging from 25,000 to 25,000,000, the ACO algorithm reduced energy cost by an average of $.70 per second. The algorithm has proven to work as an alternative or improvement for reducing energy cost in geo-distributed data centers.
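A compact ACO sketch for the assignment flavor of the problem: ants build request-to-server assignments biased by pheromone, and the best-so-far assignment reinforces its trail. Prices, loads, capacities, and penalty weights are invented, not the thesis's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_req, n_srv = 12, 3
price = np.array([0.9, 1.2, 0.7])            # energy price per data center
load = rng.uniform(1, 5, n_req)              # energy units per request
cap = 15.0                                   # per-server load capacity
tau = np.ones((n_req, n_srv))                # pheromone trails

def build():
    p = tau / tau.sum(axis=1, keepdims=True)
    return np.array([rng.choice(n_srv, p=p[i]) for i in range(n_req)])

def cost(a):
    over = sum(max(load[a == s].sum() - cap, 0.0) for s in range(n_srv))
    return float(np.sum(load * price[a]) + 10.0 * over)  # penalize overload

best, best_cost = None, np.inf
for it in range(100):
    for _ in range(10):                      # 10 ants per iteration
        a = build()
        c = cost(a)
        if c < best_cost:
            best, best_cost = a, c
    tau *= 0.9                               # evaporation
    tau[np.arange(n_req), best] += 1.0 / best_cost   # best-so-far deposit
print(f"best energy cost: {best_cost:.2f}", best)
```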
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems.
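For orientation, a simple HEFT-style list-scheduling baseline for the same DAG-on-heterogeneous-nodes problem (not TMSCRO itself, and ignoring communication costs): rank tasks by upward rank, then place each on the node giving the earliest finish time. The task graph and execution times are invented.

```python
import numpy as np

cost = np.array([[2, 3], [4, 2], [3, 3], [5, 2]])   # task x node exec times
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}           # DAG edges

def upward_rank(t):
    # Mean execution time plus the longest downstream rank.
    base = cost[t].mean()
    return base + max((upward_rank(s) for s in succ[t]), default=0.0)

order = sorted(range(len(cost)), key=upward_rank, reverse=True)
free = np.zeros(2)                                   # node-ready times
finish = {}
for t in order:                                      # preds always come first
    ready = max((finish[p] for p, ss in succ.items() if t in ss), default=0.0)
    node = int(np.argmin(np.maximum(free, ready) + cost[t]))
    finish[t] = max(free[node], ready) + cost[t, node]
    free[node] = finish[t]
print("makespan:", max(finish.values()))
```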
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
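A classical simulated-annealing baseline for such Sherrington-Kirkpatrick instances (the kind of optimized classical competitor mentioned above), with random ±1 couplings and single-spin-flip Metropolis moves; the size and schedule are illustrative.

```python
import numpy as np

# SK instance: E(s) = -sum_{i<j} J_ij s_i s_j with random +/-1 couplings.
rng = np.random.default_rng(0)
n = 32
J = np.triu(rng.choice([-1, 1], (n, n)), k=1)
J = J + J.T                                  # symmetric, zero diagonal

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], n)
for beta in np.linspace(0.1, 3.0, 3000):     # annealing schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)               # cost of flipping spin i
    if dE < 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]
print("final energy per spin:", energy(s) / n)
```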
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while minimizing the total cost and maximizing overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
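The Pareto-front reporting step that both metaheuristics feed into can be illustrated with a small Python filter over (cost, reliability) pairs; this is a generic sketch, not the paper's GA/SA implementation:

```python
def pareto_front(plans):
    """plans: list of (cost, reliability, plan). Minimize cost, maximize reliability."""
    front = []
    for c, r, p in plans:
        dominated = any(c2 <= c and r2 >= r and (c2 < c or r2 > r)
                        for c2, r2, _ in plans)
        if not dominated:
            front.append((c, r, p))
    return front

candidates = [(100, 0.90, "A"), (120, 0.95, "B"), (110, 0.88, "C")]
print(pareto_front(candidates))   # C is dominated by A and drops out
```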
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
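The core idea behind the CCS optimization, reusing signature verifications shared across updates, can be sketched as follows; verify_signature is a hypothetical stand-in for the actual BGPSEC cryptographic check, and the unbounded cache is deliberately naive (the paper studies real cache management schemes):

```python
verified_cache = {}   # (segment, signature) -> bool

def verify_segment(segment, signature, verify_signature):
    """Memoized verification, so repeated path segments are checked once."""
    key = (segment, signature)
    if key not in verified_cache:
        verified_cache[key] = verify_signature(segment, signature)
    return verified_cache[key]

def validate_update(signed_segments, verify_signature):
    """An update validates only if every signed path segment verifies."""
    return all(verify_segment(seg, sig, verify_signature)
               for seg, sig in signed_segments)

ok = validate_update([(b"AS65001", b"sig1"), (b"AS65002", b"sig2")],
                     lambda seg, sig: True)   # dummy verifier for illustration
```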
Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems
Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.
2014-01-01
Products such as cars, trucks, and heavy machinery are assembled on two-sided assembly lines. Assembly line balancing has significant impact on the performance and productivity of flow-line manufacturing systems and has been an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which the tasks are to be assigned to the L side, the R side, or either side (denoted E). Two objectives, minimum number of workstations and minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solving a multiobjective optimization problem: the first combines all the objectives into a single composite function or moves all but one objective to the constraint set; the second determines the Pareto optimal solution set. This paper proposes two heuristics to evolve the optimal Pareto front for the TALBP under consideration: an Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size and a Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems, and their performances are compared on a set of test problems. PMID:24790568
Resource planning and scheduling of payload for satellite with particle swarm optimization
NASA Astrophysics Data System (ADS)
Li, Jian; Wang, Cheng
2007-11-01
Resource planning and scheduling of the payload is a key technology for realizing automated control of an earth-observing satellite with limited on-board resources; it arranges the working states of the various payloads to carry out missions by optimizing the use of resources. The scheduling task is a difficult constrained optimization problem with varied and changing requests and constraints. Based on an analysis of the satellite's functions and the payload's resource constraints, a proactive planning and scheduling strategy based on the availability of consumable and replenishable resources in time order is introduced, along with dividing the planning and scheduling period into several pieces. A particle swarm optimization algorithm with adaptive mutation operator selection is proposed to address the problem: the swarm is divided into groups with different probabilities of employing various mutation operators, namely differential evolution, Gaussian, and random mutation. The probabilities are adjusted adaptively by comparing the effectiveness of the groups so as to select a proper operator. The simulation results show the feasibility and effectiveness of the method.
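A compact Python sketch of a PSO loop with operator groups is given below; it is a toy with a placeholder objective, it omits the differential-evolution operator, and the adaptive probability update is reduced to a fixed distribution for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):               # placeholder cost; a real use scores a schedule
    return float(np.sum(x**2))

n, dim, iters = 30, 8, 200
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pcost = np.array([objective(p) for p in x])
ops = [lambda p: p + rng.normal(0, 0.5, p.shape),      # Gaussian mutation
       lambda p: p + rng.uniform(-0.5, 0.5, p.shape)]  # random mutation
probs = np.array([0.5, 0.5])    # would be adapted from group success; fixed here

for _ in range(iters):
    g = pbest[np.argmin(pcost)]                        # global best position
    v = (0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x)
         + 1.5 * rng.random((n, dim)) * (g - x))
    x = x + v
    k = rng.choice(len(ops), p=probs)                  # pick a mutation operator
    x[: n // 2] = ops[k](x[: n // 2])                  # mutate one particle group
    cost = np.array([objective(p) for p in x])
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]

print("best cost:", pcost.min())
```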
Task Scheduling in Desktop Grids: Open Problems
NASA Astrophysics Data System (ADS)
Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny
2017-12-01
We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.
Kim, Woojong; Chang, Yongmin; Kim, Jingu; Seo, Jeehye; Ryu, Kwangmin; Lee, Eunkyung; Woo, Minjung; Janelle, Christopher M
2014-12-01
We investigated brain activity in elite, expert, and novice archers during a simulated archery aiming task to determine whether neural correlates of performance differ by skill level. Success in shooting sports depends on complex mental routines just before the shot, when the brain prepares to execute the movement. During functional magnetic resonance imaging, 40 elite, expert, or novice archers aimed at a simulated 70-meter-distant target and pushed a button when they mentally released the bowstring. At the moment of optimal aiming, the elite and expert archers relied primarily on a dorsal pathway, with greatest activity in the occipital lobe, temporoparietal lobe, and dorsolateral pre-motor cortex. The elites showed activity in the supplementary motor area, temporoparietal area, and cerebellar dentate, while the experts showed activity only in the superior frontal area. The novices showed concurrent activity in not only the dorsolateral pre-motor cortex but also the ventral pathways linked to the ventrolateral pre-motor cortex. The novices exhibited broad activity in the superior frontal area, inferior frontal area, ventral prefrontal cortex, primary motor cortex, superior parietal lobule, and primary somatosensory cortex. The more localized neural activity of the elite and expert archers, compared with the novices, permits greater efficiency in the complex processes subserved by these regions. The elite group's high activity in the cerebellar dentate indicates that the cerebellum is involved in automating simultaneous movements by integrating the sensorimotor memory enabled by greater expertise in self-paced aiming tasks. A companion article comments on and generalizes our findings.
NASA Technical Reports Server (NTRS)
Chen, R. T. N.; Talbot, P. D.; Gerdes, R. M.; Dugan, D. C.
1979-01-01
Four basic single-rotor helicopters, one teetering, one articulated, and two hingeless, which were found to have a variety of major deficiencies in a previous fixed-base simulator study, were selected as baseline configurations. The stability and control augmentation systems (SCAS) include simple control augmentation systems to decouple pitch and yaw responses due to collective input and to quicken the pitch and roll control responses; a rate-command SCAS designed to optimize the sensitivity and damping and to decouple the pitch-roll response due to aircraft angular rate; and an attitude-command SCAS. Pilot ratings and commentary are presented as well as performance data related to the task. SCAS control usages and their gain levels associated with specific rotor types are also discussed.
Optimizing the number of steps in learning tasks for complex skills.
Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G
2005-06-01
Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.
Comparing genomes with rearrangements and segmental duplications.
Shao, Mingfu; Moret, Bernard M E
2015-06-15
Large-scale evolutionary events such as genomic rearrangements and segmental duplications form an important part of the evolution of genomes and are widely studied from both biological and computational perspectives. A basic computational problem is to infer these events in the evolutionary history for given modern genomes, a task for which many algorithms have been proposed under various constraints. Algorithms that can handle both rearrangements and content-modifying events such as duplications and losses remain few and limited in their applicability. We study the comparison of two genomes under a model including general rearrangements (through double-cut-and-join) and segmental duplications. We formulate the comparison as an optimization problem and describe an exact algorithm to solve it by using an integer linear program. We also devise a sufficient condition and an efficient algorithm to identify optimal substructures, which can simplify the problem while preserving optimality. Using the optimal substructures with the integer linear program (ILP) formulation yields a practical and exact algorithm to solve the problem. We then apply our algorithm to assign in-paralogs and orthologs (a necessary step in handling duplications) and compare its performance with that of the state-of-the-art method MSOAR, using both simulations and real data. On simulated datasets, our method outperforms MSOAR by a significant margin, and on five well-annotated species, MSOAR achieves high accuracy, yet our method performs slightly better on each of the 10 pairwise comparisons. http://lcbb.epfl.ch/softwares/coser. © The Author 2015. Published by Oxford University Press.
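As a greatly simplified stand-in for the ortholog-assignment step (the paper's full method is an ILP over double-cut-and-join rearrangements plus segmental duplications), gene copies can be matched across two genomes by minimizing a pairwise dissimilarity matrix with the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[0.1, 0.9, 0.8],     # cost[i][j]: dissimilarity between copy i
                 [0.7, 0.2, 0.9],     # in genome A and copy j in genome B
                 [0.8, 0.8, 0.3]])    # (toy values)
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```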
Assessment of simulation fidelity using measurements of piloting technique in flight. II
NASA Technical Reports Server (NTRS)
Ferguson, S. W.; Clement, W. F.; Hoh, R. H.; Cleveland, W. B.
1985-01-01
Two components of the Vertical Motion Simulator (presently being used to assess the fidelity of UH-60A simulation) are evaluated: (1) the dash/quickstop nap-of-the-earth (NOE) piloting task, and (2) the bob-up task. Data from these two flight test experiments are presented which provide information on the effects of reduced visual field of view, variation in scene content and texture, and pure time delay in the closed-loop pilot response. In comparison with task performance results obtained in flight tests, the results from the simulation indicate that the pilot's NOE task performance in the simulator is significantly degraded.
PyNN: A Common Interface for Neuronal Network Simulators
Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre
2008-01-01
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529
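A minimal PyNN script has the shape below; the module and class names follow the current PyNN API, which postdates this 2008 paper (for example, pyNN.brian2 rather than the original Brian backend), so treat the details as indicative rather than exact:

```python
import pyNN.nest as sim   # the same script runs with pyNN.neuron, pyNN.brian2, ...

sim.setup(timestep=0.1)                                     # ms
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
cells = sim.Population(100, sim.IF_cond_exp(), label="cells")
sim.Projection(noise, cells, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))
cells.record("spikes")
sim.run(1000.0)                                             # ms
data = cells.get_data()                                     # Neo data structures
sim.end()
```

Swapping the backend is a one-line change to the import, which is precisely the cross-checking benefit the abstract describes.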
Optimizing Experimental Design for Comparing Models of Brain Function
Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas
2011-01-01
This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach in the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
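A toy version of the imperfect-device problem: each device has a signed distortion, and a subset is sought whose mean distortion is as close to zero as possible, so that opposite-sign errors cancel. The stochastic local search below is a simple centralized baseline, not the paper's multi-agent method:

```python
import numpy as np

rng = np.random.default_rng(2)
eps = rng.normal(0.0, 1.0, 200)            # per-device signed distortions
sel = rng.random(200) < 0.5                # random initial subset

def score(sel):
    return abs(eps[sel].mean()) if sel.any() else np.inf

best = score(sel)
for _ in range(20000):
    i = rng.integers(eps.size)
    sel[i] = not sel[i]                    # try toggling one device
    s = score(sel)
    if s <= best:
        best = s
    else:
        sel[i] = not sel[i]                # revert worsening moves
print("|mean distortion| of selected subset:", best)
```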
A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.
Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng
To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
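The classic closed-form special case behind this line of work, choosing a filter w to maximize a Fisher-like ratio (w^T Sb w) / (w^T Sw w) via a generalized eigenvalue problem, can be sketched on toy data as follows; the paper optimizes Fisher's ratio in feature space rather than this textbook form:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
X1 = rng.normal(0.0, 1.0, (500, 8))        # class-1 epochs (8 "channels", toy)
X2 = rng.normal(0.5, 1.0, (500, 8))        # class-2 epochs with a mean shift
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sb = np.outer(m1 - m2, m1 - m2)            # between-class scatter
Sw = np.cov(X1.T) + np.cov(X2.T)           # within-class scatter
vals, vecs = eigh(Sb, Sw)                  # generalized symmetric eigenproblem
w = vecs[:, -1]                            # filter with the largest ratio
print("Fisher ratio:", (w @ Sb @ w) / (w @ Sw @ w))
```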
NASA Technical Reports Server (NTRS)
Soloway, Donald I.; Alberts, Thomas E.
1989-01-01
It is often proposed that the redundancy in choosing a force distribution for multiple arms grasping a single object should be handled by minimizing a quadratic performance index. The performance index may be formulated in terms of joint torques or in terms of the Cartesian space force/torque applied to the body by the grippers. The former seeks to minimize power consumption while the latter minimizes body stresses. Because the cost functions are related to each other by a joint angle dependent transformation on the weight matrix, it might be argued that either method tends to reduce power consumption, but clearly the joint space minimization is optimal. A comparison of these two options is presented with consideration given to computational cost and power consumption. Simulation results using a two arm robot system are presented to show the savings realized by employing the joint space optimization. These savings are offset by additional complexity, computation time and in some cases processor power consumption.
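The weighted minimum-norm force distribution described above has a standard closed form: minimizing x^T W x subject to the grasp constraint G x = f gives x* = W^{-1} G^T (G W^{-1} G^T)^{-1} f, with the joint-space and Cartesian-space formulations differing only in the choice of W. A minimal numeric sketch (toy two-gripper grasp matrix, net-force constraint only):

```python
import numpy as np

def min_norm_forces(G, f, W):
    """Minimize x^T W x subject to G x = f (W symmetric positive definite)."""
    Wi = np.linalg.inv(W)
    return Wi @ G.T @ np.linalg.solve(G @ Wi @ G.T, f)

G = np.hstack([np.eye(3), np.eye(3)])   # two grippers, net-force balance only
f = np.array([0.0, 0.0, 9.81])          # support the object's weight
W = np.eye(6)                           # equal weighting; a joint-space variant
x = min_norm_forces(G, f, W)            # would build W from the arm Jacobians
print(x)                                # each gripper carries half the load
```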
Li, Chuan; Peng, Juan; Liang, Ming
2014-01-01
Oil debris sensors are effective tools to monitor wear particles in lubricants. For in situ applications, surrounding noise and vibration interferences often distort the oil debris signature of the sensor. Hence extracting oil debris signatures from sensor signals is a challenging task for wear particle monitoring. In this paper we employ the maximal overlap discrete wavelet transform (MODWT) with optimal decomposition depth to enhance the wear particle monitoring capability. The sensor signal is decomposed by the MODWT into different depths for detecting the wear particle existence. To extract the authentic particle signature with minimal distortion, the root mean square deviation of kurtosis value of the segmented signal residue is adopted as a criterion to obtain the optimal decomposition depth for the MODWT. The proposed approach is evaluated using both simulated and experimental wear particles. The results show that the present method can improve the oil debris monitoring capability without structural upgrade requirements. PMID:24686730
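A hedged sketch of the depth-selection idea follows: pywt's stationary wavelet transform (swt) stands in for the MODWT (closely related undecimated transforms, not identical), and the kurtosis of each detail band is used to rank decomposition depths, loosely following the paper's kurtosis-based criterion:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

rng = np.random.default_rng(4)
signal = 0.2 * rng.normal(size=4096)          # background noise and vibration
signal[2000:2020] += np.hanning(20)           # injected wear-particle transient

coeffs = pywt.swt(signal, "db4", level=5)     # list of (approx, detail) pairs
kurts = [kurtosis(cD) for _, cD in coeffs]    # impulsiveness of each detail band
print("kurtosis per returned level:", np.round(kurts, 2))
print("selected level (max kurtosis):", int(np.argmax(kurts)) + 1)
```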
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790
ISPE: A knowledge-based system for fluidization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
Farmer, George D; Janssen, Christian P; Nguyen, Anh T; Brumby, Duncan P
2018-04-01
We test people's ability to optimize performance across two concurrent tasks. Participants performed a number entry task while controlling a randomly moving cursor with a joystick. Participants received explicit feedback on their performance on these tasks in the form of a single combined score. This payoff function was varied between conditions to change the value of one task relative to the other. We found that participants adapted their strategy for interleaving the two tasks, by varying how long they spent on one task before switching to the other, in order to achieve the near maximum payoff available in each condition. In a second experiment, we show that this behavior is learned quickly (within 2-3 min over several discrete trials) and remained stable for as long as the payoff function did not change. The results of this work show that people are adaptive and flexible in how they prioritize and allocate attention in a dual-task setting. However, it also demonstrates some of the limits regarding people's ability to optimize payoff functions. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Alcohol impairment of performance on steering and discrete tasks in a driving simulator
DOT National Transportation Integrated Search
1974-12-01
In this program a simplified laboratory simulator was developed to test two types of tasks used in driving on the open road: a continuous "steering task" to regulate against gust induced disturbances and an intermittent "discrete response task" requi...
NASA Astrophysics Data System (ADS)
Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong
2016-11-01
In order to satisfy real-time and generality requirements, a laser target simulator for a semi-physical simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper-lower computer simulation platform architecture used in most current real-time systems, this system has better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem, combined with a reflective memory network, to ensure real-time performance and to complete real-time tasks such as calculating the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. At the same time, we use LabWindows/CVI to build a graphical interface and to complete the non-real-time tasks in the simulation process, such as man-machine interaction and the display and storage of the simulation data, which run under the Win32 process. Through the design of RTX shared memory and a task scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is completed. The experimental results show that this system has strong real-time performance, high stability, and high simulation accuracy, as well as good human-computer interaction.
Stefanidis, Dimitrios; Hope, William W; Korndorffer, James R; Markley, Sarah; Scott, Daniel J
2010-04-01
Laparoscopic suturing is an advanced skill that is difficult to acquire. Simulator-based skills curricula have been developed that have been shown to transfer to the operating room, but currently available skills curricula need to be optimized. We hypothesized that mastering basic laparoscopic skills first would shorten the learning curve of a more complex laparoscopic task and reduce resource requirements for the Fundamentals of Laparoscopic Surgery suturing curriculum. Medical students (n = 20) with no previous simulator experience were enrolled in an IRB-approved protocol, pretested on the Fundamentals of Laparoscopic Surgery suturing model, and randomized into 2 groups. Group I (n = 10) trained (unsupervised) until proficiency levels were achieved on 5 basic tasks; Group II (n = 10) received no basic training. Both groups then trained (supervised) on the Fundamentals of Laparoscopic Surgery suturing model until previously reported proficiency levels were achieved. Two weeks later, they were retested; retention scores, training parameters, instruction requirements, and cost were compared between groups using t-tests. Baseline characteristics and performance were similar for both groups, and 9 of 10 subjects in each group achieved the proficiency levels. The initial performance on the simulator was better for Group I after basic skills training, and their suturing learning curve was shorter compared with Group II. In addition, Group I required less active instruction. Overall time required to finish the curriculum was similar for both groups, but the Group I training strategy cost less, with a savings of $148 per trainee. Teaching novices basic laparoscopic skills before a more complex laparoscopic task produces substantial cost savings. Additional studies are needed to assess the impact of such integrated curricula on ultimate educational benefit. Copyright (c) 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Optimizing The Number Of Steps In Learning Tasks For Complex Skills
ERIC Educational Resources Information Center
Nadolski, Rob J.; Kirschner, Paul A.; van Merrienboer, Jeroen J.G.
2005-01-01
Background: Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. Aim: The aim of the study is to investigate the relation between the number of…
Task Simulation: Word Processing. Teacher [and Student Manuals], B. Correspondence Secretary.
ERIC Educational Resources Information Center
Burch, Geralyn H.
This task-simulation learning module on being a correspondence secretary in a realty office for secondary and postsecondary teachers and students, the fifth in a series of eleven task and in-basket simulations, was designed to provide individualized instruction in office occupations courses, such as introductory business and typewriting. (Each of…
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, consisting of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the second algorithm combines the first heuristic with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.; Riley, D. R.
1978-01-01
The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition in tracking a target aircraft that oscillates sinusoidally in altitude were used. The results indicate that, in addition to the basic simulator delays, the permissible time delay is about 250 msec for either a tapping task, an adding task, or an audio task, approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary task performance measures, however, differ only for the tapping task. A power spectral-density analysis largely confirms the result obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.
Using cognitive task analysis to develop simulation-based training for medical tasks.
Cannon-Bowers, Jan; Bowers, Clint; Stout, Renee; Ricci, Katrina; Hildabrand, Annette
2013-10-01
Pressures to increase the efficacy and effectiveness of medical training are causing the Department of Defense to investigate the use of simulation technologies. This article describes a comprehensive cognitive task analysis technique that can be used to simultaneously generate training requirements, performance metrics, scenario requirements, and simulator/simulation requirements for medical tasks. On the basis of a variety of existing techniques, we developed a scenario-based approach that asks experts to perform the targeted task multiple times, with each pass probing a different dimension of the training development process. In contrast to many cognitive task analysis approaches, we argue that our technique can be highly cost effective because it is designed to accomplish multiple goals. The technique was pilot tested with expert instructors from a large military medical training command. These instructors were employed to generate requirements for two selected combat casualty care tasks-cricothyroidotomy and hemorrhage control. Results indicated that the technique is feasible to use and generates usable data to inform simulation-based training system design. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.
1972-01-01
The problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars were investigated. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; navigation, terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks were studied: vehicle model design, mathematical modeling of dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement and transport parameter evaluation.
NASA Technical Reports Server (NTRS)
Frederick, D. K.; Lashmet, P. K.; Sandor, G. N.; Shen, C. N.; Smith, E. J.; Yerazunis, S. W.
1972-01-01
Investigation of problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars has been undertaken. Problem areas receiving attention include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis; terrain modeling and path selection; and chemical analysis of specimens. The following specific tasks have been under study: vehicle model design, mathematical modeling of a dynamic vehicle, experimental vehicle dynamics, obstacle negotiation, electromechanical controls, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer sybsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, chromatograph model evaluation and improvement.
A Human-Centered Smart Home System with Wearable-Sensor Behavior Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Jianting; Liu, Ting; Shen, Chao
The smart home has recently attracted much research interest owing to its potential to improve the quality of human life. Obtaining the user's demand is the most important and challenging task for optimal appliance scheduling in a smart home, since it is highly related to the user's unpredictable behavior. In this paper, a human-centered smart home system is proposed to identify user behavior, predict user demand, and schedule the household appliances. First, the sensor data from the user's wearable devices are monitored to profile the user's full-day behavior. Then, an appliance-demand matrix, extracted from the history of appliance load data and user behavior, is constructed to predict the user's demand on the home environment. Two simulations are designed to demonstrate user behavior identification, appliance-demand matrix construction, and generation of optimal appliance scheduling strategies.
D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc
2011-12-01
Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. Copyright © 2011 Elsevier Inc. All rights reserved.
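Generically, the optimization loop being described, simulated annealing over dwell times against precomputed per-dwell dose kernels, looks like the Python sketch below; the kernels, prescription, and quadratic objective are toy stand-ins, not the clinical IPSA objective:

```python
import numpy as np

rng = np.random.default_rng(5)
n_dwell, n_vox = 10, 500
kernels = rng.random((n_dwell, n_vox)) / n_dwell   # dose per unit dwell time,
prescription = np.full(n_vox, 1.0)                 # as if from MC simulation

def cost(t):
    dose = t @ kernels                             # superpose dwell contributions
    return float(np.mean((dose - prescription) ** 2))

t = np.ones(n_dwell)
c = cost(t)
for T in np.geomspace(1e-2, 1e-6, 30000):          # annealing temperature
    t_new = np.clip(t + rng.normal(0, 0.05, n_dwell), 0, None)  # dwell times >= 0
    c_new = cost(t_new)
    if c_new < c or rng.random() < np.exp((c - c_new) / T):
        t, c = t_new, c_new
print("final objective:", c)
```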
Methodology For Evaluation Of Technology Impacts In Space Electric Power Systems
NASA Technical Reports Server (NTRS)
Holda, Julie
2004-01-01
The Analysis and Management branch of the Power and Propulsion Office at NASA Glenn Research Center is responsible for performing complex analyses of the space power and In-Space propulsion products developed by GRC. This work quantifies the benefits of the advanced technologies to support on-going advocacy efforts. The Power and Propulsion Office is committed to understanding how the advancement in space technologies could benefit future NASA missions. They support many diverse projects and missions throughout NASA as well as industry and academia. The area of work that we are concentrating on is space technology investment strategies. Our goal is to develop a Monte Carlo-based tool to investigate technology impacts in space electric power systems. The framework being developed at this stage will be used to set up a computer simulation of a space electric power system (EPS). The outcome is expected to be a probabilistic assessment of critical technologies and potential development issues. We are developing methods for integrating existing spreadsheet-based tools into the simulation tool. Also, work is being done on defining interface protocols to enable rapid integration of future tools. The first task was to evaluate Monte Carlo-based simulation programs for statistical modeling of the EPS model: I learned and evaluated Palisade's @Risk and Risk Optimizer software, utilized its capabilities for the Electric Power System (EPS) model, and also evaluated similar software packages (JMP, SPSS, Crystal Ball, VenSim, Analytica) available from other suppliers. The second task was to develop the framework for the tool, in which we had to define technology characteristics using weighting factors and probability distributions, define the simulation space, and add hard and soft constraints to the model. The third task is to incorporate (preliminary) cost factors into the model. A final task is developing a cross-platform solution of this framework.
Time and frequency constrained sonar signal design for optimal detection of elastic objects.
Hamschin, Brandon; Loughlin, Patrick J
2013-04-01
In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real-world.
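The least-squares flavor of the second approach resembles an alternating-projection (Gerchberg-Saxton-style) iteration: repeatedly impose the target spectral magnitude, then truncate to a short time support. The sketch below uses an arbitrary band shape as the target magnitude, which is an assumption; the paper derives the detection-optimal spectrum instead:

```python
import numpy as np

N, support, iters = 1024, 128, 200
freqs = np.fft.fftfreq(N)
target_mag = ((np.abs(freqs) > 0.10) & (np.abs(freqs) < 0.20)).astype(float)

x = np.random.default_rng(6).normal(size=N)
for _ in range(iters):
    X = np.fft.fft(x)
    X = target_mag * np.exp(1j * np.angle(X))         # impose spectral magnitude
    x = np.real(np.fft.ifft(X))
    conc = np.sum(x[:support] ** 2) / np.sum(x ** 2)  # temporal concentration
    x[support:] = 0.0                                 # impose short time support
print("temporal energy concentration:", round(float(conc), 4))
```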
On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies
Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier
2013-01-01
The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, which is a good alternative for large-scale WSN networks. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582
Rapid convergence of optimal control in NMR using numerically-constructed toggling frames
NASA Astrophysics Data System (ADS)
Coote, Paul; Anklin, Clemens; Massefski, Walter; Wagner, Gerhard; Arthanari, Haribabu
2017-08-01
We present a numerical method for rapidly solving the Bloch equation for an arbitrary time-varying spin-1/2 Hamiltonian. The method relies on fast, vectorized computations such as summation and quaternion multiplication, rather than slow computations such as matrix exponentiation. A toggling frame is constructed in which the Hamiltonian is time-invariant, and therefore has a simple analytical solution. The key insight is that constructing this frame is faster than solving the system dynamics in the original frame. Rapidly solving the Bloch equations for an arbitrary Hamiltonian is particularly useful in the context of NMR optimal control. Optimal control theory can be used to design pulse shapes for a range of tasks in NMR spectroscopy. However, it requires multiple simulations of the Bloch equations at each stage of the algorithm, and for each relevant set of parameters (e.g. chemical shift frequencies). This is typically time consuming. We demonstrate that by working in an appropriate toggling frame, optimal control pulses can be generated much faster. We present a new alternative to the well-known GRAPE algorithm to continuously update the toggling-frame as the optimal pulse is generated, and demonstrate that this approach is extremely fast. The use and benefit of rapid optimal pulse generation is demonstrated for 19F fragment screening experiments.
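The brute-force reference computation that the toggling-frame method accelerates is piecewise-constant Bloch propagation: at each step the magnetization is rotated about the instantaneous effective field. A minimal relaxation-free sketch (pulse amplitude and step size are arbitrary):

```python
import numpy as np

def rotate(m, axis, angle):
    """Rotate vector m about axis by angle (Rodrigues' formula)."""
    a = axis / np.linalg.norm(axis)
    return (m * np.cos(angle) + np.cross(a, m) * np.sin(angle)
            + a * np.dot(a, m) * (1.0 - np.cos(angle)))

def propagate(b1x, b1y, offset, dt):
    """Spin-1/2 magnetization under a shaped pulse, piecewise-constant steps."""
    m = np.array([0.0, 0.0, 1.0])                 # start at thermal equilibrium
    for bx, by in zip(b1x, b1y):
        beff = np.array([bx, by, offset])         # effective field in rad/s
        norm = np.linalg.norm(beff)
        if norm > 0.0:
            m = rotate(m, beff, norm * dt)
    return m

pulse = np.full(500, 2 * np.pi * 250.0)           # constant-amplitude x pulse
print(propagate(pulse, np.zeros(500), offset=0.0, dt=1e-6))  # ~45 degree flip
```

In optimal-control use this propagation must be repeated for every candidate pulse and every chemical-shift offset, which is why a faster toggling-frame solution pays off.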
Liu, Yan-Jun; Tong, Shaocheng
2016-11-01
In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems have a block-triangular multi-input-multi-output pure-feedback structure, i.e., every equation of each subsystem contains both state and input couplings and nonaffine functions. The design objective is to provide a control scheme that not only guarantees the stability of the systems but also achieves optimal control performance. The main contribution of this paper is that optimal performance is achieved for such a class of systems for the first time. Owing to the interactions among subsystems, constructing an optimal control signal is a difficult task. The design ideas are as follows: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function are approximated by an action network and a critic network, respectively; and 3) an optimal control signal is constructed, with the weight update rules designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.
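A highly simplified scalar sketch of the action/critic structure: both networks are linear in their weights over a shared basis and trained by gradient descent, mirroring the structure (not the specifics) of the proposed scheme; the plant, gains, and clipping safeguard are all toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
phi = lambda x: np.array([x, x**2, 1.0])      # shared linear-in-weight basis
wa = rng.normal(0, 0.1, 3)                    # action-network weights
wc = rng.normal(0, 0.1, 3)                    # critic-network weights
gamma, lr, x = 0.95, 0.01, 1.0

for _ in range(5000):
    u = float(wa @ phi(x))                    # actor: control signal
    x_next = float(np.clip(0.8 * x + 0.5 * u, -5, 5))  # toy plant, kept bounded
    r = x_next**2 + 0.1 * u**2                # stage cost
    # critic: one-step temporal-difference update of the utility estimate
    td = r + gamma * float(wc @ phi(x_next)) - float(wc @ phi(x))
    wc += lr * td * phi(x)
    # actor: gradient of one-step cost-to-go w.r.t. u, chained through x_next
    dJ_du = (2 * x_next * 0.5 + 0.2 * u
             + gamma * float(wc @ np.array([1.0, 2 * x_next, 0.0])) * 0.5)
    wa -= lr * dJ_du * phi(x)
    x = x_next
print("state after training:", x)             # typically settles near the origin
```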
Comparing surgical experience with performance on a sinus surgery simulator.
Diment, Laura E; Ruthenbeck, Greg S; Dharmawardana, Nuwan; Carney, A Simon; Woods, Charmaine M; Ooi, Eng H; Reynolds, Karen J
2016-12-01
This study evaluates whether surgical experience influences technical competence using the Flinders sinus surgery simulator, a virtual environment designed to teach nasal endoscopic surgical skills. Ten experienced sinus surgeons (five consultants and five registrars) and 14 novices (seven resident medical officers and seven interns/medical students) completed three simulation tasks using haptic controllers. Task 1 required navigation of the sinuses and identification of six anatomical landmarks, Task 2 required removal of unhealthy tissue while preserving healthy tissue, and Task 3 entailed backbiting within pre-set lines on the uncinate process and microdebriding tissue between the cuts. Novices were compared with experts on a range of measures, using Mann-Whitney U-tests. Novices took longer on all tasks (Task 1: 278%, P < 0.005; Task 2: 112%, P < 0.005; Task 3: 72%, P < 0.005). In Task 1, novices' instruments travelled further than experts' (379%, P < 0.005), and applied greater maximum force (12%, P < 0.05). In Tasks 2 and 3 novices performed more cutting movements to remove the tissue (Task 2: 1500%, P < 0.005; Task 3: 72%, P < 0.005). Experts also completed more of Task 3 (66%, P < 0.05). The study demonstrated the Flinders sinus simulator's construct validity, differentiating between experts and novices with respect to procedure time, instrument distance travelled and number of cutting motions to complete the task. © 2015 Royal Australasian College of Surgeons.
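For readers unfamiliar with the statistic used here, a minimal SciPy example of the novice-versus-expert comparison; the task-completion times are invented, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical task-completion times in seconds (not the study's data).
novices = [312, 295, 340, 288, 301, 322, 310]
experts = [118, 102, 131, 99, 125]
u, p = mannwhitneyu(novices, experts, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```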
Validating the simulation of large-scale parallel applications using statistical characteristics
Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...
2016-03-01
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Lastly, our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
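The abstract does not name the specific statistics used, but the contrast it draws can be illustrated: two runs can agree closely in total execution time while their per-event timing distributions differ, which a distribution-level test detects. In the hedged SciPy sketch below, a Kolmogorov-Smirnov test stands in for whichever fine-grained characteristics the authors actually compare, and the trace data are synthetic:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Hypothetical per-message latencies (us) from a real run and a simulated run.
real = rng.lognormal(mean=3.0, sigma=0.4, size=5000)
sim = rng.lognormal(mean=3.0, sigma=0.6, size=5000)

# Coarse-grained check: percent error of total execution time.
pct_err = abs(real.sum() - sim.sum()) / real.sum() * 100
# Fine-grained check: do the latency distributions actually match?
stat, p = ks_2samp(real, sim)
print(f"total-time error: {pct_err:.1f}%   KS statistic: {stat:.3f} (p = {p:.2g})")
```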
Route complexity and simulated physical ageing negatively influence wayfinding.
Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P
2016-09-01
The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., with more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart rates and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly consumed more energy during this task. Copyright © 2016 Elsevier Ltd. All rights reserved.
OPTIMIZING EXPOSURE MEASUREMENT TECHNIQUES
The research reported in this task description addresses one of a series of interrelated NERL tasks with the common goal of optimizing the predictive power of low cost, reliable exposure measurements for the planned Interagency National Children's Study (NCS). Specifically, we w...
Mouthon, A; Ruffieux, J; Wälchli, M; Keller, M; Taube, W
2015-09-10
Non-physical balance training has been shown to be effective for improving postural control in young people. However, little is known about the potential of mental simulation to increase corticospinal excitability in lower leg muscles. Mental simulation of isolated, voluntary contractions of limb muscles increases corticospinal excitability, but more automated tasks like walking seem to have no or only minor effects on motor-evoked potentials (MEPs) evoked by transcranial magnetic stimulation (TMS). This may be related to the way the mental simulation is performed or to the task itself. Therefore, the present study aimed to clarify how corticospinal excitability is modulated during combined action observation and motor imagery (AO+MI), motor imagery (MI) alone, and passive action observation (AO) of balance tasks. For this purpose, MEPs and H-reflexes were elicited during three different mental simulations: (a) AO+MI, (b) MI and (c) passive AO. For each condition, two balance tasks were evaluated: (1) quiet upright stance (static) and (2) compensating a medio-lateral perturbation while standing on a free-swinging platform (dynamic). AO+MI resulted in the largest facilitation of MEPs, followed by MI and passive AO. MEP facilitation was significantly larger in the dynamic perturbation task than in the static standing task. Interestingly, passive observation resulted in hardly any facilitation, independent of the task. H-reflex amplitudes were not modulated. The current results demonstrate that corticospinal excitability during mental simulation of balance tasks is influenced by both the type of mental simulation and the task difficulty. As H-reflexes and background EMG were not modulated, it may be argued that changes in excitability of the primary motor cortex were responsible for the MEP modulation. From a functional point of view, our findings suggest the best training/rehabilitation effects when combining MI with AO during challenging postural tasks. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Convex Regression with Interpretable Sharp Partitions
Petersen, Ashley; Simon, Noah; Witten, Daniela
2016-01-01
We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set. PMID:27635120
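CRISP's defining feature is that the piecewise-constant fit comes from a single convex program rather than greedy splitting. The sketch below captures that flavor with a fused, total-variation-style penalty on a 2-D grid of two covariates; the actual CRISP penalty is a group lasso on row and column differences, so this is an analogy under simplified assumptions, not the paper's estimator:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 20                                          # grid cells per covariate
truth = np.where(np.add.outer(np.arange(n), np.arange(n)) > n, 2.0, -1.0)
Y = truth + rng.normal(scale=0.5, size=(n, n))  # noisy observations per cell

M = cp.Variable((n, n))                         # blockwise mean surface
lam = 2.0
# Fused penalty on neighboring cells encourages sharp, axis-aligned blocks.
penalty = (cp.sum(cp.abs(M[1:, :] - M[:-1, :])) +
           cp.sum(cp.abs(M[:, 1:] - M[:, :-1])))
prob = cp.Problem(cp.Minimize(cp.sum_squares(Y - M) + lam * penalty))
prob.solve()
print(np.round(M.value, 1))                     # recovers two sharp blocks
```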
Spares Management: Optimizing Hardware Usage for the Space Shuttle Main Engine
NASA Technical Reports Server (NTRS)
Gulbrandsen, K. A.
1999-01-01
The complexity of the Space Shuttle Main Engine (SSME), combined with mounting requirements to reduce operations costs, has increased demands for accurate tracking, maintenance, and projections of SSME assets. The SSME Logistics Team is developing an integrated asset management process. This PC-based tool provides a user-friendly asset database for daily decision making, plus a variable-input hardware usage simulation with complex logic, yielding output that addresses essential asset management issues. Cycle times on critical tasks are significantly reduced. Associated costs have decreased as asset data quality and decision-making capability have increased.
Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goupee, A.; Kimball, R.; de Ridder, E. J.
2015-04-02
In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.
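As a sketch of the calibration loop described above, in which a model's free parameters are tuned by a genetic algorithm until its predictions match experimental data, the following toy example fits a two-parameter stand-in thrust model. The model form, parameter names, bounds and data are all hypothetical; a real BEM calibration would wrap the aerodynamic code in place of model_thrust:

```python
import numpy as np

rng = np.random.default_rng(3)

def model_thrust(params, wind_speed):
    # Stand-in for a BEM code: hypothetical 2-parameter thrust model.
    cl_scale, cd_scale = params
    return cl_scale * wind_speed**2 / (1.0 + cd_scale * wind_speed)

wind = np.linspace(4, 12, 9)
measured = model_thrust([0.8, 0.05], wind) + rng.normal(scale=0.5, size=wind.size)

def fitness(p):
    return -np.mean((model_thrust(p, wind) - measured)**2)   # negative MSE

pop = rng.uniform([0.1, 0.0], [2.0, 0.2], size=(40, 2))      # initial population
for gen in range(100):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[-20:]]                        # truncation selection
    kids = parents[rng.integers(20, size=40)].copy()
    cross = parents[rng.integers(20, size=40)]
    mask = rng.random((40, 2)) < 0.5
    kids[mask] = cross[mask]                                  # uniform crossover
    kids += rng.normal(scale=0.02, size=kids.shape)           # Gaussian mutation
    pop = np.clip(kids, [0.1, 0.0], [2.0, 0.2])
best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated parameters:", best)
```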
NASA Astrophysics Data System (ADS)
Rupcich, Franco John
The purpose of this study was to quantify the effectiveness of techniques intended to reduce dose to the breast during CT coronary angiography (CTCA) scans with respect to task-based image quality, and to evaluate the effectiveness of optimal energy weighting in improving contrast-to-noise ratio (CNR), and thus the potential for reducing breast dose, during energy-resolved dedicated breast CT. A database quantifying organ dose for several radiosensitive organs irradiated during CTCA, including the breast, was generated using Monte Carlo simulations. This database facilitates estimation of organ-specific dose deposited during CTCA protocols using arbitrary x-ray spectra or tube-current modulation schemes without the need to run Monte Carlo simulations. The database was used to estimate breast dose for simulated CT images acquired for a reference protocol and five protocols intended to reduce breast dose. For each protocol, the performance of two tasks (detection of signals with unknown locations) was compared over a range of breast dose levels using a task-based, signal-detectability metric: the estimator of the area under the exponential free-response relative operating characteristic curve, A_FE. For large-diameter/medium-contrast signals, when maintaining equivalent A_FE, the 80 kV partial, 80 kV, 120 kV partial, and 120 kV tube-current modulated protocols reduced breast dose by 85%, 81%, 18%, and 6%, respectively, while the shielded protocol increased breast dose by 68%. Results for the small-diameter/high-contrast signal followed similar trends, but with smaller magnitude of the percent changes in dose. The 80 kV protocols demonstrated the greatest reduction in breast dose; however, the subsequent increase in noise may be clinically unacceptable. Tube output for these protocols can be adjusted to achieve more desirable noise levels with lesser dose reduction. The improvement in CNR of optimally weighted projection-based and image-based images, relative to photon-counting images, was investigated for six different energy bin combinations using a bench-top energy-resolving CT system with a cadmium zinc telluride (CZT) detector. The non-ideal spectral response reduced the CNR for the projection-based weighted images, while image-based weighting improved CNR for five out of the six investigated bin combinations, despite this non-ideal response, indicating potential for image-based weighting to reduce breast dose during dedicated breast CT.
Quantum autoencoders for efficient compression of quantum data
NASA Astrophysics Data System (ADS)
Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan
2017-12-01
Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.
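The classical idea that inspires the quantum model can be shown in a few lines: a linear autoencoder with tied weights, trained by gradient descent, learns a 2-dimensional code for 6-dimensional data that actually lies near a 2-D subspace. This is a minimal numpy sketch of the classical analogy only, not the paper's quantum circuit; all sizes and rates are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Data that really lives on a 2-D subspace of R^6, plus a little noise.
latent = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 6))
X = latent @ mix + 0.01 * rng.normal(size=(500, 6))

W = rng.normal(scale=0.1, size=(6, 2))    # encoder weights (decoder = W.T, tied)
lr = 0.01
for step in range(2000):
    Y = X @ W                             # encode: 6 -> 2 (the "compression")
    Xhat = Y @ W.T                        # decode: 2 -> 6
    err = Xhat - X
    # Gradient of ||X W W^T - X||^2 with respect to W:
    grad = 2 * (X.T @ err @ W + err.T @ X @ W) / len(X)
    W -= lr * grad
print("reconstruction MSE:", np.mean((X - X @ W @ W.T) ** 2))
```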
Prototypes, Exemplars, and the Natural History of Categorization
Smith, J. David
2013-01-01
The article explores—from a utility/adaptation perspective—the role of prototype and exemplar processes in categorization. The author surveys important category tasks within the categorization literature from the perspective of the optimality of applying prototype and exemplar processes. Formal simulations reveal that organisms will often (not always!) receive more useful signals about category belongingness if they average their exemplar experience into a prototype and use this as the comparative standard for categorization. This survey then provides the theoretical context for considering the evolution of cognitive systems for categorization. In the article’s final sections, the author reviews recent research on the performance of nonhuman primates and humans in the tasks analyzed in the article. Diverse species share operating principles, default commitments, and processing weaknesses in categorization. From these commonalities, it may be possible to infer some properties of the categorization ecology these species generally experienced during cognitive evolution. PMID:24005828
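The article's central comparison can be miniaturized: given noisy exemplars of two categories, classify probes either by similarity to the averaged prototype or by summed similarity to the stored exemplars. The following numpy sketch uses invented parameters and a simple exponential-decay similarity function, so it illustrates the comparison rather than reproduces the article's formal simulations:

```python
import numpy as np

rng = np.random.default_rng(7)
dim, n_train = 6, 8
proto_a, proto_b = np.ones(dim), -np.ones(dim)
# Training exemplars: noisy distortions of each category prototype.
ex_a = proto_a + rng.normal(scale=1.0, size=(n_train, dim))
ex_b = proto_b + rng.normal(scale=1.0, size=(n_train, dim))

sim = lambda x, y: np.exp(-np.linalg.norm(x - y, axis=-1))  # similarity kernel

def classify_as_a(probe, use_prototype):
    if use_prototype:                     # compare to the averaged prototypes
        sa, sb = sim(probe, ex_a.mean(0)), sim(probe, ex_b.mean(0))
    else:                                 # sum similarity to stored exemplars
        sa, sb = sim(probe, ex_a).sum(), sim(probe, ex_b).sum()
    return sa > sb

probes = proto_a + rng.normal(scale=1.0, size=(2000, dim))  # new category-A members
for mode, name in [(True, "prototype"), (False, "exemplar")]:
    acc = np.mean([classify_as_a(p, mode) for p in probes])
    print(f"{name} accuracy: {acc:.3f}")
```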
Li, Lian-hui; Mo, Rong
2015-01-01
The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Manual, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, the hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weight, criteria importance through intercriteria correlation (CRITIC) is selected from three usual methods. To calculate the subjective indicator weight, a BP neural network is used to determine the judge importance degree, and then a trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing the Euclidean distance with a relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility. PMID:26414758
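A compact sketch of the final ranking step, TOPSIS with the Euclidean distance replaced by a relative-entropy distance, on an invented queue of four tasks. The normalization and the paper's balanced-weight computation are simplified away, so treat this as the flavor of the method rather than its exact formulation:

```python
import numpy as np

def topsis_rank(X, w):
    """Rank tasks (rows of X) by TOPSIS using a relative-entropy distance."""
    V = (X / X.sum(axis=0)) * w                     # normalize columns, apply weights
    ideal, anti = V.max(axis=0), V.min(axis=0)      # ideal and anti-ideal solutions
    eps = 1e-12
    def rel_entropy(a, b):                          # D(a || b), rows of a vs vector b
        return np.sum(a * np.log((a + eps) / (b + eps)), axis=1)
    d_plus = rel_entropy(V, ideal)                  # distance to the ideal
    d_minus = rel_entropy(V, anti)                  # distance to the anti-ideal
    closeness = d_minus / (d_plus + d_minus + eps)
    return np.argsort(-closeness)                   # best task first

# Hypothetical queue: 4 tasks scored on 3 benefit-type indicators.
X = np.array([[0.7, 0.4, 0.9],
              [0.5, 0.8, 0.6],
              [0.9, 0.3, 0.4],
              [0.6, 0.6, 0.7]])
w = np.array([0.5, 0.3, 0.2])                       # balanced weights (assumed)
print("queue order:", topsis_rank(X, w))
```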
Kinematic path planning for space-based robotics
NASA Astrophysics Data System (ADS)
Seereeram, Sanjeev; Wen, John T.
1998-01-01
Future space robotics tasks require manipulators of significant dexterity, achievable through kinematic redundancy and modular reconfigurability, but with a corresponding complexity of motion planning. Existing research aims for full autonomy and completeness, at the expense of efficiency, generality or even user friendliness. Commercial simulators require user-taught joint paths, a significant burden for assembly tasks subject to collision avoidance, kinematic and dynamic constraints. Our research has developed a Kinematic Path Planning (KPP) algorithm which bridges the gap between research and industry to produce a powerful and useful product. KPP consists of three key components: path-space iterative search, probabilistic refinement, and an operator guidance interface. The KPP algorithm has been successfully applied to the SSRMS for PMA relocation and dual-arm truss assembly tasks. Other KPP capabilities include Cartesian path following, hybrid Cartesian endpoint/intermediate via-point planning, redundancy resolution and path optimization. KPP incorporates supervisory (operator) input at any level of detail to influence the solution, yielding desirable/predictable paths for multi-jointed arms, avoiding obstacles and obeying manipulator limits. This software will eventually form a marketable robotic planner suitable for commercialization in conjunction with existing robotic CAD/CAM packages.
Efficient Ada multitasking on a RISC register window architecture
NASA Technical Reports Server (NTRS)
Kearns, J. P.; Quammen, D.
1987-01-01
This work addresses the problem of reducing context switch overhead on a processor which supports a large register file - a register file much like that which is part of the Berkeley RISC processors and several other emerging architectures (which are not necessarily reduced instruction set machines in the purest sense). Such a reduction in overhead is particularly desirable in a real-time embedded application, in which task-to-task context switch overhead may result in failure to meet crucial deadlines. A storage management technique by which a context switch may be implemented as cheaply as a procedure call is presented. The essence of this technique is the avoidance of the save/restore of registers on the context switch. This is achieved through analysis of the static source text of an Ada tasking program. Information gained during that analysis directs the optimized storage management strategy for that program at run time. A formal verification of the technique in terms of an operational control model and an evaluation of the technique's performance via simulations driven by synthetic Ada program traces are presented.
NASA Astrophysics Data System (ADS)
Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.
2016-12-01
Autonomous Underwater Vehicles (AUVs) are capable of spending long periods of time carrying out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance the underwater vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time through a large-scale, waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners, and guarantees maximization of mission productivity for a single vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the contributed model and its feasible application in real experiments.
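For orientation, a minimal sketch of the Biogeography-Based Optimization mechanics the planner builds on: candidate solutions ("habitats") exchange solution features according to immigration and emigration rates derived from their fitness ranking. The cost function and all constants are placeholders, not the paper's mission model:

```python
import numpy as np

rng = np.random.default_rng(5)

def cost(x):                                   # stand-in mission-cost function
    return np.sum(x**2, axis=-1)

n_hab, dim, iters = 30, 5, 200
pop = rng.uniform(-5, 5, size=(n_hab, dim))
for _ in range(iters):
    pop = pop[np.argsort(cost(pop))]           # best habitat first
    ranks = np.arange(n_hab)
    lam = (ranks + 1) / n_hab                  # immigration: high for poor habitats
    mu = 1.0 - lam                             # emigration: high for good habitats
    new = pop.copy()
    for i in range(n_hab):
        for d in range(dim):
            if rng.random() < lam[i]:          # immigrate this feature
                j = rng.choice(n_hab, p=mu / mu.sum())
                new[i, d] = pop[j, d]
            if rng.random() < 0.02:            # occasional mutation
                new[i, d] = rng.uniform(-5, 5)
    new[0] = pop[0]                            # elitism: keep the best habitat
    pop = new
print("best cost found:", cost(pop).min())
```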
Hybrid annealing: Coupling a quantum simulator to a classical computer
NASA Astrophysics Data System (ADS)
Graß, Tobias; Lewenstein, Maciej
2017-05-01
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
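The division of labor described above, quantum fluctuations proposing configurations and a classical device keeping only energy-lowering outcomes, can be mimicked classically for a small random energy model instance. In the hedged sketch below, random bit flips stand in for the quantum simulator's measurement outcomes; the schedule and sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 12                                         # spins, so 2**n configurations
energies = rng.normal(size=2**n)               # one random energy model instance

def measure_after_fluctuation(state, k):
    # Stand-in for the quantum simulator: flip k random bits, then "measure".
    flips = rng.choice(n, size=k, replace=False)
    return state ^ np.bitwise_or.reduce(1 << flips)

state = rng.integers(2**n)
for step in range(2000):
    k = max(1, int(n * (1 - step / 2000)))     # shrink fluctuation strength over time
    candidate = measure_after_fluctuation(state, k)
    if energies[candidate] < energies[state]:  # classical post-processing keeps
        state = candidate                      # only energy-lowering outcomes
print("found:", energies[state], " global minimum:", energies.min())
```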
Using IMPRINT to Guide Experimental Design with Simulated Task Environments
2015-06-18
Master's thesis by Gregory [surname truncated in source], AFIT-ENG-MS-15-J-052, June 2015. Distribution Statement A: approved for public release, distribution unlimited. (Only title-page front matter was recovered for this record; no abstract is available.)
Assessment of simulation fidelity using measurements of piloting technique in flight
NASA Technical Reports Server (NTRS)
Clement, W. F.; Cleveland, W. B.; Key, D. L.
1984-01-01
The U.S. Army and NASA joined together on a project to conduct a systematic investigation and validation of a ground-based piloted simulation of the Army/Sikorsky UH-60A helicopter. Flight testing was an integral part of the validation effort. Nap-of-the-Earth (NOE) piloting tasks which were investigated included the bob-up, the hover turn, the dash/quickstop, the sidestep, the dolphin, and the slalom. Results from the simulation indicate that the pilot's NOE task performance in the simulator is noticeably and quantifiably degraded when compared with the task performance results generated in flight test. The results of the flight test and ground-based simulation experiments support a unique rationale for the assessment of simulation fidelity: flight simulation fidelity should be judged quantitatively by measuring the pilot's control strategy and technique as induced by the simulator. A quantitative comparison is offered between the piloting technique observed in a flight simulator and that observed in flight test for the same tasks performed by the same pilots.
Fan, Yu; Kong, Gaiqing; Meng, Yisen; Tan, Shutao; Wei, Kunlin; Zhang, Qian; Jin, Jie
2014-11-01
Flank position is extensively used in retroperitoneoscopic urological practice. Most surgeons follow the patient position used in open approaches. However, the surgical ergonomics of the conventional position in retroperitoneoscopic surgery are poor. We introduce a modified position and evaluated the task performance and surgical ergonomics of both positions with simulated surgical tasks. Twenty-one novice surgeons were recruited to perform four tasks: bead transfer, ring transfer, continuous suturing, and cutting a circle. The conventional position was simulated by setting an endo-surgical simulator parallel to the long axis of a surgical desk. The modified position was simulated by rotating the simulator 30° with respect to the long axis of the desk. The outcome measurements include task performance measures, kinematic measures for body alignment, surface electromyography, relative loading between feet, and subjective ratings of fatigue. We observed significant improvements in both task performance and surgical ergonomics parameters under the modified position. For all four tasks, subjects finished tasks faster with higher accuracy (p < 0.005 or < 0.001). For the ergonomics measures: (1) the angle between the upper body and the head was decreased by 7.4 ± 1.7°; (2) the EMG amplitude collected from the shoulders and left lumbar region was significantly lower (p < 0.05); (3) relative loading between feet was more balanced (p < 0.001); (4) manual-action muscles and postural muscles were rated less fatiguing according to the questionnaire (p < 0.05). The conventional patient position in retroperitoneoscopic upper urinary tract surgery is associated with poor surgical ergonomics. With a simulated surgery, we demonstrated that our modified position could significantly improve task performance and surgical ergonomics. Further studies are still warranted to validate these benefits for both patients and surgeons.
Learning and inference using complex generative models in a spatial localization task.
Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N
2016-01-01
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
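The ideal-observer computation that participants are compared against is standard: with a two-mode Gaussian prior and Gaussian sensory noise, the posterior is a mixture whose component means and weights have closed forms. A sketch with invented parameter values:

```python
import numpy as np

# Bimodal prior: two Gaussian modes with unequal variances (values invented).
mu1, s1, p1 = -5.0, 1.0, 0.5
mu2, s2 = 5.0, 2.5

def posterior_mean(x_sens, s_sens):
    """Bayes-optimal location estimate given a noisy sensory cue x_sens."""
    def component(mu, s):
        # Gaussian prior x Gaussian likelihood -> Gaussian posterior per mode.
        var = 1.0 / (1.0 / s**2 + 1.0 / s_sens**2)
        mean = var * (mu / s**2 + x_sens / s_sens**2)
        # Marginal likelihood of the cue under this mode (sets mixture weight).
        ml = (np.exp(-(x_sens - mu)**2 / (2 * (s**2 + s_sens**2)))
              / np.sqrt(s**2 + s_sens**2))
        return mean, ml
    m1, w1 = component(mu1, s1)
    m2, w2 = component(mu2, s2)
    w1, w2 = p1 * w1, (1 - p1) * w2
    return (w1 * m1 + w2 * m2) / (w1 + w2)      # responsibility-weighted mean

print(posterior_mean(x_sens=1.0, s_sens=3.0))   # pulled toward the nearer mode
```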
2004-01-01
As Department of Defense (DoD) leaders rely more on modeling and simulation to provide information on which to base [text truncated in source] capabilities and intent. Cognitive Task Analysis (CTA) is an extensive, detailed look at the tasks and subtasks performed by a [text truncated in source]. Cited reference: Domain Analysis and Task Analysis: A Difference That Matters. In Cognitive Task Analysis, edited by J. M. Schraagen, S. [truncated].
Xiao, Dongjuan; Jakimowicz, Jack J; Albayrak, Armagan; Buzink, Sonja N; Botden, Sanne M B I; Goossens, Richard H M
2014-01-01
Laparoscopic skills can be improved effectively through laparoscopic simulation. The purpose of this study was to verify the face and content validity of a new portable Ergonomic Laparoscopic Skills simulator (Ergo-Lap simulator) and assess the construct validity of the Ergo-Lap simulator in 4 basic skills tasks. Four tasks were evaluated: 2 different translocation exercises (a basic bimanual exercise and a challenging single-handed exercise), an exercise involving tissue manipulation under tension, and a needle-handling exercise. Task performance was analyzed according to speed and accuracy. The participants rated the usability and didactic value of each task and the Ergo-Lap simulator along a 5-point Likert scale. Institutional academic medical center with its affiliated general surgery residency. Forty-six participants were allocated to 2 groups: a Novice group (n = 26, <10 clinical laparoscopic procedures) and an Experienced group (n = 20, >50 clinical laparoscopic procedures). The Experienced group completed all tasks in less time than the Novice group did (p < 0.001, Mann-Whitney U test). The Experienced group also completed tasks 1, 2, and 4 with fewer errors than the Novice group did (p < 0.05). Of the Novice participants, 96% considered that the present Ergo-Lap simulator could encourage more frequent practice of laparoscopic skills. In addition, 92% would like to purchase this simulator. All of the experienced participants confirmed that the Ergo-Lap simulator was easy to use and useful for practicing basic laparoscopic skills in an ergonomic manner. Most (95%) of these respondents would recommend this simulator to other surgical trainees. This Ergo-Lap simulator with multiple tasks was rated as a useful training tool that can distinguish between various levels of laparoscopic expertise. The Ergo-Lap simulator is also an inexpensive alternative, which surgical trainees could use to update their skills in the skills laboratory, at home, or in the office. Copyright © 2014 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Hybrid glowworm swarm optimization for task scheduling in the cloud environment
NASA Astrophysics Data System (ADS)
Zhou, Jing; Dong, Shoubin
2018-06-01
In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) based on glowworm swarm optimization (GSO), which combines a technique from evolutionary computation, a quantum-behaviour strategy based on the neighbourhood principle, offspring production and random walks, to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces redundant computation and the dependence on the initialization of GSO, accelerates convergence and escapes more easily from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
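For context, the core GSO loop that HGSO extends: agents carry a luciferin level tied to their objective value and move toward probabilistically chosen brighter neighbors. The sketch below is plain GSO on a toy continuous objective, without the paper's quantum-behaviour, offspring-production, or random-walk extensions, and all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
J = lambda X: -np.sum(X**2, axis=-1)           # toy objective to maximize

n, dim = 40, 2
X = rng.uniform(-3, 3, size=(n, dim))          # glowworm positions
luc = np.full(n, 5.0)                          # luciferin levels
rho, gamma, step, r_s = 0.4, 0.6, 0.03, 2.0    # decay, gain, step size, sensor range

for _ in range(300):
    luc = (1 - rho) * luc + gamma * J(X)       # luciferin update
    Xn = X.copy()
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nb = np.where((d < r_s) & (luc > luc[i]))[0]   # brighter neighbors in range
        if nb.size:
            p = (luc[nb] - luc[i]) / (luc[nb] - luc[i]).sum()
            j = rng.choice(nb, p=p)            # pick a neighbor by luciferin gap
            Xn[i] += step * (X[j] - X[i]) / (d[j] + 1e-12)
    X = Xn
print("best value found:", J(X).max())
```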
Preliminary investigation of motion requirements for the simulation of helicopter hover tasks
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1980-01-01
Data from a preliminary experiment are presented which attempted to define a helicopter hover task that would allow the detection of objectively measured differences in fixed-base/moving-base simulator performance. The addition of heave, pitch, and roll movement of a ship at sea to the hover task, by means of an adaptation of a simulator g-seat, potentially fulfills the desired definition. The feasibility of g-seat substitution for platform motion can be investigated utilizing this task.
Task-based design of a synthetic-collimator SPECT system used for small animal imaging.
Lin, Alexander; Kupinski, Matthew A; Peterson, Todd E; Shokouhi, Sepideh; Johnson, Lindsay C
2018-05-07
In traditional multipinhole SPECT systems, image multiplexing - the overlapping of pinhole projection images - may occur on the detector, which can inhibit quality image reconstructions due to photon-origin uncertainty. One proposed system to mitigate the effects of multiplexing is the synthetic-collimator SPECT system. In this system, two detectors, a silicon detector and a germanium detector, are placed at different distances behind the multipinhole aperture, allowing for image detection to occur at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. The unwanted effects of multiplexing are reduced by utilizing the additional data collected from the front silicon detector. However, determining optimal system configurations for a given imaging task requires efficient parsing of the complex parameter space, to understand how pinhole spacings and the two detector distances influence system performance. In our simulation studies, we use the ensemble mean-squared error of the Wiener estimator (EMSE_W) as the figure of merit to determine optimum system parameters for the task of estimating the uptake of an 123I-labeled radiotracer in three different regions of a computer-generated mouse brain phantom. The segmented phantom map is constructed by using data from the MRM NeAt database and allows for the reduction in dimensionality of the system matrix which improves the computational efficiency of scanning the system's parameter space. To contextualize our results, the Wiener estimator is also compared against a region of interest estimator using maximum-likelihood reconstructed data. Our results show that the synthetic-collimator SPECT system outperforms traditional multipinhole SPECT systems in this estimation task. We also find that image multiplexing plays an important role in the system design of the synthetic-collimator SPECT system, with optimal germanium detector distances occurring at maxima in the derivative of the percent multiplexing function. Furthermore, we report that improved task performance can be achieved by using an adaptive system design in which the germanium detector distance may vary with projection angle. Finally, in our comparative study, we find that the Wiener estimator outperforms the conventional region of interest estimator. Our work demonstrates how this optimization method has the potential to quickly and efficiently explore vast parameter spaces, providing insight into the behavior of competing factors, which are otherwise very difficult to calculate and study using other existing means. © 2018 American Association of Physicists in Medicine.
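The figure of merit is computable in closed form for a linear imaging model g = H*theta + n with known prior and noise covariances, which is what makes scanning a large configuration space tractable. A hedged numpy sketch comparing two hypothetical system matrices (the dimensions, covariances, and matrices are invented, not the paper's system model):

```python
import numpy as np

def emse_wiener(H, K_theta, K_n):
    """Ensemble MSE of the Wiener estimator for the linear model g = H theta + n.
    Lower EMSE_W means the configuration supports the estimation task better."""
    S = H @ K_theta @ H.T + K_n                      # data covariance
    gain = K_theta @ H.T @ np.linalg.solve(S, H @ K_theta)
    return np.trace(K_theta - gain)                  # tr of posterior covariance

rng = np.random.default_rng(4)
K_theta = np.diag([1.0, 0.5, 0.2])                   # prior covariance, 3 region uptakes
K_n = 0.01 * np.eye(50)                              # detector noise covariance
H_a = rng.random((50, 3))                            # hypothetical sensitivity matrix A
H_b = rng.random((50, 3)) * np.array([1.5, 1.0, 0.5])  # alternative geometry B
print(emse_wiener(H_a, K_theta, K_n), emse_wiener(H_b, K_theta, K_n))
```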
Hyperbaric Oxygen Environment Can Enhance Brain Activity and Multitasking Performance
Vadas, Dor; Kalichman, Leonid; Hadanny, Amir; Efrati, Shai
2017-01-01
Background: The brain uses 20% of the total oxygen supply consumed by the entire body. Even though <10% of the brain is active at any given time, it utilizes almost all the oxygen delivered. In order to perform complex tasks or more than one task (multitasking), the oxygen supply is shifted from one brain region to another, via blood perfusion modulation. The aim of the present study was to evaluate whether a hyperbaric oxygen (HBO) environment, with increased oxygen supply to the brain, will enhance the performance of complex and/or multiple activities. Methods: A prospective, double-blind, randomized control, crossover trial including 22 healthy volunteers. Participants were asked to perform a cognitive task, a motor task and a simultaneous cognitive-motor task (multitasking). Participants were randomized to perform the tasks in two environments: (a) normobaric air (1 ATA, 21% oxygen); (b) HBO (2 ATA, 100% oxygen). Two weeks later participants were crossed to the alternative environment. Blinding of the normobaric environment was achieved in the same chamber with masks on, while hyperbaric sensation was simulated by increasing pressure in the first minute and gradually decreasing to the normobaric environment prior to task performance. Results: Compared to the performance at normobaric conditions, both cognitive and motor single-task scores were significantly enhanced by the HBO environment (p < 0.001 for both). Multitasking performance was also significantly enhanced in the HBO environment (p = 0.006 for the cognitive part and p = 0.02 for the motor part). Conclusions: The improvement in performance of both single and multi-tasking while in an HBO environment supports the hypothesis that oxygen is indeed a rate-limiting factor for brain activity. Hyperbaric oxygenation can serve as an environment for enhanced brain performance. Further studies are needed to evaluate the optimal oxygen levels for maximal brain performance. PMID:29021747
Optimization of a hydrometric network extension using specific flow, kriging and simulated annealing
NASA Astrophysics Data System (ADS)
Chebbi, Afef; Kebaili Bargaoui, Zoubeida; Abid, Nesrine; da Conceição Cunha, Maria
2017-12-01
At hydrometric stations, water levels are continuously observed and discharge rating curves are constantly updated to achieve accurate river level and discharge observations. An adequate spatial distribution of hydrological gauging stations is of considerable interest for river regime characterization, water infrastructure design, water resources management and ecological surveying. Due to the growth of riverside populations and the associated flood risk, hydrological networks constantly need to be developed. This paper suggests taking advantage of kriging approaches to improve the design of a hydrometric network. The context deals with the application of an optimization approach using ordinary kriging and simulated annealing (SA) in order to identify the best locations to install new hydrometric gauges. The task at hand is to extend an existing hydrometric network in order to estimate, at ungauged sites, the average specific annual discharge, which is a key basin descriptor. This methodology is developed for the hydrometric network of the transboundary Medjerda River in the North of Tunisia. A Geographic Information System (GIS) is adopted to delineate basin limits and centroids. The latter are adopted as basin locations in the kriging development. Scenarios where the size of an existing 12-station network is alternatively increased by 1, 2, 3, 4 and 5 new station(s) are investigated using geo-regression and minimization of the variance of kriging errors. The analysis of the optimized locations from one scenario to another shows perfect consistency in the locations of the new sites. The new locations ensure better spatial coverage of the study area, as seen in the increase of both the average and maximum inter-station distances after optimization. The optimization procedure selects the basins that ensure the shifting of the mean drainage area towards higher specific discharges.
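A compressed sketch of the optimization loop: candidate gauge locations are swapped in and out by simulated annealing so as to minimize the mean kriging variance over the basin. Here a simple-kriging variance with an assumed exponential covariance model stands in for the paper's ordinary-kriging setup and fitted variogram, and all coordinates are synthetic:

```python
import numpy as np

rng = np.random.default_rng(9)
cov = lambda h: np.exp(-h / 30.0)                # exponential covariance (assumed)

existing = rng.uniform(0, 100, size=(12, 2))     # current gauge locations
candidates = rng.uniform(0, 100, size=(200, 2))  # possible new sites
grid = np.stack(np.meshgrid(np.linspace(0, 100, 25),
                            np.linspace(0, 100, 25)), -1).reshape(-1, 2)

def mean_kriging_var(new_idx):
    sites = np.vstack([existing, candidates[new_idx]])
    C = cov(np.linalg.norm(sites[:, None] - sites[None], axis=-1))
    c = cov(np.linalg.norm(grid[:, None] - sites[None], axis=-1))
    w = np.linalg.solve(C + 1e-9 * np.eye(len(sites)), c.T)
    return np.mean(1.0 - np.sum(c * w.T, axis=1))  # simple-kriging variance

k = 3                                            # stations to add
sel = rng.choice(200, size=k, replace=False)
current, T = mean_kriging_var(sel), 1.0
for step in range(500):
    trial = sel.copy()
    trial[rng.integers(k)] = rng.integers(200)   # swap one candidate site
    if len(set(trial)) < k:
        continue
    v = mean_kriging_var(trial)
    if v < current or rng.random() < np.exp((current - v) / T):
        sel, current = trial, v                  # Metropolis acceptance
    T *= 0.99                                    # cooling schedule
print("chosen sites:\n", candidates[sel], "\nmean kriging variance:", current)
```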
Driving performance in a power wheelchair simulator.
Archambault, Philippe S; Tremblay, Stéphanie; Cachecho, Sarah; Routhier, François; Boissy, Patrick
2012-05-01
A power wheelchair simulator can allow users to safely experience various driving tasks. For such training to be efficient, it is important that driving performance be equivalent to that in a real wheelchair. This study aimed at comparing driving performance in a real and in a simulated environment. Two groups of healthy young adults performed different driving tasks, either in a real power wheelchair or in a simulator. Smoothness of joystick control as well as the time necessary to complete each task were recorded and compared between the two groups. Driving strategies were analysed from video recordings. The sense of presence, of really being in the virtual environment, was assessed through a questionnaire. Smoothness of joystick control was the same in the real and virtual groups. Task completion time was higher in the simulator for the more difficult tasks. Both groups showed similar strategies and difficulties. The simulator generated a good sense of presence, which is important for motivation. Performance was very similar for power wheelchair driving in the simulator or in real life. Thus, the simulator could potentially be used to complement training of individuals who require a power wheelchair and use a regular joystick.
Gallagher, Anthony G; Seymour, Neal E; Jordan-Black, Julie-Anne; Bunting, Brendan P; McGlade, Kieran; Satava, Richard Martin
2013-06-01
We assessed the effectiveness of transfer of training (ToT) from VR laparoscopic simulation training in 2 studies. In the second study, we also assessed the transfer effectiveness ratio (TER). ToT is a detectable performance improvement between equivalent groups, and TER is the observed percentage performance difference between 2 matched groups carrying out the same task, but with 1 group pretrained on VR simulation. Concordance between simulated and in-vivo procedure performance was also assessed. Prospective, randomized, and blinded. In Study 1, experienced laparoscopic surgeons (n = 195) and in Study 2 laparoscopic novices (n = 30) were randomized to either train on VR simulation before completing an equivalent real-world task or complete the real-world task only. Experienced laparoscopic surgeons and novices who trained on the simulator performed significantly better than their controls, thus demonstrating ToT. Their performance showed a TER between 7% and 42% from the virtual to the real tasks. Simulation training had its greatest impact on procedural error reduction in both studies (32-42%). The correlation observed between the VR and real-world task performance was r > 0.96 (Study 2). VR simulation training offers a powerful and effective platform for training safer skills.
Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-01-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
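For reference, the Patlak OLS step at the heart of both the companion protocol and the hybrid estimator: after a start time t*, tissue-to-plasma ratios are regressed on normalized integrated plasma activity, with slope Ki and intercept V. A hedged single-voxel sketch with a synthetic input function (frame times, curves, and t* are invented):

```python
import numpy as np

def patlak_ols(ct, cp, t, t_star=10.0):
    """Ordinary least-squares Patlak fit for one voxel time-activity curve.

    ct : tissue activity per frame, cp : plasma input function, t : frame times (min).
    Returns (Ki, V). A sketch of standard Patlak analysis, not the paper's code.
    """
    # Trapezoidal integral of the plasma curve up to each frame time.
    icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    keep = (t >= t_star) & (cp > 0)
    x = icp[keep] / cp[keep]                      # "Patlak time"
    y = ct[keep] / cp[keep]
    A = np.stack([x, np.ones_like(x)], axis=1)
    (Ki, V), *_ = np.linalg.lstsq(A, y, rcond=None)
    return Ki, V

# Hypothetical input function and a tissue curve with known Ki = 0.05, V = 0.3:
t = np.linspace(0, 60, 25)
cp = 10 * np.exp(-0.1 * t) + 1.0
icp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = 0.05 * icp + 0.3 * cp
print(patlak_ols(ct, cp, t))                      # recovers approximately (0.05, 0.3)
```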
NASA Technical Reports Server (NTRS)
Grantham, William D.
1989-01-01
The primary objective was to provide information to the flight controls/flying qualities engineer that will assist him in determining the incremental flying qualities and/or pilot-performance differences that may be expected between results obtained via ground-based simulation (and, in particular, the six-degree-of-freedom Langley Visual/Motion Simulator (VMS)) and flight tests. Pilot opinion and performance parameters derived from a ground-based simulator and an in-flight simulator are compared for a jet-transport airplane having 32 different longitudinal dynamic response characteristics. The primary pilot tasks were the approach and landing tasks, with emphasis on the landing-flare task. The results indicate that, in general, flying qualities results obtained from the ground-based simulator may be considered conservative, especially when the pilot task requires tight pilot control, as during the landing flare. The one exception, according to the present study, was that the pilots were more tolerant of large time delays in the airplane response on the ground-based simulator. The results also indicated that the ground-based simulator (particularly the Langley VMS) is not adequate for assessing pilot/vehicle performance capabilities (i.e., the sink-rate performance for the landing-flare task when the pilot has little depth/height perception from the outside scene presentation).
Implicit Formulation of Muscle Dynamics in OpenSim
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Dembia, Chris; Lewandowski, Beth; Van Den Bogert, Antonie
2017-01-01
Astronauts lose bone and muscle mass during spaceflight. Exercise countermeasures are the primary method for counteracting bone and muscle mass loss in space. New spacecraft exercise device concepts are currently being developed for NASA's new crew exploration vehicle. The NASA Digital Astronaut Project (DAP) uses computational modeling to help determine if the new exercise devices will be effective as countermeasures. The NASA Digital Astronaut Project is developing the ability to utilize predictive simulation to provide insight into the change in kinematics and kinetics with a change in device and gravitational environment (1-g versus 0-g). For example, in space exercise the subject's body weight is applied in addition to the loads prescribed for musculoskeletal maintenance. How and where these loads are applied obviously directly impacts bone and tissue loads. Additionally, due to space vehicle structural requirements, exercise devices are often placed on vibration isolation systems. This changes the apparent impedance or stiffness of the device as seen by the user. Data collection under these conditions is often impractical and limited. Predictive modeling provides a means to have a virtual subject with which to test hypotheses. Predictive simulation provides a virtual subject for which we are able to perform studies, such as sensitivity to device loading and vibration isolation, without the need for laboratory kinematic or kinetic test data. Direct collocation optimization provides an efficient means to perform task-based optimization and predictive modeling. It is relatively straightforward to structure a physical exercise task in a direct collocation mathematical formulation: perform a motion such that you start at an initial pose, achieve a given amount of deflection (i.e., a squat), return to the initial pose, and minimize muscle activation cost. Direct collocation is advantageous in that it does not require numerical integration to evaluate the objective function. Instead, the system dynamics are transformed to discrete time and the optimizer is constrained such that the solution is not considered valid unless the dynamic equations are satisfied at all time points. The simulation and optimization are effectively done simultaneously. Due to the implicit integration, time steps can be coarser than in a differential equation solver. In a gait scenario this means that the model constraints and cost function are evaluated at 100 nodes in the gait cycle versus 10,000 integration steps in a variable-step forward dynamic simulation. Furthermore, no time is wasted on accurate simulations of movements that are far from the optimum. Constrained optimization algorithms require a Jacobian matrix that contains the partial derivatives of each of the dynamic constraints with respect to each of the state and control variables at all time points. This is a large but sparse matrix. An implicit dynamics formulation requires computation of the dynamic residuals f as a function of the states x, their derivatives dx/dt, and the controls u:

f(x, dx/dt, u) = 0

If the dynamics of the musculoskeletal system are formulated implicitly, the Jacobian elements are often available analytically, eliminating the need for numerical differentiation; this is obviously computationally advantageous. Additionally, implicit formulations of musculoskeletal dynamics do not suffer from singularities arising from low-mass bodies, zero muscle activation, or other sources of stiffness.
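The transcription described above can be shown on a small example: a double-integrator "exercise" motion in which backward-Euler residuals f(x, dx/dt, u) = 0 are imposed as equality constraints at every node, so the optimizer works on all time points simultaneously. This is a hedged SciPy sketch; a real musculoskeletal problem would use analytic Jacobians and a sparse NLP solver rather than SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

N, T = 20, 2.0                     # collocation nodes, horizon (s)
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # positions x, velocities v, controls u

def defects(z):
    # Implicit (backward-Euler) dynamics residuals f(x, dx/dt, u) = 0
    # for a double integrator: x' = v, v' = u, plus boundary conditions.
    x, v, u = unpack(z)
    dx = (x[1:] - x[:-1]) / h - v[1:]
    dv = (v[1:] - v[:-1]) / h - u[1:]
    return np.concatenate([dx, dv, [x[0], v[0], x[-1] - 1.0, v[-1]]])

cost = lambda z: h * np.sum(unpack(z)[2]**2)  # minimize control effort
z0 = np.zeros(3 * N)                          # coarse initial guess
sol = minimize(cost, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
print("feasible:", sol.success, " effort:", sol.fun)
```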
Vecchiato, Giovanni; Borghini, Gianluca; Aricò, Pietro; Graziani, Ilenia; Maglione, Anton Giulio; Cherubino, Patrizia; Babiloni, Fabio
2016-10-01
Brain-computer interfaces (BCIs) are widely used for clinical applications and exploited to design robotic and interactive systems for healthy people. We provide evidence that a sensorimotor electroencephalographic (EEG) BCI system can be controlled while piloting a flight simulator and attending a double attentional task simultaneously. Ten healthy subjects were trained to learn how to manage a flight simulator, use the BCI system, and answer the attentional tasks independently. Afterward, the EEG activity was collected during a first flight, where subjects were required to concurrently use the BCI, and a second flight, where they were required to simultaneously use the BCI and answer the attentional tasks. Results showed that the concurrent use of the BCI system during the flight simulation did not affect flight performance. However, BCI performance decreased from 83 to 63% while attending the additional alertness and vigilance tasks. This work shows that it is possible to successfully control a BCI system during the execution of multiple tasks, such as piloting a flight simulator with an extra cognitive load induced by attentional tasks. Such a framework aims to foster the knowledge on BCI systems embedded into vehicles and robotic devices to allow the simultaneous execution of secondary tasks.
Energy-efficient container handling using hybrid model predictive control
NASA Astrophysics Data System (ADS)
Xin, Jianbin; Negenborn, Rudy R.; Lodewijks, Gabriel
2015-11-01
The performance of container terminals needs to improve to accommodate the growth in container volumes while maintaining sustainability. This paper provides a methodology for determining the trajectories of three key interacting machines that carry out the so-called bay handling task, which involves transporting containers between a vessel and the stacking area in an automated container terminal. The behaviours of the interacting machines are modelled as a collection of interconnected hybrid systems. Hybrid model predictive control (MPC) is proposed to achieve optimal performance, balancing handling capacity against energy consumption. The underlying control problem is thereby formulated as a mixed-integer linear programming problem. Simulation studies illustrate that a higher penalty on energy consumption indeed leads to improved sustainability through lower energy use. Moreover, simulations illustrate how the proposed energy-efficient hybrid MPC controller performs under different types of uncertainty.
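The abstract does not reproduce the underlying mixed-integer formulation, so the following minimal sketch only illustrates the general shape of a handling-capacity-versus-energy trade-off cast as a MILP, using SciPy's milp solver. The two crane modes, their times and energies, and the time budget are hypothetical placeholders, not the paper's terminal model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy MILP: n container moves, each executed in a "fast" or "slow" crane mode.
# Fast moves save time but cost more energy; minimize energy subject to a
# handling-time budget. All numbers are hypothetical placeholders.
n = 20
t_fast, t_slow = 60.0, 90.0   # seconds per move
e_fast, e_slow = 5.0, 3.0     # kWh per move
budget = 1600.0               # total handling-time budget [s]

# Decision vector z = [f_1..f_n, s_1..s_n], binary mode selectors.
c = np.concatenate([np.full(n, e_fast), np.full(n, e_slow)])  # energy objective

pick_one = LinearConstraint(np.hstack([np.eye(n), np.eye(n)]), 1, 1)  # f_i + s_i = 1
time_row = np.concatenate([np.full(n, t_fast), np.full(n, t_slow)]).reshape(1, -1)
time_cap = LinearConstraint(time_row, 0, budget)

res = milp(c=c, constraints=[pick_one, time_cap],
           integrality=np.ones(2 * n), bounds=Bounds(0, 1))
fast = res.x[:n].round().astype(bool)
print(f"energy = {res.fun:.1f} kWh with {fast.sum()} fast moves")
```

A penalty-weighted variant would place both time and energy in the objective, which is closer to the trade-off the paper explores by varying the energy penalty.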
Cognitive simulators for medical education and training.
Kahol, Kanav; Vankipuram, Mithra; Smith, Marshall L
2009-08-01
Simulators for honing procedural skills (such as surgical skills and central venous catheter placement) have proven to be valuable tools for medical educators and students. While such simulations represent an effective paradigm in surgical education, there is an opportunity to add a layer of cognitive exercises to these basic simulations that can facilitate robust skill learning in residents. This paper describes a controlled methodology, inspired by neuropsychological assessment tasks and embodied cognition, for developing cognitive simulators for laparoscopic surgery. These simulators provide psychomotor skill training and offer the additional challenge of accomplishing cognitive tasks in realistic environments. A generic framework for the design, development, and evaluation of such simulators is described. The framework is generalizable and can be applied to different task domains; it is independent of the types of sensors, simulation environment, and feedback mechanisms that the simulators use. A proof of concept of the framework is provided by developing a simulator that adds cognitive variations to a basic psychomotor task. The results of two pilot studies are presented that show the validity of the methodology in providing effective evaluation and learning environments for surgeons.
Bhambhani, Y; Esmail, S; Brintnell, S
1994-01-01
The Baltimore Therapeutic Equipment (BTE) work simulator is routinely used by occupational therapists in functional capacity evaluation. Currently, there is a lack of normative data for the various attachments on this instrument. The purposes of this study were to (a) establish norms for the biomechanical and physiological responses during three tasks on the BTE work simulator, namely wheel-turn, push-pull, and overhead-reach; (b) compare these responses across the three tasks; and (c) examine the interrelationships of these responses during the tasks. Twenty healthy men completed five testing sessions: (a) task familiarization on the BTE work simulator to identify the work intensity perceived as hard on the Borg scale; (b) an incremental arm ergometer exercise test to determine peak oxygen uptake (pVO2) and peak heart rate (pHR); and (c) one of the three tasks on the BTE work simulator, performed for 4 min in each of the next three sessions. Analysis of variance indicated that torque, work, and power during the overhead-reach task were significantly higher (p = .000) than during the wheel-turn and push-pull tasks. However, no significant differences (p > .05) were observed among the tasks for VO2 and HR, which were approximately 50% and 70% of pVO2 and pHR, respectively. Although there were significant relationships (p < .05) among tasks for torque, work, and power, the common variance ranged only from 38% to 67%. Relative pVO2 was significantly related to work (p = .028) and power (p = .027) during the push-pull task only, not during the wheel-turn and overhead-reach tasks. These results suggest that occupational therapists should include as many tasks as possible when designing functional capacity evaluation test batteries, and that there is no consistent relationship between cardiorespiratory fitness and performance of the various tasks on the BTE work simulator.
Simulation of tunneling construction methods of the Cisumdawu toll road
NASA Astrophysics Data System (ADS)
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analyzing a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it with alternative methods against several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling operation in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide input for the simulation as well as to identify problems the contractor may face. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation yields the project duration from duration models of each work task, which are based on the literature, machine productivity data, and several assumptions. The simulation also estimates the total cost of the project, modelled from published construction and building unit costs and from online price listings of local and international suppliers. The advantages and disadvantages of the method were analyzed in terms of its productivity, waste, and cost. The simulation put the total cost of the operation at about Rp 900,437,004,599 and the total duration of the tunneling operation at 653 days. The results of the simulation will serve as a recommendation to the contractor before implementation of the selected tunneling method.
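The CYCLONE network itself is not given in the abstract; as an illustration of the kind of stochastic duration estimate such a model produces, the sketch below Monte Carlo-samples hypothetical triangular duration models for the per-cycle work tasks of a drill-and-blast advance. Only the 472-meter tunnel length comes from the paper; every distribution and the assumed advance per cycle are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-cycle tasks of a drill-and-blast advance, with
# triangular(min, mode, max) duration models in minutes. These stand in
# for the work-task duration models a CYCLONE network would carry;
# the numbers are illustrative, not taken from the project.
tasks = {
    "drilling":     (90, 120, 180),
    "charging":     (45, 60, 90),
    "blasting":     (10, 15, 25),
    "ventilation":  (20, 30, 45),
    "mucking":      (120, 150, 240),
    "rock support": (60, 90, 150),
}
tunnel_length = 472.0          # metres (from the paper)
advance = 2.0                  # assumed metres gained per blast cycle
cycles = int(np.ceil(tunnel_length / advance))
n_sims = 10_000

# Sample every task of every cycle in every replication at once, then
# total the minutes per replication and convert to days.
total_min = sum(
    rng.triangular(lo, mode, hi, size=(n_sims, cycles)).sum(axis=1)
    for lo, mode, hi in tasks.values()
)
days = total_min / (60 * 24)
print(f"mean = {days.mean():.0f} days, P90 = {np.percentile(days, 90):.0f} days")
```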