Research on Production Scheduling System with Bottleneck Based on Multi-agent
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Weiye, Wang; Peng, Wang; Pan, Quanke
To address the imbalance of resource capacity in a production scheduling system, this paper uses a previously constructed multi-agent production scheduling system and exploits the dynamic, autonomous nature of agents to resolve scheduling bottlenecks dynamically. First, a Bottleneck Resource Agent identifies the bottleneck resource in the production line, the inherent mechanism of the bottleneck is analyzed, and the bottleneck-based production scheduling process is described. A Bottleneck Decomposition Agent then coordinates job arrival and transfer times between the Bottleneck Resource Agent and the Non-Bottleneck Resource Agents, so the dynamic scheduling problem reduces to a single-machine scheduling problem for each resource taking part in the schedule. As a result, the dynamic real-time scheduling problem is solved effectively within the production scheduling system.
Applications of dynamic scheduling technique to space related problems: Some case studies
NASA Astrophysics Data System (ADS)
Nakasuka, Shinichi; Ninomiya, Tetsujiro
1994-10-01
The paper discusses the application of the 'Dynamic Scheduling' technique, originally invented for scheduling Flexible Manufacturing Systems, to two space-related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources, such as a space station or the space shuttle.
Seol, Ye-In; Kim, Young-Kuk
2014-01-01
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is static and predictable and applies to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model; in this paper, however, we show that dynamic priority scheduling can also be applied to the pinwheel task model for power-aware scheduling. This method saves more energy than the previous static priority scheduling methods and, because the system remains static, it stays tractable and applicable to small embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with algorithmic complexity O(n), reduces energy consumption by 10-80% over existing algorithms. PMID:25121126
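The abstract gives no pseudocode, so the following is only a rough, hypothetical sketch of the general idea it builds on: under preemptive EDF, slack from early-completing jobs lets the next job run at a lower DVS frequency while still meeting its deadline. The task set, frequency levels, and the 70% actual-execution assumption are invented for illustration, and the per-job budget check is deliberately simplistic rather than a feasibility-guaranteeing scheme.

```python
# Illustrative sketch only -- NOT the authors' algorithm. It shows the general
# idea behind DVS slack reclamation under EDF: when a job finishes early, the
# unused time lets the next job run at a lower frequency (and lower dynamic
# power) while still finishing before its deadline.

def pick_frequency(remaining_wcet, time_budget, freq_levels):
    """Choose the lowest discrete frequency (fraction of f_max) that still
    fits the remaining worst-case work into the available time budget."""
    for f in sorted(freq_levels):
        if remaining_wcet / f <= time_budget:
            return f
    return max(freq_levels)

# Jobs listed in EDF order: (name, WCET at f_max, absolute deadline)
jobs = [("J1", 2.0, 5.0), ("J2", 3.0, 10.0), ("J3", 1.0, 12.0)]
freq_levels = [0.4, 0.6, 0.8, 1.0]

t = 0.0
for name, wcet, deadline in jobs:
    f = pick_frequency(wcet, deadline - t, freq_levels)
    actual = 0.7 * wcet / f          # assume each job uses only 70% of its WCET
    print(f"{name}: run at {f:.1f}*f_max, start {t:.2f}, finish {t + actual:.2f}")
    t += actual                      # early completion creates slack for the next job
```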
NASA Astrophysics Data System (ADS)
Nejad, Hossein Tehrani Nik; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Process planning and scheduling are important manufacturing planning activities that deal with resource utilization and the time span of manufacturing operations. Process plans and schedules generated in the planning phase must be modified in the execution phase due to disturbances in the manufacturing system. This paper presents a multi-agent architecture for an integrated, dynamic process planning and scheduling system for multiple jobs. A negotiation protocol is discussed that generates the process plans and schedules of the manufacturing resources and the individual jobs dynamically and incrementally, based on alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in our previous paper, and suitable process plans and schedules are searched for and generated to cope with both the dynamic status and the disturbances of the manufacturing system. We combine heuristic search over the process plan networks with the negotiation protocols in order to generate suitable process plans and schedules in a dynamic manufacturing environment. Simulation software has been developed to carry out case studies aimed at verifying the performance of the proposed multi-agent architecture.
On the number of different dynamics in Boolean networks with deterministic update schedules.
Aracena, J; Demongeot, J; Fanchon, E; Montalva, M
2013-04-01
Deterministic Boolean networks are a type of discrete dynamical system widely used in modeling genetic networks. The dynamics of such systems are characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of determining the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient to guarantee distinct dynamics of a network. In [1] the authors define equivalence classes with the property that all update schedules in a given class yield the same dynamics. In order to determine the dynamics associated with a network, we develop an algorithm to efficiently enumerate these equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well-known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
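As a concrete illustration of the abstract's central object (not the paper's enumeration algorithm), the toy sketch below updates a 3-node Boolean network under two deterministic update schedules, one fully parallel and one sequential, and shows that they already produce different next states; the local activation functions are made up for the example.

```python
# Minimal illustration: a 3-node Boolean network updated under two deterministic
# schedules. Blocks are updated in order; nodes inside a block are updated
# synchronously from the state taken at the start of that block.

def step(state, schedule):
    # Local activation functions: x0 <- x2, x1 <- x0 AND x2, x2 <- NOT x1
    funcs = [lambda s: s[2], lambda s: s[0] and s[2], lambda s: not s[1]]
    state = list(state)
    for block in schedule:                  # e.g. [[0,1,2]] parallel, [[0],[1],[2]] sequential
        snapshot = tuple(state)
        for i in block:
            state[i] = funcs[i](snapshot)
    return tuple(state)

start = (True, False, True)
parallel = [[0, 1, 2]]
sequential = [[0], [1], [2]]
print("parallel:  ", step(start, parallel))    # -> (True, True, True)
print("sequential:", step(start, sequential))  # -> (True, True, False): different dynamics
```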
A self-organizing neural network for job scheduling in distributed systems
NASA Astrophysics Data System (ADS)
Newman, Harvey B.; Legrand, Iosif C.
2001-08-01
The aim of this work is to describe a possible approach for optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space while at the same time exploring new regions of that space. This self-organizing scheduling system may offer a solution for effective use of resources for the off-line data processing jobs of future HEP experiments.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the task scheduling problem in cloud computing systems, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). The algorithm applies a predecessor-task-layer priority strategy to handle the constraint relations among task nodes: each task node is assigned a priority value based on its scheduling order as affected by those constraint relations, and the task node list is generated from these priority values. To resolve the scheduling order when task nodes share the same priority value, a dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduled task nodes based on the actual computation and communication costs of task nodes during scheduling. The task node with the longest dynamic essential path is scheduled first, since the completion time of the task graph is indirectly determined by the finishing times of the task nodes on the longest dynamic essential path. Finally, we evaluate the proposed algorithm via simulation experiments using Matlab. The experimental results indicate that the proposed algorithm effectively reduces task makespan in most cases and meets a high-quality performance objective. PMID:27490901
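The abstract describes ranking same-priority task nodes by their longest remaining computation-plus-communication path. The sketch below is a hedged illustration of that ranking idea only, not the DDEP algorithm itself; the task graph and costs are invented.

```python
# Hedged sketch: rank ready tasks by the longest (computation + communication)
# path that still lies ahead of them in the task graph, longest first.
from functools import lru_cache

comp = {"A": 3, "B": 2, "C": 4, "D": 1}                                 # computation cost per task
succ = {"A": {"B": 2, "C": 1}, "B": {"D": 3}, "C": {"D": 2}, "D": {}}   # edge weight = communication cost

@lru_cache(maxsize=None)
def remaining_path(task):
    """Length of the longest remaining path starting at `task`."""
    return comp[task] + max((w + remaining_path(s) for s, w in succ[task].items()), default=0)

ready = ["B", "C"]                              # e.g. tasks whose predecessors have finished
ready.sort(key=remaining_path, reverse=True)    # longest "essential path" dispatched first
print({t: remaining_path(t) for t in comp})     # {'A': 11, 'B': 6, 'C': 7, 'D': 1}
print("dispatch order for ready tasks:", ready)  # ['C', 'B']
```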
143. GENERAL DYNAMICS SPACE SYSTEMS DIVISION SCHEDULE BOARD IN LUNCH ...
143. GENERAL DYNAMICS SPACE SYSTEMS DIVISION SCHEDULE BOARD IN LUNCH ROOM (120), LSB (BLDG. 770) - Vandenberg Air Force Base, Space Launch Complex 3, Launch Pad 3 West, Napa & Alden Roads, Lompoc, Santa Barbara County, CA
Static-dynamic hybrid communication scheduling and control co-design for networked control systems.
Wen, Shixi; Guo, Ge
2017-11-01
In this paper, a static-dynamic hybrid communication scheduling and control co-design is proposed for networked control systems (NCSs) to address the capacity limitation of the wireless communication network. Analytical most regular binary sequences (MRBSs) are used as the communication scheduling function for the NCSs. When communication conflicts arise in the binary sequences (MRBSs), a dynamic scheduling strategy is proposed to reallocate medium access for each plant on line. Under such a static-dynamic hybrid scheduling policy, the plants in the NCS are described as non-uniformly sampled control systems whose controllers have a set of gains and switch according to the sampling interval produced by the binary sequence. A communication scheduling and control co-design framework is proposed for NCSs to simultaneously determine the controller gains and the parameters used to generate the MRBS communication sequences. A numerical example and a realistic example are given to demonstrate the effectiveness of the proposed co-design method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
The GBT Dynamic Scheduling System: A New Scheduling Paradigm
NASA Astrophysics Data System (ADS)
O'Neil, K.; Balser, D.; Bignell, C.; Clark, M.; Condon, J.; McCarty, M.; Marganian, P.; Shelton, A.; Braatz, J.; Harnett, J.; Maddalena, R.; Mello, M.; Sessoms, E.
2009-09-01
The Robert C. Byrd Green Bank Telescope (GBT) is implementing a new Dynamic Scheduling System (DSS) designed to maximize the observing efficiency of the telescope while ensuring that the flexibility and ease of use of the GBT are preserved and that the data quality of observations is not adversely affected. To accomplish this, the GBT DSS schedules observers, rather than running scripts. The DSS works by breaking each project into one or more sessions with associated observing criteria such as RA, Dec, and frequency. Potential observers may also enter dates when members of their team will not be available for either on-site or remote observing. The scheduling algorithm uses those data, along with the predicted weather, to determine the most efficient schedule for the GBT. The DSS gives all observers at least 24 hours notice of their upcoming observing. In the uncommon (< 20%) case where the actual weather does not match the predictions, a backup project chosen from the database is run instead. Here we give an overview of the GBT DSS project, including the ranking and scheduling algorithms for the sessions, the generation of scheduling probabilities, the web framework for the system, and an overview of the results from the beta testing held from June to September 2008.
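As a purely hypothetical illustration of weather-dependent session ranking (the actual GBT DSS scoring algorithm is not reproduced here), the sketch below scores candidate sessions for a given hour, zeroing out sessions with observer blackouts or unsuitable predicted opacity for their frequency; all fields, thresholds, and weights are invented.

```python
# Hypothetical session-scoring sketch, not the GBT DSS algorithm.
def session_score(session, forecast, hour):
    if hour in session["blackout_hours"]:
        return 0.0
    # Assume higher frequencies demand lower predicted atmospheric opacity.
    weather_ok = forecast["opacity"] < 0.2 if session["freq_ghz"] > 18 else True
    if not weather_ok:
        return 0.0
    visibility = 1.0 if abs(session["ra_hours"] - hour) % 24 < 6 else 0.3
    pressure = session["hours_remaining"] / session["hours_total"]
    return visibility * (1.0 + pressure)

sessions = [
    {"name": "S1", "ra_hours": 5, "freq_ghz": 22, "blackout_hours": set(),
     "hours_remaining": 8, "hours_total": 10},
    {"name": "S2", "ra_hours": 20, "freq_ghz": 1.4, "blackout_hours": {4},
     "hours_remaining": 2, "hours_total": 10},
]
forecast = {"opacity": 0.1}
best = max(sessions, key=lambda s: session_score(s, forecast, hour=4))
print("schedule at hour 4:", best["name"])
```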
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, H.; Eki, Y.; Kaji, A.
1993-12-01
An expert system is described that can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately. The optimum turbine speed-up and load-up pattern is obtained iteratively through fuzzy reasoning, using quantitative calculations from plant dynamics models and qualitative knowledge in the form of schedule-optimization rules with fuzziness. The rules represent relationships between stress margins and the modification rates of the schedule parameters. Simulation analysis shows that the system provides quick and accurate plant startups.
System-level power optimization for real-time distributed embedded systems
NASA Astrophysics Data System (ADS)
Luo, Jiong
Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
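The thesis's premise that voltage scaling trades dynamic power against delay rests on standard first-order CMOS relations; the block below states the commonly cited textbook model (it is not taken from the thesis itself):

```latex
% First-order CMOS relations often used to motivate DVS (standard textbook model):
\begin{align}
  P_{\text{dyn}} &= C_{\text{eff}}\, V_{dd}^{2}\, f, \\
  t_{\text{delay}} &\propto \frac{V_{dd}}{(V_{dd}-V_{t})^{\alpha}}, \qquad 1 < \alpha \le 2,
\end{align}
% so lowering V_dd (and hence f) yields a roughly quadratic-to-cubic reduction in
% dynamic energy per operation, at the cost of longer execution time.
```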
Integration of scheduling and discrete event simulation systems to improve production flow planning
NASA Astrophysics Data System (ADS)
Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.
2016-08-01
The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integrating production scheduling with computer modelling, simulation and visualization systems can be useful in analysing production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and with the labour-intensive and time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. The approach is illustrated with examples of practical implementation using the KbRS scheduling system and the Enterprise Dynamics simulation system.
Coordinating space telescope operations in an integrated planning and scheduling architecture
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.; Cesta, Amedeo; D'Aloisi, Daniela
1992-01-01
The Heuristic Scheduling Testbed System (HSTS), a software architecture for integrated planning and scheduling, is discussed. The architecture has been applied to the problem of generating observation schedules for the Hubble Space Telescope. This problem is representative of the class of problems that can be addressed: their complexity lies in the interaction of resource allocation and auxiliary task expansion. The architecture deals with this interaction by viewing planning and scheduling as two complementary aspects of the more general process of constructing behaviors of a dynamical system. The principal components of the software architecture are described, indicating how to model the structure and dynamics of a system, how to represent schedules at multiple levels of abstraction in the temporal database, and how the problem solving machinery operates. A scheduler for the detailed management of Hubble Space Telescope operations that has been developed within HSTS is described. Experimental performance results are given that indicate the utility and practicality of the approach.
A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems
1988-03-01
The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are...for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.
Tera-OP Reliable Intelligently Adaptive Processing System (TRIPS) Implementation
2008-09-01
6.8 Instruction Scheduling; 6.8.1 Spatial Path Scheduling; 6.8.2 ... oblivious scheduling for rapid application prototyping and deployment, environmental adaptivity for resilience in hostile environments, and dynamic
OGUPSA sensor scheduling architecture and algorithm
NASA Astrophysics Data System (ADS)
Zhang, Zhixiong; Hintz, Kenneth J.
1996-06-01
This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism that uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are applied successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, and it generates an optimal schedule, in the sense of minimum makespan, for a group of tasks with the same priority. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
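The three OGUPSA policies named in the abstract can be read as successive tie-breakers. The sketch below illustrates that reading only (the real algorithm's preemption and deadline-failure detection are not reproduced); the task set and sensor capabilities are invented.

```python
# Illustrative reading of the three policies: MUF picks the task,
# ECF then LVF pick the sensor.
tasks = [  # (name, urgency, duration)
    ("track-1", 0.9, 4.0), ("search-2", 0.5, 2.0), ("id-3", 0.9, 1.0)]
sensors = {  # sensor -> (time it becomes free, number of task types it can handle)
    "radar-A": (1.0, 3), "radar-B": (1.0, 1), "eo-C": (3.0, 2)}

# Policy 1 (MUF): consider the most urgent task first.
tasks.sort(key=lambda t: -t[1])

assignments = []
for name, urgency, duration in tasks:
    finish = {s: t_free + duration for s, (t_free, _) in sensors.items()}
    # Policy 2 (ECF): earliest completion; Policy 3 (LVF): fewest capabilities breaks ties.
    best = min(sensors, key=lambda s: (finish[s], sensors[s][1]))
    assignments.append((name, best, finish[best]))
    sensors[best] = (finish[best], sensors[best][1])  # sensor busy until the task completes

print(assignments)
```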
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality, schedule and cost-driven strategies demonstrate that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game theory reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and Austin's agency model to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack of quality game between the government (the acquirer) and "large-corporation" software developers. A note is provided that argues this multi-player dynamic Nash bargaining game also provides the solution to Freeman Dyson's problem, for a way to place a label of good or bad on systems.
A new task scheduling algorithm based on value and time for cloud platform
NASA Astrophysics Data System (ADS)
Kuang, Ling; Zhang, Lichen
2017-08-01
Task scheduling, a key part of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Building on the value-density algorithm of real-time task scheduling systems and the characteristics of distributed systems, this paper presents a new task scheduling algorithm, Least Level Value Density First (LLVDF), developed by further studying cloud technology and real-time systems. The algorithm not only introduces time and value attributes for tasks, it also describes the weighting relationships between these attributes mathematically. This feature allows it to distinguish between different tasks more dynamically and more reasonably. When the scheme is used for priority calculation in dynamic task scheduling on a cloud platform, this advantage lets it schedule and differentiate large numbers and many kinds of tasks more efficiently. The paper designs experiments, using distributed server simulation models based on the M/M/C queueing model with negative arrivals, to compare the algorithm against a traditional algorithm and to show its characteristics and advantages.
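The abstract does not give the exact LLVDF weighting, so the sketch below falls back on the classic value-density idea (value divided by remaining execution time) blended with a deadline-urgency term; the weights and task set are purely illustrative assumptions, not the paper's formula.

```python
# Hedged sketch of a value/time-weighted priority for dynamic task scheduling.
import heapq

def priority(task, now, w_value=0.7, w_time=0.3):
    value_density = task["value"] / task["remaining"]       # classic value density
    urgency = 1.0 / max(task["deadline"] - now, 1e-6)        # closer deadline -> higher urgency
    return w_value * value_density + w_time * urgency

tasks = [
    {"name": "T1", "value": 10.0, "remaining": 4.0, "deadline": 20.0},
    {"name": "T2", "value": 6.0,  "remaining": 1.0, "deadline": 8.0},
    {"name": "T3", "value": 9.0,  "remaining": 3.0, "deadline": 6.0},
]
now = 0.0
# heapq is a min-heap, so push negated priorities to pop the highest first.
heap = [(-priority(t, now), t["name"]) for t in tasks]
heapq.heapify(heap)
while heap:
    p, name = heapq.heappop(heap)
    print(name, round(-p, 3))       # dispatch order: T2, T3, T1
```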
Tethered satellite system dynamics and control
NASA Technical Reports Server (NTRS)
Musetti, B.; Cibrario, B.; Bussolino, L.; Bodley, C. S.; Flanders, H. A.; Mowery, D. K.; Tomlin, D. D.
1990-01-01
The first tethered satellite system, scheduled for launch in May 1991, is reviewed. The system dynamics, dynamics control, and dynamics simulations are discussed. Particular attention is given to in-plane and out-of-plane librations; tether oscillation modes; orbiter and sub-satellite dynamics; deployer control system; the sub-satellite attitude measurement and control system; the Aeritalia Dynamics Model; the Martin-Marietta and NASA-MSFC Dynamics Model; and simulation results.
NASA Astrophysics Data System (ADS)
Cervero, T.; Gómez, A.; López, S.; Sarmiento, R.; Dondo, J.; Rincón, F.; López, J. C.
2013-05-01
One of the limiting factors that has prevented wide dissemination of reconfigurable technology is the absence of an appropriate model for certain target applications capable of offering reliable control. The lack of flexible and easy-to-use scheduling and management systems is another relevant drawback. Under static scenarios, it is relatively easy to schedule and manage the reconfiguration process, since all variations correspond to predetermined and well-known tasks. The difficulty increases, however, when the adaptation needs of the overall system change semi-randomly with environmental fluctuations. In this context, this work proposes a change in the paradigm of dynamically reconfigurable systems by treating the dynamically reconfigurable control problem as a whole: the scheduling and placement issues are packed together in a hierarchical management structure, interacting as one entity from the system point of view but performing their tasks with a certain degree of independence from each other. The top hierarchical level corresponds to a dynamic scheduler in charge of planning and adjusting all reconfigurable modules according to variations in external stimuli. The lower level interacts with the physical layer of the device by instantiating, relocating, or removing a reconfigurable module following the scheduler's instructions. Regarding the speed of the proposed solution, the total partial reconfiguration time achieved with this proposal has been measured and compared with two other approaches: (1) using traditional Xilinx tools; (2) using an optimized version of the Xilinx drivers. The collected numbers demonstrate that our solution is up to 10 times faster than the other approaches.
Cui, Laizhong; Lu, Nan; Chen, Fu
2014-01-01
Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, in this paper we propose a QoS-driven push scheduling approach. The main contributions of this paper are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem solvable in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead, and we conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
Hard real-time beam scheduler enables adaptive images in multi-probe systems
NASA Astrophysics Data System (ADS)
Tobias, Richard J.
2014-03-01
Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically: commercial systems schedule scanning with static lists of beams to be fired and processed. Here we allow an imager to adapt to changing beam-schedule demands as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex-mode two-transducer application mimicking LVOI imaging. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling, but even real-time operating systems lack the timing precision required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision, and acquired data is useless without meeting this requirement. A successful scheduler therefore needs unique characteristics. To approximate what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where the scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to view the observed vessel and produce a flow measurement; the second transducer does this in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.
Dynamic Appliances Scheduling in Collaborative MicroGrids System
Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid
2017-01-01
In this paper a new approach, based on a collaborative system of MicroGrids (MGs), is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DLs) according to their electrical components. We propose a dynamic scheduling algorithm in which users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DLs. In addition, implementation of the proposed algorithm requires dynamically solving two successive multi-objective optimization (MOO) problems: the first targets the activation schedule of non-flexible DLs and the second deals with the power profiles of flexible DLs. The MOO problems are solved using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study has been developed of a collaborative system consisting of 40 MGs registered in the load-curve flattening program. The results verify that the load curve indeed becomes very flat when the proposed scheduling approach is applied. PMID:28824226
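As a much-simplified illustration of the load-curve flattening objective (not the paper's flattening-function calculus or its NSGA-II formulation), the sketch below shifts a single deferrable load to the start hour that minimizes the variance of the aggregate load curve; the load profiles are invented.

```python
# Toy flattening objective: choose the start slot that makes the aggregate
# load curve flattest, measured by its variance.
base_load = [3.0, 2.5, 2.0, 2.2, 3.5, 5.0, 6.5, 6.0]   # kW per hour slot
appliance = [1.5, 1.5, 1.0]                              # fixed profile once started

def variance(curve):
    mean = sum(curve) / len(curve)
    return sum((x - mean) ** 2 for x in curve) / len(curve)

def aggregate(start):
    curve = list(base_load)
    for i, p in enumerate(appliance):
        curve[start + i] += p
    return curve

starts = range(len(base_load) - len(appliance) + 1)
best = min(starts, key=lambda s: variance(aggregate(s)))
print("flattest start hour:", best, "variance:", round(variance(aggregate(best)), 3))
```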
Dynamic I/O Power Management for Hard Real-Time Systems
2005-01-01
recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention...In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm...which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage
Scheduling for energy and reliability management on multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Qi, Xuan
Scheduling algorithms for multiprocessor real-time systems have been studied for years, and many well-recognized algorithms have been proposed. However, it is still an evolving research area, and many problems remain open due to their intrinsic complexity. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and to design and develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy and power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability and satisfy quality-assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in terms of reduced scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes preserve system reliability while still achieving substantial energy savings.
The GBT Dynamic Scheduling System: Powered by the Web
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; McCarty, M.; Sessoms, E.; Shelton, A.
2009-09-01
The web technologies utilized for the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System are discussed, focusing on languages, frameworks, and tools. We use a popular Python web framework, TurboGears, to take advantage of the extensive web services the system provides. TurboGears is a model-view-controller framework, which aggregates SQLAlchemy, Genshi, and CherryPy respectively. On top of this framework, Javascript (Prototype, script.aculo.us, and JQuery) and cascading style sheets (Blueprint) are used for desktop-quality web pages.
A dynamic case-based planning system for space station application
NASA Technical Reports Server (NTRS)
Oppacher, F.; Deugo, D.
1988-01-01
We are currently investigating the use of a case-based reasoning approach to develop a dynamic planning system. The dynamic planning system (DPS) is designed to perform resource management, i.e., to efficiently schedule tasks both with and without failed components. This approach deviates from related work on scheduling and on planning in AI in several aspects. In particular, an attempt is made to equip the planner with an ability to cope with a changing environment by dynamic replanning, to handle resource constraints and feedback, and to achieve some robustness and autonomy through plan learning by dynamic memory techniques. We briefly describe the proposed architecture of DPS and its four major components: the PLANNER, the plan EXECUTOR, the dynamic REPLANNER, and the plan EVALUATOR. The planner, which is implemented in Smalltalk, is being evaluated for use in connection with the Space Station Mobile Service System (MSS).
NASA Astrophysics Data System (ADS)
Setiawan, A.; Wangsaputra, R.; Martawirya, Y. Y.; Halim, A. H.
2016-02-01
This paper deals with Flexible Manufacturing System (FMS) production rescheduling due to the unavailability of cutting tools, caused either by cutting tool failure or by reaching the tool's life limit. The FMS consists of parallel identical machines integrated with an automatic material handling system, and it runs fully automatically. Each machine has the same cutting tool configuration, consisting of different geometrical cutting tool types in each tool magazine. A job usually takes two stages, and each stage has sequential operations allocated to machines with consideration of cutting tool life. In practice, a cutting tool can fail before its nominal life is reached. The objective of this paper is to develop a dynamic scheduling algorithm for when a cutting tool breaks during unmanned operation and rescheduling is needed. The algorithm consists of four steps: the first step generates the initial schedule, the second determines the cutting tool failure time, the third determines the system status at the failure time, and the fourth reschedules the unfinished jobs. The approaches used to solve the problem are complete-reactive scheduling and robust-proactive scheduling. The new schedules yield different start and completion times for each operation compared with the initial schedule.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated every time step. However, the Do-all and Do-across techniques cannot be applied to parallel processing of such simulations, since there are data dependencies from the end of one iteration to the beginning of the next, and data input and output are required every sampling period. Therefore, the parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be exploited. In the proposed method, near-fine-grain tasks, each consisting of one or more floating point operations, are generated to extract this parallelism and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead that the use of near-fine-grain tasks would otherwise cause. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
Designing a fuzzy scheduler for hard real-time systems
NASA Technical Reports Server (NTRS)
Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami
1992-01-01
In hard real-time systems, tasks have to be performed not only correctly but also in a timely fashion; if timing constraints are not met, there can be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent uncertainty in dynamic hard real-time systems compounds the scheduling problem. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate the search for a feasible schedule, with a set of fuzzy rules proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed; we wish to limit the number of important tasks that are not scheduled.
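The paper's actual rule base is not reproduced here; the toy sketch below only illustrates the flavor of a fuzzy scheduling priority, combining two assumed inputs (laxity and criticality) through a small invented rule set with weighted-average defuzzification.

```python
# Toy fuzzy priority: low laxity and high criticality push a task's priority up.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(laxity, criticality):
    low_lax   = tri(laxity, -1, 0, 10)        # little slack before the deadline
    high_lax  = tri(laxity, 5, 20, 40)
    high_crit = tri(criticality, 0.4, 1.0, 1.6)
    # Rules: (firing strength, priority suggested by the rule consequent)
    rules = [
        (min(low_lax, high_crit), 1.0),       # urgent and critical -> top priority
        (low_lax, 0.8),                       # urgent               -> high
        (min(high_lax, high_crit), 0.5),      # critical but relaxed -> medium
        (high_lax, 0.2),                      # plenty of slack      -> low
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(fuzzy_priority(laxity=2.0, criticality=0.9), 3))   # ~0.9
print(round(fuzzy_priority(laxity=18.0, criticality=0.3), 3))  # ~0.2
```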
ROBUS-2: A Fault-Tolerant Broadcast Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.
2005-01-01
The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. The ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, clock synchronization, and distributed diagnosis (group membership). The ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 is tolerant to internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. This version of the ROBUS is intended for laboratory experimentation and demonstrations of the capability to reintegrate failed nodes, dynamically update the communication schedule, and tolerate and recover from correlated transient faults.
Autonomous scheduling technology for Earth orbital missions
NASA Technical Reports Server (NTRS)
Srivastava, S.
1982-01-01
The development of a dynamic autonomous system (DYASS) of resources for the mission support of near-Earth NASA spacecraft is discussed, and the current NASA space data system is described from a functional perspective. The future (late 1980s and early 1990s) NASA space data system is discussed. The DYASS concept, autonomous process control, and the NASA space data system are introduced. Scheduling and related disciplines are surveyed, and DYASS as a scheduling problem is discussed. Artificial intelligence and knowledge representation are considered, as well as the NUDGE system and the I-Space system.
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments for the planning level. To obtain a policy, dynamic programing is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited resource situations and unpredictably dynamic environments. The importance of cooperation in the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operations. The importance of considering the system constraints and interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
Linear-parameter-varying gain-scheduled control of aerospace systems
NASA Astrophysics Data System (ADS)
Barker, Jeffrey Michael
The dynamics of many aerospace systems vary significantly as a function of flight condition. Robust control provides methods of guaranteeing performance and stability goals across flight conditions. In mu-synthesis, changes to the dynamical system are primarily treated as uncertainty; this method has been successfully applied to many control problems and is applied here to flutter control. More recently, two techniques for generating robust gain-scheduled controllers have been developed. Linear fractional transformation (LFT) gain-scheduled control is an extension of mu-synthesis in which the plant and controller are explicit functions of parameters measurable in real time. This LFT gain-scheduled control technique is applied to the Benchmark Active Control Technology (BACT) wing and compared with mu-synthesis control. Linear parameter-varying (LPV) gain-scheduled control is an extension of H-infinity control to parameter-varying systems. LPV gain-scheduled control directly incorporates bounds on the rate of change of the scheduling parameters and often reduces the conservatism inherent in LFT gain-scheduled control. Gain-scheduled LPV control of the BACT wing compares very favorably with the LFT controller. Gain-scheduled LPV controllers are generated for the lateral-directional and longitudinal axes of the Innovative Control Effectors (ICE) aircraft and implemented in nonlinear simulations and real-time piloted nonlinear simulations. Cooper-Harper and pilot-induced oscillation ratings were obtained for an initial design, a reference aircraft, and a redesign. Piloted simulation results for the initial LPV gain-scheduled control of the ICE aircraft are compared with results for a conventional fighter aircraft in discrete pitch and roll angle tracking tasks. The results for the redesigned controller are significantly better than those for both the previous LPV controller and the conventional aircraft.
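For reference, the plant form assumed in LPV gain scheduling can be written in standard textbook notation as follows (this is generic, not the specific BACT or ICE model):

```latex
% Linear parameter-varying plant with bounded, rate-limited scheduling parameters:
\begin{align}
  \dot{x}(t) &= A\!\bigl(\rho(t)\bigr)\,x(t) + B\!\bigl(\rho(t)\bigr)\,u(t), \\
  y(t)       &= C\!\bigl(\rho(t)\bigr)\,x(t) + D\!\bigl(\rho(t)\bigr)\,u(t), \\
  \rho(t)    &\in \mathcal{P}, \qquad |\dot{\rho}_i(t)| \le \nu_i ,
\end{align}
% where rho(t) is measured in real time (e.g. dynamic pressure or Mach number)
% and the controller gains are scheduled on the same parameter.
```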
16 CFR 1203.13 - Test schedule.
Code of Federal Regulations, 2012 CFR
2012-01-01
16 Commercial Practices, Part 1203 — STANDARD FOR BICYCLE HELMETS, § 1203.13 Test schedule. (a) Helmet sample 1 of the set of eight... environments, respectively) shall be tested in accordance with the dynamic retention system strength test at...
Scheduling Policies for an Antiterrorist Surveillance System
2008-06-27
times; for example, see Reiman and Wein [17] and Olsen [15]. For real-time scheduling problems involving impatient customers, see Gaver et al. [2...heavy traffic with throughput time constraints: Asymptotically optimal dynamic controls. Queueing Systems 39, 23–54. 30 [17] Reiman , M. I. and Wein
Learning to improve iterative repair scheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene
1992-01-01
This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL), which uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic; when these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach, we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with either of the two heuristics alone.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scheduling scientific applications in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in cloud systems, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiments are carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, suitable for scientific application task execution in the cloud computing environment, than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
On program restructuring, scheduling, and communication for parallel processor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polychronopoulos, Constantine D.
1986-08-01
This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm, and the performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
Expert system for on-board satellite scheduling and control
NASA Technical Reports Server (NTRS)
Barry, John M.; Sary, Charisse
1988-01-01
An expert system is described which Rockwell Satellite and Space Electronics Division (S&SED) is developing to dynamically schedule the allocation of on-board satellite resources and activities: the Satellite Controller. The resources to be scheduled include power, propellant, and recording tape; the activities controlled include satellite functions such as sensor checkout and operation. Scheduling these resources and activities is presently a labor-intensive and time-consuming ground operations task. Developing a schedule requires extensive knowledge of system and subsystem operations, operational constraints, and satellite design and configuration. This scheduling process requires highly trained experts and takes anywhere from several hours to several weeks to accomplish. The process is done through brute force, that is, examining cryptic mnemonic data off line to interpret the health and status of the satellite; schedules are then formulated either from practical operator experience or from heuristics, i.e., rules of thumb. Orbital operations must become more productive in the future to reduce life-cycle costs and decrease dependence on ground control. This reduction is required to increase the autonomy and survivability of future systems. The design of future satellites requires that the scheduling function be transferred from ground to on-board systems.
A Model and Algorithms For a Software Evolution Control System
1993-12-01
dynamic scheduling approaches can be found in [67]. Task scheduling can also be characterized as preemptive and nonpreemptive. A task is preemptive ... is NP-hard for both the preemptive and nonpreemptive cases [67] [84]. Scheduling nonpreemptive tasks with arbitrary ready times is NP-hard in both multiprocessor and
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions that are needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
Using Grid Benchmarks for Dynamic Scheduling of Grid Applications
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert
2003-01-01
Navigation, or dynamic scheduling, of applications on computational grids can be improved through the use of an application-specific characterization of grid resources. Current grid information systems provide a description of the resources but do not contain any application-specific information. We define a GridScape as the dynamic state of the grid resources. We measure the dynamic performance of these resources using grid benchmarks. Then we use the GridScape for automatic assignment of the tasks of a grid application to grid resources. The scalability of the system is achieved by limiting the navigation overhead to a few percent of the application resource requirements. Our task submission and assignment protocol guarantees that the navigation system does not cause grid congestion. On a synthetic data mining application we demonstrate that GridScape-based task assignment reduces the application turnaround time.
ERIC Educational Resources Information Center
Li, Wenhao
2011-01-01
Distributed workflow technology has been widely used in modern education and e-business systems. Distributed web applications have shown cross-domain and cooperative characteristics to meet the need of current distributed workflow applications. In this paper, the author proposes a dynamic and adaptive scheduling algorithm PCSA (Pre-Calculated…
Empirical results on scheduling and dynamic backtracking
NASA Technical Reports Server (NTRS)
Boddy, Mark S.; Goldman, Robert P.
1994-01-01
At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities, 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints, and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions, where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least that has appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions and plans for future work.
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. Incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over the scheduled recording. Novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings by accessing the memory. Gratings are circulated through a memory feed back loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. Module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of the feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of the Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.
DBMS as a Tool for Project Management
NASA Technical Reports Server (NTRS)
Linder, H.
1984-01-01
Scientific objectives of crustal dynamics are listed as well as the contents of the centralized data information system for the crustal dynamics project. The system provides for project observation schedules, gives project configuration control information and project site information.
Dynamic Modeling of Solar Dynamic Components and Systems
NASA Technical Reports Server (NTRS)
Hochstein, John I.; Korakianitis, T.
1992-01-01
The purpose of this grant was to support NASA in modeling efforts to predict the transient dynamic and thermodynamic response of the space station solar dynamic power generation system. In order to meet the initial schedule requirement of providing results in time to support installation of the system as part of the initial phase of space station, early efforts were executed with alacrity and often in parallel. Initially, methods to predict the transient response of a Rankine as well as a Brayton cycle were developed. Review of preliminary design concepts led NASA to select a regenerative gas-turbine cycle using a helium-xenon mixture as the working fluid and, from that point forward, the modeling effort focused exclusively on that system. Although initial project planning called for a three-year period of performance, revised NASA schedules moved system installation to later and later phases of station deployment. Eventually, NASA elected to halt development of the solar dynamic power generation system for space station and to reduce support for this project to two-thirds of the original level.
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment where random perturbation exists in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP) in which unexpected events occur randomly. The algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the number of rescheduling events under the defined objective and to keep the scheduling process stable and efficient.
Dynamic Modeling of ALS Systems
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.
Scheduling multicore workload on shared multipurpose clusters
NASA Astrophysics Data System (ADS)
Templon, J. A.; Acosta-Silva, C.; Flix Molina, J.; Forti, A. C.; Pérez-Calero Yzquierdo, A.; Starink, R.
2015-12-01
With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent, by implementing dynamically-sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper further includes example results from use of the system in production, as well as measurements on the dependence of performance (especially the ramp-up in throughput for multicore jobs) on node size and job size.
Continual planning and scheduling for managing patient tests in hospital laboratories.
Marinagi, C C; Spyropoulos, C D; Papatheodorou, C; Kokkotos, S
2000-10-01
Hospital laboratories perform examination tests upon patients, in order to assist medical diagnosis or therapy progress. Planning and scheduling patient requests for examination tests is a complicated problem because it concerns both minimization of patient stay in hospital and maximization of laboratory resources utilization. In the present paper, we propose an integrated patient-wise planning and scheduling system which supports the dynamic and continual nature of the problem. The proposed combination of multiagent and blackboard architecture allows the dynamic creation of agents that share a set of knowledge sources and a knowledge base to service patient test requests.
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven S.
1996-01-01
This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.
A human factors approach to range scheduling for satellite control
NASA Technical Reports Server (NTRS)
Wright, Cameron H. G.; Aitken, Donald J.
1991-01-01
Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.
Dynamic Routing for Delay-Tolerant Networking in Space Flight Operations
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2008-01-01
Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology composed of scheduled, bounded communication contacts in a network built on the Delay-Tolerant Networking (DTN) architecture. It is designed to support operations in a space network based on DTN, but it also could be used in terrestrial applications where operation according to a predefined schedule is preferable to opportunistic communication, as in a low-power sensor network. This paper will describe the operation of the CGR system and explain how it can enable data delivery over scheduled transmission opportunities, fully utilizing the available transmission capacity, without knowing the current state of any bundle protocol node (other than the local node itself) and without exhausting processing resources at any bundle router.
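A minimal Python sketch of the underlying idea, route search over a scheduled contact plan, is given below. It is not the flight CGR implementation (which also tracks contact volume, bundle expiry, and other metrics), and the contact tuple layout and example plan are invented for illustration.

```python
# Minimal sketch of routing over a scheduled contact plan, in the spirit of
# Contact Graph Routing: each contact is (from_node, to_node, start, end,
# one_way_light_time). We search for the earliest arrival at the destination.
import heapq

def earliest_arrival(contacts, source, dest, t0):
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue
        for frm, to, start, end, owlt in contacts:
            if frm != node:
                continue
            depart = max(t, start)          # wait until the contact opens
            if depart > end:
                continue                    # contact closes before we can use it
            arrive = depart + owlt
            if arrive < best.get(to, float("inf")):
                best[to] = arrive
                heapq.heappush(heap, (arrive, to))
    return None

plan = [("A", "B", 10, 40, 2), ("B", "C", 30, 60, 3), ("A", "C", 80, 90, 1)]
print(earliest_arrival(plan, "A", "C", t0=0))   # -> 33 via B, not 81 direct
```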
A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Taimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new architectures of manufacturing systems have been proposed to realize flexible control structures that can cope with dynamic changes in the volume and variety of products, as well as with unforeseen disruptions such as failures of manufacturing resources and interruptions by high-priority jobs. They are known as the autonomous distributed manufacturing system, the biological manufacturing system and the holonic manufacturing system. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in a previous report. However, problems remain from the viewpoint of optimizing the whole production schedule. New procedures are proposed in the present paper to select production schedules, aimed at generating effective production schedules in real time. The proposed methods enable the individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons is also proposed, which carries out coordination based on the effectiveness values of the individual holons.
Optimisation of assembly scheduling in VCIM systems using genetic algorithm
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-09-01
Assembly plays an important role in any production system as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to the global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modeled and then an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.
77 FR 3752 - Commission Information Collection Activities (FERC-725I); Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... the Bulk-Power System to system disturbances, including scheduled and unscheduled outages; requires each reliability coordinator to establish requirements for its area's dynamic disturbance recording... Retention--10.... 10 acquisition and installation of dynamic disturbance recorders. GO, TO, and RC to...
Sharing intelligence: Decision-making interactions between users and software in MAESTRO
NASA Technical Reports Server (NTRS)
Geoffroy, Amy L.; Gohring, John R.; Britt, Daniel L.
1991-01-01
By combining the best of automated and human decision-making in scheduling many advantages can accrue. The joint performance of the user and system is potentially much better than either alone. Features of the MAESTRO scheduling system serve to illustrate concepts of user/software cooperation. MAESTRO may be operated at a user-determinable and dynamic level of autonomy. Because the system allows so much flexibility in the allocation of decision-making responsibilities, and provides users with a wealth of information and other support for their own decision-making, better overall schedules may result.
Wave scheduling - Decentralized scheduling of task forces in multicomputers
NASA Technical Reports Server (NTRS)
Van Tilborg, A. M.; Wittie, L. D.
1984-01-01
Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
Dynamic Scheduling for Veterans Health Administration Patients using Geospatial Dynamic Overbooking.
Adams, Stephen; Scherer, William T; White, K Preston; Payne, Jason; Hernandez, Oved; Gerber, Mathew S; Whitehead, N Peter
2017-10-12
The Veterans Health Administration (VHA) is plagued by abnormally high no-show and cancellation rates that reduce the productivity and efficiency of its medical outpatient clinics. We address this issue by developing a dynamic scheduling system that utilizes mobile computing via geo-location data to estimate the likelihood of a patient arriving on time for a scheduled appointment. These likelihoods are used to update the clinic's schedule in real time. When a patient's arrival probability falls below a given threshold, the patient's appointment is canceled. This appointment is immediately reassigned to another patient drawn from a pool of patients who are actively seeking an appointment. The replacement patients are prioritized using their arrival probability. Real-world data were not available for this study, so synthetic patient data were generated to test the feasibility of the design. The method for predicting the arrival probability was verified on a real set of taxicab data. This study demonstrates that dynamic scheduling using geo-location data can reduce the number of unused appointments with minimal risk of double booking resulting from incorrect predictions. We acknowledge that there could be privacy concerns with regard to government possession of one's location and offer strategies for alleviating these concerns in our conclusion.
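The threshold-based cancel-and-reassign rule described above can be sketched as follows; the threshold value, data layout, and single-slot scope are assumptions for illustration and not the VHA system's implementation.

```python
# Hedged sketch of the reassignment rule (synthetic data, not the VHA system):
# if a booked patient's estimated arrival probability drops below a threshold,
# release the slot and give it to the standby patient with the highest
# arrival probability.

THRESHOLD = 0.30   # assumed cut-off

def update_slot(booked, standby, threshold=THRESHOLD):
    """booked: (patient_id, p_arrive); standby: list of (patient_id, p_arrive)."""
    pid, p = booked
    if p >= threshold or not standby:
        return booked, standby
    standby = sorted(standby, key=lambda x: x[1], reverse=True)
    replacement, rest = standby[0], standby[1:]
    return replacement, rest + [(pid, p)]   # the original patient rejoins the pool

slot, pool = update_slot(("p17", 0.12), [("p42", 0.55), ("p08", 0.91)])
print(slot)   # ('p08', 0.91)
```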
A novel dynamic wavelength bandwidth allocation scheme over OFDMA PONs
NASA Astrophysics Data System (ADS)
Yan, Bo; Guo, Wei; Jin, Yaohui; Hu, Weisheng
2011-12-01
With the rapid growth of Internet applications, supporting differentiated services and enlarging system capacity have become new tasks for next-generation access systems. In recent years, research in OFDMA Passive Optical Networks (PONs) has developed rapidly owing to their large capacity and scheduling flexibility. Although much work has been done on the hardware-layer obstacles for OFDMA PON, scheduling algorithms for OFDMA PON systems are still at an early stage of discussion. In order to support QoS on an OFDMA PON system, a novel dynamic wavelength bandwidth allocation (DWBA) algorithm is proposed in this paper. Per-stream QoS is supported in this algorithm. Through simulation, we show that our bandwidth allocation algorithm performs better in bandwidth utilization and differentiated service support.
A COTS-Based Attitude Dependent Contact Scheduling System
NASA Technical Reports Server (NTRS)
DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark
2006-01-01
The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science-gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission-specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request, which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.
Defense Science Board Task Force Report: The Role of Autonomy in DoD Systems
2012-07-01
ASD(R&E) and the Military Services should schedule periodic, on-site collaborations that bring together academia, government and not-for-profit labs...expressing UxV activities, increased problem solving, planning and scheduling capabilities to enable dynamic tasking of distributed UxVs and tools for...industrial, governmental and military. Manufacturing has long exploited planning for logistics and matching product demand to production schedules
Cross-Layer Adaptive Feedback Scheduling of Wireless Control Systems
Xia, Feng; Ma, Longhua; Peng, Chen; Sun, Youxian; Dong, Jinxiang
2008-01-01
There is a trend towards using wireless technologies in networked control systems. However, the adverse properties of the radio channels make it difficult to design and implement control systems in wireless environments. To attack the uncertainty in available communication resources in wireless control systems closed over WLAN, a cross-layer adaptive feedback scheduling (CLAFS) scheme is developed, which takes advantage of the co-design of control and wireless communications. By exploiting cross-layer design, CLAFS adjusts the sampling periods of control systems at the application layer based on information about deadline miss ratio and transmission rate from the physical layer. Within the framework of feedback scheduling, the control performance is maximized through controlling the deadline miss ratio. Key design parameters of the feedback scheduler are adapted to dynamic changes in the channel condition. An event-driven invocation mechanism for the feedback scheduler is also developed. Simulation results show that the proposed approach is efficient in dealing with channel capacity variations and noise interference, thus providing an enabling technology for control over WLAN. PMID:27879934
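A toy Python sketch of the feedback-scheduling idea follows; the gain, set-point, and period bounds are assumptions for illustration, not the CLAFS design. The sampling period is lengthened when the deadline miss ratio reported by the physical layer exceeds its set-point and shortened again when the channel recovers.

```python
# Illustrative feedback scheduler (assumed gains and set-point, not the CLAFS
# code): adjust the control loops' sampling period from the measured deadline
# miss ratio.

def adjust_period(period, miss_ratio, target=0.05, k=2.0,
                  p_min=0.01, p_max=0.20):
    """Integral-style update of the sampling period (seconds)."""
    period *= 1.0 + k * (miss_ratio - target)
    return min(max(period, p_min), p_max)

p = 0.02
for miss in [0.0, 0.15, 0.30, 0.10, 0.02]:   # miss ratios reported by the PHY layer
    p = adjust_period(p, miss)
    print(round(p, 4))
```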
Butt, Muhammad Arif; Akram, Muhammad
2016-01-01
We present a new intuitionistic fuzzy rule-based decision-making system based on intuitionistic fuzzy sets for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm inputs the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of all the processes in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
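The scheduler's outer loop can be sketched as below. The intuitionistic fuzzy inference engine itself is not reproduced; a crisp surrogate priority function with assumed weights stands in for it, so this is only the dispatch skeleton, not the paper's algorithm.

```python
# Hedged stand-in for the scheduler's outer loop: compute a dynamic priority
# from nice value and burst time, sort the ready queue on it, and dispatch the
# process with the maximum dp. The priority function is a toy crisp surrogate
# for the intuitionistic fuzzy inference step.

def dynamic_priority(nice, burst, nice_range=(-20, 19), burst_max=100.0):
    """Favour low nice values and short bursts (assumed equal weighting)."""
    lo, hi = nice_range
    niceness = 1.0 - (nice - lo) / (hi - lo)        # 1 = highest user priority
    shortness = 1.0 - min(burst, burst_max) / burst_max
    return 0.5 * niceness + 0.5 * shortness

ready_queue = [("A", 0, 80), ("B", -5, 20), ("C", 10, 5)]   # (pid, nice, burst)
ready_queue.sort(key=lambda p: dynamic_priority(p[1], p[2]), reverse=True)
print("dispatch:", ready_queue[0][0])
```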
NASA Astrophysics Data System (ADS)
Wang, Qian; Xue, Anke
2018-06-01
This paper proposes a robust control for the spacecraft rendezvous system that considers parameter uncertainties and unsymmetrical actuator saturation, based on a discrete gain scheduling approach. By a change of variables, we transform the unsymmetrical actuator saturation control problem into a symmetrical one. The main advantage of the proposed method is improving the dynamic performance of the closed-loop system with a region of attraction as large as possible. Using the Lyapunov approach and the scheduling technique, the existence conditions for an admissible controller are formulated in the form of linear matrix inequalities. A numerical simulation illustrates the effectiveness of the proposed method.
Improved Weather Forecasting for the Dynamic Scheduling System of the Green Bank Telescope
NASA Astrophysics Data System (ADS)
Henry, Kari; Maddalena, Ronald
2018-01-01
The Robert C. Byrd Green Bank Telescope (GBT) uses a software system that dynamically schedules observations based on models of vertical weather forecasts produced by the National Weather Service (NWS). The NWS provides hourly forecasted values for ~60 layers that extend into the stratosphere over the observatory. We use models, recommended by the Radiocommunication Sector of the International Telecommunications Union, to derive the absorption coefficient in each layer for each hour in the NWS forecasts and for all frequencies over which the GBT has receivers, 0.1 to 115 GHz. We apply radiative transfer models to derive the opacity and the atmospheric contributions to the system temperature, thereby deriving forecasts applicable to scheduling radio observations for up to 10 days into the future. Additionally, the algorithms embedded in the data processing pipeline use historical values of the forecasted opacity to calibrate observations. Until recently, we have concentrated on predictions for high-frequency (> 15 GHz) observing, as these observations need to be scheduled carefully around bad weather. We have been using simple models for the contribution of rain and clouds since we only schedule low-frequency observations under these conditions. In this project, we wanted to improve the scheduling of the GBT and data calibration at low frequencies by deriving better algorithms for clouds and rain. To address the limitation at low frequency, the observatory acquired a Radiometrics Corporation MP-1500A radiometer, which operates in 27 channels between 22 and 30 GHz. By comparing 16 months of measurements from the radiometer against forecasted system temperatures, we have confirmed that forecasted system temperatures are indistinguishable from those measured under good weather conditions. Small miscalibrations of the radiometer data dominate the comparison. By using recalibrated radiometer measurements, we looked at bad-weather days to derive better models for forecasting the contribution of clouds to the opacity and system temperatures. We will show how these revised algorithms should help us improve both data calibration and the accuracy of scheduling low-frequency observations.
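For orientation, the step from a forecast opacity to a predicted system temperature can be sketched with the standard single-slab radiative-transfer relation; the receiver and atmospheric temperatures below are illustrative values, and the observatory's pipeline integrates over the forecast layers rather than using a single slab.

```python
# Sketch of the standard slab relation used when turning a forecast zenith
# opacity into a predicted system temperature (illustrative values only):
#   Tsys ~ Trx + Tatm * (1 - exp(-tau * A)) + Tcmb * exp(-tau * A)
# where A is the airmass toward the source.
import math

def predicted_tsys(tau_zenith, elevation_deg, t_rx=25.0, t_atm=260.0, t_cmb=2.73):
    airmass = 1.0 / math.sin(math.radians(elevation_deg))   # plane-parallel approximation
    atten = math.exp(-tau_zenith * airmass)
    return t_rx + t_atm * (1.0 - atten) + t_cmb * atten

for tau in (0.01, 0.05, 0.20):          # forecast zenith opacities
    print(tau, round(predicted_tsys(tau, elevation_deg=45.0), 1))
```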
NASA Astrophysics Data System (ADS)
Wu, NaiQi; Zhu, MengChu; Bai, LiPing; Li, ZhiWu
2016-07-01
In some refineries, storage tanks are located at two different sites, one for low-fusion-point crude oil and the other for high-fusion-point crude oil. Two pipelines are used to transport the different oil types. Due to the constraints resulting from high-fusion-point oil transportation, it is challenging to schedule such a system. This work studies the scheduling problem from a control-theoretic perspective. It proposes a hybrid Petri net method to model the system. It then finds schedulability conditions by analysing the dynamic behaviour of the net model. Next, it proposes an efficient scheduling method to minimize the cost of high-fusion-point oil transportation. Finally, it gives a complex industrial case study to show its application.
Ant Colony Optimization for Mapping, Scheduling and Placing in Reconfigurable Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrandi, Fabrizio; Lanzi, Pier Luca; Pilato, Christian
Modern heterogeneous embedded platforms, composed of several digital signal, application-specific and general-purpose processors, also include reconfigurable devices supporting partial dynamic reconfiguration. These devices can change the behavior of some of their parts during execution, allowing hardware acceleration of more sections of the applications. Nevertheless, partial dynamic reconfiguration imposes severe overheads in terms of latency. For such systems, a critical part of the design phase is deciding on which processing elements (mapping) and when (scheduling) to execute a task, but also how to place tasks on the reconfigurable device to guarantee the most efficient reuse of the programmable logic. In this paper we propose an algorithm based on Ant Colony Optimization (ACO) that simultaneously executes the scheduling, the mapping and the linear placing of tasks, hiding reconfiguration overheads through prefetching. Our heuristic gradually constructs solutions and then searches around the best ones, cutting out non-promising areas of the design space. We show how to consider the partial dynamic reconfiguration constraints in the scheduling, placing and mapping problems and compare our formulation to other heuristics that address the same problems. We demonstrate that our proposal is more general and robust, and finds better solutions (16.5% on average) than competing approaches.
NASA Astrophysics Data System (ADS)
Ernawati; Carnia, E.; Supriatna, A. K.
2018-03-01
Eigenvalues and eigenvectors in max-plus algebra play the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for understanding the dynamics of a system, such as in train system scheduling, production system scheduling and the scheduling of learning activities in moving classes. In the translation of proteins, in which the ribosome moves uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and the density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for ribosome dynamics during mRNA translation by using the Kleene star algorithm, in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix B_λ^⊗n of the model. Among the important properties, it always has the same elements in the first column for n = 1, 2, … if the eigenvalue is the initiation time, λ = τ_in, and that column is the eigenvector of the model corresponding to λ.
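The Kleene-star construction of a max-plus eigenvector can be illustrated on a small generic matrix; the sketch below is not the paper's ribosome model, and the 2x2 example and its eigenvalue (the maximum cycle mean, assumed known here) are chosen only to keep the verification short.

```python
# Self-contained illustration of the eigenvector machinery: max-plus matrix
# product, the Kleene star of the lambda-normalized matrix, and a check that a
# column of the star is an eigenvector of the original matrix.
NEG_INF = float("-inf")

def mp_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mp_eye(n):
    return [[0.0 if i == j else NEG_INF for j in range(n)] for i in range(n)]

def mp_add(A, B):      # entrywise max (max-plus addition)
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kleene_star(A):
    """E + A + A^2 + ... + A^(n-1); valid when all cycle weights are <= 0."""
    n = len(A)
    star, power = mp_eye(n), mp_eye(n)
    for _ in range(n - 1):
        power = mp_matmul(power, A)
        star = mp_add(star, power)
    return star

A = [[1.0, 5.0], [2.0, 3.0]]
lam = 3.5                                   # maximum cycle mean of A
A_norm = [[a - lam for a in row] for row in A]
star = kleene_star(A_norm)
v = [row[0] for row in star]                # column of a critical node
Av = [max(A[i][j] + v[j] for j in range(2)) for i in range(2)]
print(v, Av, [lam + x for x in v])          # A (x) v equals lambda (x) v
```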
Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro
2018-02-01
To optimize monoclonal antibody (mAb) production in Chinese hamster ovary cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components whose kinetics depend on the pH value. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically. We validated this optimal pH-shift schedule experimentally, and mAb production increased by approximately 40% with this schedule. This study suggests that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable for optimizing any pH-shift schedule for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy
Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma
2013-01-01
Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments. PMID:23690742
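The rolling-horizon loop with mixed periodic and event triggering can be sketched as follows; the integer programming model and heuristics are abstracted behind a toy per-interval earliest-deadline-first stand-in, and the task data are invented, so this is only the control flow, not the paper's method.

```python
# Schematic rolling-horizon scheduler: re-plan periodically and at triggering
# events (here, new task arrivals), solving a static subproblem per interval.

def rolling_horizon(tasks, horizon_end, period, events, solve_interval):
    t, schedule, pending = 0.0, [], list(tasks)
    while t < horizon_end and pending:
        window = [task for task in pending if task["arrival"] <= t]
        placed = solve_interval(window, t, t + period)       # static subproblem
        schedule += placed
        placed_ids = {pid for pid, _ in placed}
        pending = [task for task in pending if task["id"] not in placed_ids]
        next_event = min((e for e in events if e > t), default=horizon_end)
        t = min(t + period, next_event)          # mixed periodic/event triggering
    return schedule

def solve_interval(window, start, end):
    """Toy stand-in for the static per-interval scheduler: earliest deadline first."""
    t, out = start, []
    for task in sorted(window, key=lambda x: x["deadline"]):
        if t + task["duration"] <= min(end, task["deadline"]):
            out.append((task["id"], t))
            t += task["duration"]
    return out

tasks = [{"id": 1, "arrival": 0, "deadline": 8, "duration": 3},
         {"id": 2, "arrival": 4, "deadline": 12, "duration": 2},
         {"id": 3, "arrival": 9, "deadline": 20, "duration": 4}]
print(rolling_horizon(tasks, horizon_end=20, period=5,
                      events=[4, 9], solve_interval=solve_interval))
```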
Re-scheduling as a tool for the power management on board a spacecraft
NASA Technical Reports Server (NTRS)
Albasheer, Omar; Momoh, James A.
1995-01-01
The scheduling of events on board a spacecraft is based on forecast energy levels. The real-time values of energy may not coincide with the forecast values; consequently, a dynamic revision of the power allocation is needed. Re-scheduling is also needed for other reasons on board a spacecraft, such as the addition of a new event that must be scheduled, or the failure of an event due to any of many contingencies. This need for re-scheduling is very important to the survivability of the spacecraft. In this presentation, a re-scheduling tool is presented as part of an overall scheme for power management on board a spacecraft from the energy allocation point of view. The overall scheme is based on the optimal use of the energy available on board a spacecraft using expert systems combined with linear optimization techniques. The system is able to schedule the maximum number of events utilizing most of the available energy. The outcome is more events scheduled to share the operating cost of the spacecraft. The system is also able to re-schedule in case of a contingency with minimal time and minimal disturbance of the original schedule. The end product is a fully integrated planning system capable of producing the right decisions in a short time with less human error. The overall system is presented with the re-scheduling algorithm discussed in detail; the tests and results are then presented for validation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Allan Ray
1987-05-01
Increases in high-speed hardware have mandated studies in software techniques to exploit the parallel capabilities. This thesis examines the effects a run-time scheduler has on a multiprocessor. The model consists of directed, acyclic graphs, generated from serial FORTRAN benchmark programs by the parallel compiler Parafrase. A multitasked, multiprogrammed environment is created. Dependencies are generated by the compiler. Tasks are bidimensional, i.e., they may specify both time and processor requests. Processor requests may be folded into execution time by the scheduler. The graphs may arrive at arbitrary time intervals. The general case is NP-hard; thus, a variety of heuristics are examined by a simulator. Multiprogramming demonstrates a greater need for a run-time scheduler than does monoprogramming for a variety of reasons, e.g., greater stress on the processors, a larger number of independent control paths, more variety in the task parameters, etc. The dynamic critical path series of algorithms perform well. Dynamic critical volume did not add much. Unfortunately, dynamic critical path maximizes turnaround time as well as throughput. Two schedulers are presented which balance throughput and turnaround time. The first requires classification of jobs by type; the second requires selection of a ratio value which is dependent upon system parameters. 45 refs., 19 figs., 20 tabs.
NASA Technical Reports Server (NTRS)
Wong, Gregory L.; Denery, Dallas (Technical Monitor)
2000-01-01
The Dynamic Planner (DP) has been designed, implemented, and integrated into the Center-TRACON Automation System (CTAS) to assist Traffic Management Coordinators (TMCs), in real time, with the task of planning and scheduling arrival traffic approximately 35 to 200 nautical miles from the destination airport. The TMC may input to the DP a series of current and future scheduling constraints that reflect the operational and environmental conditions of the airspace. Under these constraints, the DP uses flight plans, track updates, and Estimated Time of Arrival (ETA) predictions to calculate optimal runway assignments and arrival schedules that help ensure an orderly, efficient, and conflict-free flow of traffic into the terminal area. These runway assignments and schedules can be shown directly to controllers or they can be used by other CTAS tools to generate advisories to the controllers. Additionally, the TMC and controllers may override the decisions made by the DP for tactical considerations. The DP will adapt its computations to accommodate these manual inputs.
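A much-simplified sketch of runway assignment and sequencing from ETAs follows; the separation value, runway names, and greedy first-come-first-served strategy are assumptions for illustration, not the DP's optimization logic.

```python
# Simplified arrival sequencing sketch: assign each flight to the runway where
# it can be scheduled earliest, honouring a minimum in-trail separation.

SEPARATION = 90.0   # seconds, assumed uniform separation requirement

def schedule_arrivals(etas, runways):
    last_slot = {r: float("-inf") for r in runways}
    plan = []
    for flight, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        best = min(runways, key=lambda r: max(eta, last_slot[r] + SEPARATION))
        sta = max(eta, last_slot[best] + SEPARATION)     # scheduled time of arrival
        last_slot[best] = sta
        plan.append((flight, best, sta))
    return plan

etas = {"AAL12": 100.0, "UAL7": 130.0, "DAL3": 150.0, "SWA9": 160.0}
for flight, runway, sta in schedule_arrivals(etas, ["26R", "26L"]):
    print(flight, runway, sta)
```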
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin
2016-02-01
The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence techniques that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of DST components. This paper reports the generation of look-up tables for various scenarios as a part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due-date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
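The look-up-table generation step can be illustrated on a single machine with a handful of invented jobs; the study itself uses a full discrete-event job shop simulation and more rules, so the sketch below only shows how SPT, EDD and FCFS would be compared on the three performance measures.

```python
# Illustrative single-machine comparison of dispatching rules (job data are
# invented): evaluate SPT, EDD and FCFS and report mean flow time, mean
# tardiness and mean lateness for each rule.

jobs = [  # (id, arrival, processing_time, due_date) -- assumed values
    (1, 0, 5, 12), (2, 1, 3, 6), (3, 2, 8, 25), (4, 3, 2, 9),
]

RULES = {
    "SPT":  lambda q: min(q, key=lambda j: j[2]),   # shortest processing time
    "EDD":  lambda q: min(q, key=lambda j: j[3]),   # earliest due date
    "FCFS": lambda q: min(q, key=lambda j: j[1]),   # first come, first served
}

def simulate(rule):
    t, queue, done = 0, [], []
    pending = sorted(jobs, key=lambda j: j[1])
    while pending or queue:
        queue += [j for j in pending if j[1] <= t]
        pending = [j for j in pending if j[1] > t]
        if not queue:
            t = pending[0][1]        # idle until the next arrival
            continue
        j = rule(queue)
        queue.remove(j)
        t += j[2]
        done.append((j, t))
    n = len(done)
    flow = sum(t_f - j[1] for j, t_f in done) / n
    tard = sum(max(0, t_f - j[3]) for j, t_f in done) / n
    late = sum(t_f - j[3] for j, t_f in done) / n
    return flow, tard, late

for name, rule in RULES.items():
    print(name, simulate(rule))
```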
Linear parameter varying representations for nonlinear control design
NASA Astrophysics Data System (ADS)
Carter, Lance Huntington
Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed is investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.
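As a toy illustration of run-time gain scheduling in the spirit described above (the gain tables, angle-of-attack grid, and two-gain structure are invented, not the mu-synthesis autopilot design), the sketch below interpolates controller gains on the measured scheduling variable.

```python
# Toy gain-scheduling sketch: gains designed at a few operating points are
# linearly interpolated on the measured scheduling parameter at run time.
import bisect

alphas = [0.0, 5.0, 10.0, 15.0]                              # angle-of-attack grid (deg)
gains  = [[1.2, 0.4], [1.5, 0.5], [2.1, 0.7], [2.8, 0.9]]    # [Kp, Kd] per design point

def scheduled_gain(alpha):
    a = min(max(alpha, alphas[0]), alphas[-1])               # clamp to the grid
    i = max(1, bisect.bisect_left(alphas, a))
    i = min(i, len(alphas) - 1)
    w = (a - alphas[i - 1]) / (alphas[i] - alphas[i - 1])
    return [(1 - w) * g0 + w * g1 for g0, g1 in zip(gains[i - 1], gains[i])]

print(scheduled_gain(7.5))    # -> gains halfway between the 5 and 10 deg designs
```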
16 CFR 1203.13 - Test schedule.
Code of Federal Regulations, 2010 CFR
2010-01-01
... environments, respectively) shall be tested in accordance with the dynamic retention system strength test at... Peripheral vision § 1203.15 Positional stability § 1203.16 Retention system strength § 1203.17 Impact tests...
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Three-dimensional user interfaces for scientific visualization
NASA Technical Reports Server (NTRS)
Vandam, Andries
1995-01-01
The main goal of this project is to develop novel and productive user interface techniques for creating and managing visualizations of computational fluid dynamics (CFD) datasets. We have implemented an application framework in which we can build user interfaces for visualizing computational fluid dynamics datasets. This UI technology allows users to interactively place visualization probes in a dataset and modify some of their parameters. We have also implemented a time-critical scheduling system which strives to maintain a constant frame rate regardless of the number of visualization techniques. In the past year, we have published parts of this research at two conferences: the research annotation system at Visualization 1994, and the 3D user interface at UIST 1994. The real-time scheduling system has been submitted to the SIGGRAPH 1995 conference. Copies of these documents are included with this report.
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
NASA Technical Reports Server (NTRS)
Mizell, Carolyn Barrett; Malone, Linda
2007-01-01
The development process for a large software development project is very complex and dependent on many variables that are dynamic and interrelated. Factors such as size, productivity and defect injection rates will have substantial impact on the project in terms of cost and schedule. These factors can be affected by the intricacies of the process itself as well as human behavior because the process is very labor intensive. The complex nature of the development process can be investigated with software development process models that utilize discrete event simulation to analyze the effects of process changes. The organizational environment and its effects on the workforce can be analyzed with system dynamics that utilizes continuous simulation. Each has unique strengths and the benefits of both types can be exploited by combining a system dynamics model and a discrete event process model. This paper will demonstrate how the two types of models can be combined to investigate the impacts of human resource interactions on productivity and ultimately on cost and schedule.
AWAS: A dynamic work scheduling system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y.; Hao, J.; Kocur, G.
1994-12-31
The Automated Work Administration System (AWAS) is an automated scheduling system developed at GTE. A typical work center has 1000 employees and processes 4000 jobs each day. Jobs are geographically distributed within the service area of the work center, require different skills, and have to be done within specified time windows. Each job can take anywhere from 12 minutes to several hours to complete. Each employee can have his/her individual schedule, skill, or working area. The jobs can enter and leave the system at any time. The employees dial up to the system to request their next job at the beginning of a day or after a job is done. The system is able to respond to the changes dynamically and produce close to optimum solutions in real time. We formulate the real-world problem as a minimum cost network flow problem. Both employees and jobs are formulated as nodes. Relationships between jobs and employees are formulated as arcs, and working hours contributed by employees and consumed by jobs are formulated as flow. The goal is to minimize missed commitments. We solve the problem with the successive shortest path algorithm. Combined with pre-processing and post-processing, the system produces reasonable outputs and the response time is very good.
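As a rough illustration of the network-flow formulation sketched above (employees and jobs as nodes, assignments as arcs, hours as flow, missed commitments penalized), the following hedged example builds a tiny instance and solves it with a min-cost flow routine. The node names, costs and capacities are invented, and networkx's network_simplex is used here in place of the successive shortest path solver mentioned in the abstract.

```python
# Tiny min-cost flow sketch: two employees supply working hours, three jobs
# consume them; a dummy "unfilled" source covers any shortfall at a steep
# penalty, so minimizing cost also minimizes missed commitments.
# All values are illustrative only.

import networkx as nx

G = nx.DiGraph()

# Employees supply working hours (negative demand), jobs consume them.
G.add_node("emp_A", demand=-6)
G.add_node("emp_B", demand=-6)
G.add_node("job_1", demand=4)
G.add_node("job_2", demand=6)
G.add_node("job_3", demand=6)

# Dummy source balances the network; its arcs are only used when unavoidable.
G.add_node("unfilled", demand=-4)

# Assignment arcs: weight models travel/skill cost, capacity limits hours.
G.add_edge("emp_A", "job_1", weight=1, capacity=6)
G.add_edge("emp_A", "job_2", weight=3, capacity=6)
G.add_edge("emp_B", "job_2", weight=2, capacity=6)
G.add_edge("emp_B", "job_3", weight=1, capacity=6)
for job in ("job_1", "job_2", "job_3"):
    G.add_edge("unfilled", job, weight=100, capacity=16)

cost, flow = nx.network_simplex(G)
print("total cost:", cost)
for emp in ("emp_A", "emp_B"):
    print(emp, "->", flow[emp])
```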
Development of a Dynamic Time Sharing Scheduled Environment Final Report CRADA No. TC-824-94E
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jette, M.; Caliga, D.
Massively parallel computers, such as the Cray T3D, have historically supported resource sharing solely with space sharing. In that method, multiple problems are solved by executing them on distinct processors. This project developed a dynamic time- and space-sharing scheduler to achieve greater interactivity and throughput than could be achieved with space sharing alone. CRI and LLNL worked together on the design, testing, and review aspects of this project. There were separate software deliverables. CRI implemented a general purpose scheduling system as per the design specifications. LLNL ported the local gang scheduler software to the LLNL Cray T3D. In this approach, processors are allocated simultaneously to all components of a parallel program (in a “gang”). Program execution is preempted as needed to provide for interactivity. Programs are also relocated to different processors as needed to efficiently pack the computer’s torus of processors. In phase one, CRI developed an interface specification after discussions with LLNL for system-level software supporting a time- and space-sharing environment on the LLNL T3D. The two parties also discussed interface specifications for external control tools (such as scheduling policy tools and system administration tools) and applications programs. CRI assumed responsibility for the writing and implementation of all the necessary system software in this phase. In phase two, CRI implemented job-rolling on the Cray T3D, a mechanism for preempting a program, saving its state to disk, and later restoring its state to memory for continued execution. LLNL ported its gang scheduler to the LLNL T3D utilizing the CRI interface implemented in phases one and two. During phase three, the functionality and effectiveness of the LLNL gang scheduler was assessed to provide input to CRI time- and space-sharing efforts. CRI will utilize this information in the development of general schedulers suitable for other sites and future architectures.
The effect of dynamic scheduling and routing in a solid waste management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johansson, Ola M.
2006-07-01
Solid waste collection and hauling account for the greater part of the total cost in modern solid waste management systems. In a recent initiative, 3300 Swedish recycling containers have been fitted with level sensors and wireless communication equipment, thereby giving waste collection operators access to real-time information on the status of each container. In this study, analytical modeling and discrete-event simulation have been used to evaluate different scheduling and routing policies utilizing the real-time data. In addition to the general models developed, an empirical simulation study has been performed on the downtown recycling station system in Malmoe, Sweden. From the study, it can be concluded that dynamic scheduling and routing policies exist that have lower operating costs, shorter collection and hauling distances, and reduced labor hours compared to the static policy with fixed routes and pre-determined pick-up frequencies employed by many waste collection operators today. The results of the analytical model and the simulation models are coherent, and consistent with experiences of the waste collection operators.
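A minimal sketch of the kind of policy comparison described above: containers fill at random rates, a static policy empties every container on a fixed cycle, and a sensor-driven dynamic policy empties only containers above a fill threshold. All rates, thresholds and horizons are invented for illustration and do not reproduce the Malmoe study.

```python
# Toy comparison of static (fixed-cycle) vs. dynamic (sensor-triggered)
# collection policies for recycling containers. Numbers are illustrative.

import random

random.seed(1)
N_CONTAINERS, DAYS = 20, 60
CYCLE = 7            # static policy: empty every container weekly
THRESHOLD = 0.8      # dynamic policy: empty when >= 80% full

def simulate(dynamic):
    fill = [0.0] * N_CONTAINERS
    pickups = overflows = 0
    for day in range(DAYS):
        for i in range(N_CONTAINERS):
            fill[i] += random.uniform(0.05, 0.2)     # daily waste inflow
            if fill[i] > 1.0:
                overflows += 1
                fill[i] = 1.0
        if dynamic:
            due = [i for i in range(N_CONTAINERS) if fill[i] >= THRESHOLD]
        else:
            due = list(range(N_CONTAINERS)) if day % CYCLE == 0 else []
        for i in due:
            fill[i] = 0.0
            pickups += 1
    return pickups, overflows

print("static  (pickups, overflows):", simulate(dynamic=False))
print("dynamic (pickups, overflows):", simulate(dynamic=True))
```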
Integrating light rail transit into traditional bus systems
DOT National Transportation Integrated Search
2007-07-01
This document identifies those dynamics that facilitate a city's addition of light rail as a successful component of its urban system, with success deemed to be opening on schedule with minimal start-up issues. The study examines several new system...
2 kWe Solar Dynamic Ground Test Demonstration Project. Volume 1; Executive Summary
NASA Technical Reports Server (NTRS)
Alexander, Dennis
1997-01-01
The Solar Dynamic Ground Test Demonstration (SDGTD) successfully demonstrated a solar-powered closed Brayton cycle system in a relevant space thermal environment. In addition to meeting technical requirements, the project was completed 4 months ahead of schedule and under budget. The following conclusions can be supported: 1. The component technology for solar dynamic closed Brayton cycle technology has clearly been demonstrated. 2. The thermal, optical, control, and electrical integration aspects of systems integration have also been successfully demonstrated. Physical integration aspects were not attempted as these tend to be driven primarily by mission-specific requirements. 3. System efficiency of greater than 15 percent (all losses fully accounted for) was demonstrated using equipment and designs which were not optimized. Some preexisting hardware was used to minimize cost and schedule. 4. Power generation of 2 kWe was demonstrated. 5. A NASA/industry team was developed that successfully worked together to accomplish project goals. The material presented in this report will show that the technology necessary to design and fabricate solar dynamic electrical power systems for space has been successfully developed and demonstrated. The data will further show that achieved results compare well with pretest predictions. The next step in the development of solar dynamic space power will be a flight test.
Integration of domain and resource-based reasoning for real-time control in dynamic environments
NASA Technical Reports Server (NTRS)
Morgan, Keith; Whitebread, Kenneth R.; Kendus, Michael; Cromarty, Andrew S.
1993-01-01
A real-time software controller that successfully integrates domain-based and resource-based control reasoning to perform task execution in a dynamically changing environment is described. The design of the controller is based on the concept of partitioning the process to be controlled into a set of tasks, each of which achieves some process goal. It is assumed that, in general, there are multiple ways (tasks) to achieve a goal. The controller dynamically determines current goals and their current criticality, choosing and scheduling tasks to achieve those goals in the time available. It incorporates rule-based goal reasoning, a TMS-based criticality propagation mechanism, and a real-time scheduler. The controller has been used to build a knowledge-based situation assessment system that formed a major component of a real-time, distributed, cooperative problem solving system built under DARPA contract. It is also being employed in other applications now in progress.
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
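The information exchange between the MCC and the BEMSs can be sketched with a simple price-coordination loop: the MCC adjusts a price according to the gap between total consumption and available capacity, and each BEMS responds by trading electricity cost against discomfort. The quadratic comfort model, step size and all constants below are assumptions for illustration, not the paper's optimization model.

```python
# Hedged sketch of price/consumption coordination between a microgrid
# central controller (MCC) and building energy managers (BEMS).
# Each BEMS trades electricity cost against discomfort (deviation from a
# preferred consumption level); the MCC nudges the price until the total
# consumption matches the available capacity.

def bems_response(price, preferred, comfort_weight):
    # minimize price*x + comfort_weight*(x - preferred)^2  ->  closed form
    x = preferred - price / (2.0 * comfort_weight)
    return max(x, 0.0)

def coordinate(preferred_loads, comfort_weights, capacity,
               price=0.1, step=0.01, iterations=200):
    for _ in range(iterations):
        demands = [bems_response(price, p, w)
                   for p, w in zip(preferred_loads, comfort_weights)]
        total = sum(demands)
        price += step * (total - capacity)   # raise price if over capacity
        price = max(price, 0.0)
    return price, demands

if __name__ == "__main__":
    price, demands = coordinate(
        preferred_loads=[10.0, 8.0, 12.0],   # kW each building would like
        comfort_weights=[0.5, 0.8, 0.4],
        capacity=24.0)                        # kW available in the microgrid
    print("clearing price:", round(price, 3))
    print("per-building demand:", [round(d, 2) for d in demands])
```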
The GBT Dynamic Scheduling System: Development and Testing
NASA Astrophysics Data System (ADS)
McCarty, M.; Clark, M.; Marganian, P.; O'Neil, K.; Shelton, A.; Sessoms, E.
2009-09-01
During the summer trimester of 2008, all observations on the Robert C. Byrd Green Bank Telescope (GBT) were scheduled using the new Dynamic Scheduling System (DSS). Beta testing exercised the policies, algorithms, and software developed for the DSS project. Since observers are located all over the world, the DSS was implemented as a web application. Technologies such as iCalendar, Really Simple Syndication (RSS) feeds, email, and instant messaging are used to transfer as much or as little information to observers as they request. We discuss the software engineering challenges leading to our implementation such as information distribution and building rich user interfaces in the web browser. We also relate our adaptation of agile development practices to design and develop the DSS. Additionally, we describe handling differences in expected versus actual initial conditions in the pool of project proposals for the 08B trimester. We then identify lessons learned from beta testing and present statistics on how the DSS was used during the trimester.
Task path planning, scheduling and learning for free-ranging robot systems
NASA Technical Reports Server (NTRS)
Wakefield, G. Steve
1987-01-01
The development of robotics applications for space operations is often restricted by the limited movement available to guided robots. Free ranging robots can offer greater flexibility than physically guided robots in these applications. Presented here is an object oriented approach to path planning and task scheduling for free-ranging robots that allows the dynamic determination of paths based on the current environment. The system also provides task learning for repetitive jobs. This approach provides a basis for the design of free-ranging robot systems which are adaptable to various environments and tasks.
IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications
NASA Technical Reports Server (NTRS)
Plaunt, Christian; Rajan, Kanna
2004-01-01
The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make a more optimal utilization of bandwidth than the current circuit switched based approach.
Liu, Shichao; Liu, Xiaoping P; El Saddik, Abdulmotaleb
2014-03-01
In this paper, we investigate the modeling and distributed control problems for load frequency control (LFC) in a smart grid. In contrast with existing works, we consider more practical and realistic scenarios, where the communication topology of the smart grid changes because of either link failures or packet losses. These topology changes are modeled as a time-varying communication topology matrix. By using this matrix, a new closed-loop power system model is proposed to integrate the communication topology changes into the dynamics of a physical power system. The globally asymptotical stability of this closed-loop power system is analyzed. A distributed gain scheduling LFC strategy is proposed to compensate for the potential degradation of dynamic performance (mean square errors of state vectors) of the power system under communication topology changes. In comparison to conventional centralized control approaches, the proposed method can improve the robustness of the smart grid to variations of the communication network as well as reduce the computation load. Simulation results show that the proposed distributed gain scheduling approach is capable of improving the robustness of the smart grid to communication topology changes.
NASA Astrophysics Data System (ADS)
Iles, E. J.; McCallum, L.; Lovell, J. E. J.; McCallum, J. N.
2018-02-01
As we move into the next era of geodetic VLBI, the scheduling process is one focus for improvement in terms of increased flexibility and the ability to react with changing conditions. A range of simulations were conducted to ascertain the impact of scheduling on geodetic results such as Earth Orientation Parameters (EOPs) and station coordinates. The potential capabilities of new automated scheduling modes were also simulated, using the so-called 'dynamic scheduling' technique. The primary aim was to improve efficiency for both cost and time without losing geodetic precision, particularly to maximise the uses of the Australian AuScope VLBI array. We show that short breaks in observation will not significantly degrade the results of a typical 24 h experiment, whereas simply shortening observing time degrades precision exponentially. We also confirm the new automated, dynamic scheduling mode is capable of producing the same standard of result as a traditional schedule, with close to real-time flexibility. Further, it is possible to use the dynamic scheduler to augment the 3 station Australian AuScope array and thereby attain EOPs of the current global precision with only intermittent contribution from 2 additional stations. We thus confirm automated, dynamic scheduling bears great potential for flexibility and automation in line with aims for future continuous VLBI operations.
Li, Xiangyu; Xie, Nijie; Tian, Xinyue
2017-01-01
This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications has the characteristic of data-dependent behavior, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work and make more than 99.9% use of the power budget. PMID:28208730
Optimizing Mars Airplane Trajectory with the Application Navigation System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Riley, Derek
2004-01-01
Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces time in optimization of the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. ANS then uses a scheduler to dynamically assign ready tasks to machine resources, and the GridScape for tracking available resources and forecasting completion times of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
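The shortest-path step mentioned above can be sketched as follows: candidate trajectory segments form a weighted graph whose edge costs would come from the CFD evaluations, and a standard Dijkstra search picks the cheapest path. The graph and costs below are invented; this is not the ANS or Cart3D code.

```python
# Minimal Dijkstra sketch for picking a minimum-cost trajectory through a
# graph of candidate waypoints. Edge costs stand in for the CFD-evaluated
# segment costs; the graph below is purely illustrative.

import heapq

def shortest_path(graph, start, goal):
    """graph: dict node -> list of (neighbor, cost)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

graph = {
    "launch": [("wp1", 3.2), ("wp2", 2.5)],
    "wp1": [("wp3", 1.8)],
    "wp2": [("wp3", 2.9), ("target", 5.0)],
    "wp3": [("target", 1.1)],
}
print(shortest_path(graph, "launch", "target"))
```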
Enabling Autonomous Rover Science through Dynamic Planning and Scheduling
NASA Technical Reports Server (NTRS)
Estlin, Tara A.; Gaines, Daniel; Chouinard, Caroline; Fisher, Forest; Castano, Rebecca; Judd, Michele; Nesnas, Issa
2005-01-01
This paper describes how dynamic planning and scheduling techniques can be used onboard a rover to autonomously adjust rover activities in support of science goals. These goals could be identified by scientists on the ground or could be identified by onboard data-analysis software. Several different types of dynamic decisions are described, including the handling of opportunistic science goals identified during rover traverses, preserving high priority science targets when resources, such as power, are unexpectedly over-subscribed, and dynamically adding additional, ground-specified science targets when rover actions are executed more quickly than expected. After describing our specific system approach, we discuss some of the particular challenges we have examined to support autonomous rover decision-making. These include interaction with rover navigation and path-planning software and handling large amounts of uncertainty in state and resource estimations.
Facilitating preemptive hardware system design using partial reconfiguration techniques.
Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos
2014-01-01
In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and then force launching a reconfiguration process for high-priority task implementation. If the asynchronous event is previously scheduled, an explicit activation of the reconfiguration process is performed. If the event cannot be previously programmed, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including difficulties in organizing effective interaction between the agents located at the nodes of the system, configuring each node of the system to perform a certain task, distributing the available information and computational resources of the system effectively, and controlling the multithreading that implements the logic of solving research problems. The article describes a method of computing load balancing in distributed automatic systems, focused on multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic scaling of computing power under peak load, is offered. The results of model experiments on the developed load scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and scaling of the architecture of the distributed computing system.
Toward interactive scheduling systems for managing medical resources.
Oddi, A; Cesta, A
2000-10-01
Managers of medico-hospital facilities face two general problems when allocating resources to activities: (1) finding an agreement among several contrasting requirements; (2) managing dynamic and uncertain situations when constraints suddenly change over time due to medical needs. This paper describes the results of research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem solving approach is adopted in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype called Interactive Scheduler is described, which offers a set of functionalities for a mixed-initiative interaction to cope with medical resource management. Interactive Scheduler is endowed with a representation schema used for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that offers functionalities for the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations, and the definition of scheduling algorithms that aim at minimizing the amount of constraint violations in a solution.
Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
Autonomous power expert system
NASA Technical Reports Server (NTRS)
Ringer, Mark J.; Quinn, Todd M.
1990-01-01
The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at the NASA-Lewis. The APS Brassboard represents a subset of a 20 KHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints. The scheduler is able to work in a near real time environment for both scheduling and dynamic replanning.
Optimization Scheduling Model for Wind-thermal Power System Considering the Dynamic penalty factor
NASA Astrophysics Data System (ADS)
PENG, Siyu; LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, Hong; PENG, Xiaodong; HUANG, Ming; LIU, Wanyu
2018-03-01
In this paper, a new dynamic economic dispatch model for power systems is presented. The objective function of the proposed model introduces a major novelty in dynamic economic dispatch including wind farms: the "dynamic penalty factor". This factor can be computed using fuzzy logic, considering both the variable nature of active wind power and the power demand, and it changes the wind curtailment cost according to the state of the power system. Case studies were carried out on the IEEE 30-bus system. Results show that the proposed optimization model mitigates wind curtailment and the total cost effectively, demonstrating the validity and effectiveness of the proposed model.
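A toy illustration of how a fuzzy rule base could map forecast wind share and demand level to a dynamic penalty factor on wind curtailment is given below. The membership functions, rule set and defuzzification are assumptions chosen for illustration, not the formulation used in the paper.

```python
# Toy fuzzy-logic sketch of a "dynamic penalty factor" for wind curtailment.
# Inputs: wind share of available generation and normalized demand (0..1).
# Rules (illustrative): high wind & low demand -> low penalty (curtailment
# tolerated); low wind & high demand -> high penalty (curtailment costly).

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def penalty_factor(wind_share, demand):
    # rule strength -> penalty level (Mamdani-style, weighted-average defuzz)
    rules = [
        (min(high(wind_share), low(demand)),  0.2),   # low penalty
        (min(med(wind_share),  med(demand)),  0.5),   # medium penalty
        (min(low(wind_share),  high(demand)), 1.0),   # high penalty
    ]
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.5

for ws, d in [(0.9, 0.2), (0.5, 0.5), (0.1, 0.9)]:
    print(f"wind={ws:.1f} demand={d:.1f} -> penalty {penalty_factor(ws, d):.2f}")
```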
A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times
NASA Astrophysics Data System (ADS)
Li, Xin; Fung, Richard Y. K.
2018-02-01
This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers should be within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in the previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.
Advanced order management in ERM systems: the tic-tac-toe algorithm
NASA Astrophysics Data System (ADS)
Badell, Mariana; Fernandez, Elena; Puigjaner, Luis
2000-10-01
The concept behind improved enterprise resource planning (ERP) systems is the overall integration of the whole enterprise functionality into the management systems through financial links. Converting current software into real management decision tools requires crucial changes in the current approach to ERP systems. This evolution must be able to incorporate technological achievements both properly and in time. The exploitation phase of plants needs an open web-based environment for collaborative business-engineering with on-line schedulers. Today's short lifecycles of products and processes require sharp and finely tuned management actions that must be guided by scheduling tools. Additionally, such actions must be able to keep track of money movements related to supply chain events. Thus, the necessary outputs require financial-production integration at the scheduling level as proposed in the new approach of enterprise resource management (ERM) systems. Within this framework, the economic analysis of the due date policy and its optimization become essential to dynamically manage realistic and optimal delivery dates with price-time trade-offs during marketing activities. In this work we propose a scheduling tool with a web-based interface conducted by autonomous agents when precise economic information relative to plant and business actions and their effects is provided. It aims to attain a better arrangement of the marketing and production events in order to face the bid/bargain process during e-commerce. Additionally, management systems require real-time execution and an efficient transaction-oriented approach capable of dynamically adopting realistic and optimal actions to support marketing management. To this end the TicTacToe algorithm provides sequence optimization with acceptable tolerances in realistic time.
Model development for prediction of soil water dynamics in plant production.
Hu, Zhengfeng; Jin, Huixia; Zhang, Kefeng
2015-09-01
Optimizing water use for agricultural and medicinal plants is crucially important worldwide. Soil sensor-controlled irrigation systems are increasingly becoming available. However, it is questionable whether irrigation scheduling based on soil measurements in the top soil makes the best use of water for deep-rooted crops. In this study a mechanistic model was employed to investigate water extraction by a deep-rooted cabbage crop from the soil profile throughout crop growth. The model accounts for all key processes governing water dynamics in the soil-plant-atmosphere system. Results show that the subsoil provides a significant proportion of the seasonal transpiration, about a third of the water transpired over the whole growing season. This suggests that soil water in the entire root zone should be taken into consideration in irrigation scheduling, and that for sensor-controlled irrigation systems, sensors in the subsoil are essential for detecting soil water status for deep-rooted crops.
A real-time programming system.
Townsend, H R
1979-03-01
The paper describes a Basic Operating and Scheduling System (BOSS) designed for a small computer. User programs are organised as self-contained modular 'processes' and the way in which the scheduler divides the time of the computer equally between them, while arranging for any process which has to respond to an interrupt from a peripheral device to be given the necessary priority, is described in detail. Next the procedures provided by the operating system to organise communication between processes are described, and how they are used to construct dynamically self-modifying real-time systems. Finally, the general philosophy of BOSS and applications to a multi-processor assembly are discussed.
A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.
Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao
2018-05-23
The diversity of IoT services and applications brings enormous challenges to improving the performance of multiple computer tasks' scheduling in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns on the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are also given to validate its effectiveness, as well as its superiority.
Power-based Shift Schedule for Pure Electric Vehicle with a Two-speed Automatic Transmission
NASA Astrophysics Data System (ADS)
Wang, Jiaqi; Liu, Yanfang; Liu, Qiang; Xu, Xiangyang
2016-11-01
This paper introduces a comprehensive shift schedule for a two-speed automatic transmission of a pure electric vehicle. Considering the driving ability and efficiency performance of electric vehicles, a power-based shift schedule is proposed based on three principles. This comprehensive shift schedule takes the current vehicle speed and motor load power as input parameters to satisfy the vehicle's driving power demand with the lowest energy consumption. A simulation model has been established to verify the dynamic and economic performance of the comprehensive shift schedule. Compared with traditional dynamic and economic shift schedules, simulation results indicate that the power-based shift schedule is superior to the traditional shift schedules.
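A hedged sketch of a power-based shift decision is shown below: for the current vehicle speed and demanded load power, the controller evaluates the motor operating point in each gear and picks the gear that can deliver the demand at the better assumed efficiency. The gear ratios, limits and efficiency map are invented, not the paper's calibration.

```python
# Hedged sketch of a power-based shift decision for a two-speed EV gearbox.
# For the current vehicle speed and demanded load power, evaluate the motor
# operating point (speed, torque) in each gear and pick the gear whose
# operating point has the better (assumed) efficiency, provided it can
# actually deliver the demanded power.

GEAR_RATIOS = {1: 11.0, 2: 6.5}     # overall ratios, gear 1 = launch gear
WHEEL_RADIUS = 0.3                  # m
MOTOR_MAX_TORQUE = 250.0            # Nm
MOTOR_MAX_SPEED = 1200.0            # rad/s

def motor_efficiency(speed_rad_s, torque_nm):
    """Crude bowl-shaped efficiency map peaking at mid speed/torque."""
    s = speed_rad_s / MOTOR_MAX_SPEED
    t = torque_nm / MOTOR_MAX_TORQUE
    return max(0.5, 0.93 - 0.4 * (s - 0.5) ** 2 - 0.4 * (t - 0.5) ** 2)

def select_gear(vehicle_speed_mps, demand_power_w):
    best_gear, best_eff = None, -1.0
    wheel_speed = vehicle_speed_mps / WHEEL_RADIUS          # rad/s
    for gear, ratio in GEAR_RATIOS.items():
        motor_speed = wheel_speed * ratio
        if motor_speed < 1e-3 or motor_speed > MOTOR_MAX_SPEED:
            continue
        motor_torque = demand_power_w / motor_speed
        if motor_torque > MOTOR_MAX_TORQUE:
            continue                                         # cannot deliver
        eff = motor_efficiency(motor_speed, motor_torque)
        if eff > best_eff:
            best_gear, best_eff = gear, eff
    return best_gear, best_eff

for v, p in [(5.0, 15e3), (20.0, 25e3), (35.0, 40e3)]:
    print(v, "m/s", p / 1e3, "kW ->", select_gear(v, p))
```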
Intelligent Planning and Scheduling for Controlled Life Support Systems
NASA Technical Reports Server (NTRS)
Leon, V. Jorge
1996-01-01
Planning in Controlled Ecological Life Support Systems (CELSS) requires special look-ahead capabilities due to the complex and long-term dynamic behavior of biological systems. This project characterizes the behavior of CELSS, identifies the requirements of intelligent planning systems for CELSS, proposes the decomposition of the planning task into short-term and long-term planning, and studies the crop scheduling problem as an initial approach to long-term planning. CELSS is studied in the realm of chaos. The amount of biomass in the system is modeled using a bounded quadratic iterator. The results suggest that closed ecological systems can exhibit periodic behavior when external or artificial control is imposed. The main characteristics of CELSS from the planning and scheduling perspective are discussed and requirements for planning systems are given. The crop scheduling problem is identified as an important component of the required long-term lookahead capabilities of a CELSS planner. The main characteristics of crop scheduling are described and a model is proposed to represent the problem. A surrogate measure of the probability of survival is developed. The measure reflects the absolute deviation of the vital reservoir levels from their nominal values. The solution space is generated using a probability distribution which captures both knowledge about the system and the current state of affairs at each decision epoch. This probability distribution is used in the context of an evolution paradigm. The concepts developed serve as the basis for the development of a simple crop scheduling tool which is used to demonstrate its usefulness in the design and operation of CELSS.
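The bounded quadratic iterator mentioned above can be illustrated with the logistic map: the free system is chaotic at the chosen growth rate, while a simple proportional control nudge toward a target level settles it into regular behavior, echoing the observation that imposed control can yield periodicity. Parameters are illustrative, not taken from the project.

```python
# Illustrative bounded quadratic iterator (logistic map) for normalized
# biomass x in [0, 1]: x_{n+1} = r * x_n * (1 - x_n). At r = 3.9 the free
# system is chaotic; a proportional "control" nudge toward a target each
# step shows how imposed control can yield regular behavior.

def iterate(r, x0, steps, control_gain=0.0, target=0.6):
    x, xs = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        x += control_gain * (target - x)   # external/artificial control
        x = min(max(x, 0.0), 1.0)
        xs.append(x)
    return xs

free = iterate(3.9, 0.3, 200)
controlled = iterate(3.9, 0.3, 200, control_gain=0.5)
print("uncontrolled tail:", [round(v, 3) for v in free[-6:]])
print("controlled tail:  ", [round(v, 3) for v in controlled[-6:]])
```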
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, then to determine linear controllers for each linearized model, and finally implement gain-scheduling technique. The validation of such a controller is often based on linear system analysis for the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear setting. The method is based on sum-of-squares (SOS) optimization that can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
Suboptimal Scheduling in Switched Systems With Continuous-Time Dynamics: A Least Squares Approach.
Sardarmehni, Tohid; Heydari, Ali
2018-06-01
Two approximate solutions for optimal control of switched systems with autonomous subsystems and continuous-time dynamics are presented. The first solution formulates a policy iteration (PI) algorithm for the switched systems with recursive least squares. To reduce the computational burden imposed by the PI algorithm, a second solution, called single loop PI, is presented. Online and concurrent training algorithms are discussed for implementing each solution. At last, effectiveness of the presented algorithms is evaluated through numerical simulations.
Dypas: A dynamic payload scheduler for shuttle missions
NASA Technical Reports Server (NTRS)
Davis, Stephen
1988-01-01
Decision and analysis systems have had broad and very practical applications in the human decision making process. These software systems range from the help sections in simple accounting packages to the more complex computer configuration programs. Dypas is a decision and analysis system that aids prelaunch shuttle scheduling, and has added functionality to aid the rescheduling done in flight. Dypas is written in Common Lisp on a Symbolics Lisp machine. Dypas differs from other scheduling programs in that it can draw its knowledge from different rule bases and apply them to different rule interpretation schemes. The system has been coded with Flavors, an object-oriented extension to Common Lisp on the Symbolics hardware. This allows implementation of objects (experiments) to better match the problem definition, and allows a more coherent solution space to be developed. Dypas was originally developed to test a programmer's aptitude toward Common Lisp and the Symbolics software environment. Since then the system has grown into a large software effort involving several programmers and researchers. Dypas currently uses two expert systems and three inferencing procedures to generate a many-object schedule. The paper reviews the abilities of Dypas and comments on its functionality.
NASA Astrophysics Data System (ADS)
Ikegami, Takashi; Iwafune, Yumiko; Ogimoto, Kazuhiko
The high penetration of variable renewable generation such as photovoltaic (PV) systems will cause supply-demand imbalance issues in the whole power system. The activation of residential power usage, storage and generation through sophisticated scheduling and control using the Home Energy Management System (HEMS) will be needed to balance power supply and demand in the near future. In order to evaluate the applicability of the HEMS as a distributed controller for local and system-wide supply-demand balancing, we developed an optimum operation scheduling model of domestic electric appliances using mixed integer linear programming. Applying this model to several houses with dynamic electricity prices reflecting the power balance of the total power system, it was found that adequate changes in electricity prices bring about a shift of residential power usage that controls the amount of reverse power flow due to excess PV generation.
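A minimal sketch of the kind of mixed integer formulation described above is shown here for a single deferrable appliance scheduled against a dynamic hourly price. The PuLP modeling library, the price vector and the appliance profile are assumptions for illustration; the authors' full HEMS model covers many more appliances and constraints.

```python
# Hedged MILP sketch: choose the start hour of one deferrable appliance
# (fixed 3-hour run) to minimize cost under a dynamic hourly price.

import pulp

prices = [0.30, 0.28, 0.25, 0.20, 0.15, 0.12, 0.14, 0.20,   # 24 hourly prices
          0.26, 0.24, 0.18, 0.10, 0.08, 0.09, 0.12, 0.18,
          0.25, 0.32, 0.35, 0.33, 0.30, 0.28, 0.27, 0.29]
RUN_HOURS, POWER_KW = 3, 1.2
T = len(prices)

prob = pulp.LpProblem("appliance_schedule", pulp.LpMinimize)
start = [pulp.LpVariable(f"start_{t}", cat="Binary") for t in range(T)]

# Exactly one start time, and the run must fit inside the horizon.
prob += pulp.lpSum(start) == 1
for t in range(T - RUN_HOURS + 1, T):
    prob += start[t] == 0

# Cost of running RUN_HOURS consecutive hours from each candidate start.
prob += pulp.lpSum(
    start[t] * POWER_KW * sum(prices[t:t + RUN_HOURS])
    for t in range(T - RUN_HOURS + 1))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [t for t in range(T) if start[t].value() > 0.5][0]
print("start hour:", chosen, "cost:", round(pulp.value(prob.objective), 3))
```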
For operation of the Computer Software Management and Information Center (COSMIC)
NASA Technical Reports Server (NTRS)
Carmon, J. L.
1983-01-01
Computer programs for degaussing, magnetic field calculation, low speed wing flap systems aerodynamics, structural panel analysis, dynamic stress/strain data acquisition, allocation and network scheduling, and digital filters are discussed.
Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
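The DVFS trade-off at the heart of this approach can be sketched per task: running slower at a lower voltage stretches the execution time but cuts dynamic power, so the scheduler picks the lowest level that still meets the deadline. The voltage/frequency levels and the P ≈ C·V²·f power model below are textbook assumptions, not the paper's PSO-based scheduler.

```python
# Hedged DVFS sketch: pick, for each task, the lowest voltage/frequency
# level that still meets its deadline, using the classic dynamic-power
# model P ~ C * V^2 * f. Levels and task data are illustrative only.

LEVELS = [            # (voltage V, frequency GHz), slowest/cheapest first
    (0.9, 0.8),
    (1.0, 1.2),
    (1.1, 1.6),
    (1.2, 2.0),
]
C_EFF = 1.0           # effective switched capacitance (arbitrary units)

def energy_and_time(cycles_g, voltage, freq_ghz):
    time_s = cycles_g / freq_ghz                    # Gcycles / GHz = seconds
    power = C_EFF * voltage ** 2 * freq_ghz         # dynamic power
    return power * time_s, time_s

def choose_level(cycles_g, deadline_s):
    """Lowest-energy level that finishes before the deadline, if any."""
    for voltage, freq in LEVELS:
        energy, time_s = energy_and_time(cycles_g, voltage, freq)
        if time_s <= deadline_s:
            return (voltage, freq, energy, time_s)
    return None                                     # infeasible at any level

for cycles, deadline in [(1.6, 2.0), (1.6, 1.2), (3.0, 1.0)]:
    print(f"{cycles} Gcycles, deadline {deadline}s ->",
          choose_level(cycles, deadline))
```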
Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters
NASA Astrophysics Data System (ADS)
Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.
2016-06-01
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters, e.g. earthquakes, is used to allocate observation resources and execution times to a series of imaging tasks by maximizing one or more objectives while satisfying certain given constraints. In practice, the information obtained about the disaster situation changes dynamically, which accordingly leads to dynamic imaging requirements from users. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet the emergency response requirements so as to make an imaging plan to acquire rapid and effective information of the affected area. In the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations in the scene of the Wenchuan Earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
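The contrast between conflict-aware and conflict-free output can be sketched as follows: every request is placed on its requested antenna and overlaps are recorded as conflicts for later negotiation, while a conflict-free schedule is derived by dropping the lower-priority side of each clash. The request data, priority convention and tie-breaking are invented for illustration, not the DSN algorithm itself.

```python
# Hedged sketch contrasting "conflict-aware" and "conflict-free" scheduling.
# Overlapping requests on the same antenna are kept but flagged as conflicts
# (to support negotiation); a conflict-free schedule is then derived by
# dropping the lower-priority side of each conflict.

# (request id, antenna, start hour, end hour, priority: lower = higher)
requests = [
    ("mission_A", "DSS-14", 2, 6, 1),
    ("mission_B", "DSS-14", 5, 9, 2),
    ("mission_C", "DSS-14", 8, 10, 1),
    ("mission_D", "DSS-43", 0, 4, 3),
]

def overlap(a, b):
    return a[1] == b[1] and a[2] < b[3] and b[2] < a[3]

def conflict_aware(reqs):
    """Schedule everything; return the list plus detected conflict pairs."""
    conflicts = []
    for i, a in enumerate(reqs):
        for b in reqs[i + 1:]:
            if overlap(a, b):
                conflicts.append((a[0], b[0]))
    return list(reqs), conflicts

def conflict_free(reqs):
    """Keep the higher-priority (then earlier-listed) request in each clash."""
    kept = []
    for r in sorted(reqs, key=lambda r: r[4]):
        if all(not overlap(r, k) for k in kept):
            kept.append(r)
    return kept

schedule, conflicts = conflict_aware(requests)
print("conflicts to negotiate:", conflicts)
print("conflict-free subset:", [r[0] for r in conflict_free(requests)])
```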
Uncertainty-based Estimation of the Secure Range for ISO New England Dynamic Interchange Adjustment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di
2014-04-14
The paper proposes an approach to estimate the secure range for dynamic interchange adjustment, which assists system operators in scheduling the interchange with neighboring control areas. Uncertainties associated with various sources are incorporated. The proposed method is implemented in the dynamic interchange adjustment (DINA) tool developed by Pacific Northwest National Laboratory (PNNL) for ISO New England. Simulation results are used to validate the effectiveness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di
The document describes detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system and provides description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping up and down capability, spinning reserve requirements, interchange schedules, load variations and uncertainties from various sources that are relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.
Gain-Scheduled Complementary Filter Design for a MEMS Based Attitude and Heading Reference System
Yoo, Tae Suk; Hong, Sung Kyung; Yoon, Hyok Min; Park, Sungsu
2011-01-01
This paper describes a robust and simple algorithm for an attitude and heading reference system (AHRS) based on low-cost MEMS inertial and magnetic sensors. The proposed approach relies on a gain-scheduled complementary filter, augmented by an acceleration-based switching architecture to yield robust performance, even when the vehicle is subject to strong accelerations. Experimental results are provided for a road captive test during which the vehicle dynamics are in high-acceleration mode and the performance of the proposed filter is evaluated against the output from a conventional linear complementary filter. PMID:22163824
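The core idea can be sketched as a complementary filter whose accelerometer gain is scheduled on the magnitude of the measured specific force: near 1 g the accelerometer corrects the gyro-propagated angle, and under strong accelerations its influence is tapered off. The gain schedule, thresholds and sample data below are assumptions, not the paper's switching architecture or tuning.

```python
# Hedged sketch of a gain-scheduled complementary filter for pitch.
# The gyro propagates the angle; the accelerometer corrects it, but its
# gain is scheduled down when |accel| deviates from 1 g (the vehicle is
# manoeuvring and the accelerometer no longer measures gravity alone).

import math

G = 9.81

def accel_gain(ax, ay, az, base_gain=0.02, tol=0.1):
    """Full gain near 1 g, tapering to zero for strong accelerations."""
    dev = abs(math.sqrt(ax * ax + ay * ay + az * az) / G - 1.0)
    if dev <= tol:
        return base_gain
    if dev >= 3 * tol:
        return 0.0
    return base_gain * (1.0 - (dev - tol) / (2 * tol))

def update_pitch(pitch, gyro_q, ax, ay, az, dt):
    """One complementary-filter step for the pitch angle (radians)."""
    pitch_gyro = pitch + gyro_q * dt                    # propagate with gyro
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    k = accel_gain(ax, ay, az)
    return (1.0 - k) * pitch_gyro + k * pitch_acc

# Example: quasi-static samples vs. a high-acceleration sample.
pitch = 0.0
for ax, ay, az, q in [(0.0, 0.0, 9.81, 0.01),        # at rest
                      (1.7, 0.0, 9.66, 0.01),        # tilted, still ~1 g
                      (6.0, 0.0, 9.81, 0.01)]:       # strong acceleration
    pitch = update_pitch(pitch, q, ax, ay, az, dt=0.01)
    print(round(math.degrees(pitch), 3))
```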
Reconfigurable manufacturing execution system for pipe cutting
NASA Astrophysics Data System (ADS)
Yin, Y. H.; Xie, J. Y.
2011-08-01
This article presents a reconfigurable manufacturing execution system (RMES) filling the gap between enterprise resource planning and the resource layer for pipe-cutting production with mass customisation and rapid adaptation to a dynamic market; it consists of a planning and scheduling layer and an executive control layer. Starting from the customer's task and process requirements, the cutting trajectories are planned under a generalised mathematical model able to reconfigure in accordance with the various types of intersecting joints, and all tasks are scheduled by a nesting algorithm to maximise the utilisation rate of the rough material. This RMES for pipe cutting has been effectively implemented in more than 100 companies.
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
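The decoding step is the heart of such a Genetic Algorithm: a chromosome must be turned into a feasible schedule before it can be evaluated. The paper's own decoding procedure is not reproduced here; the sketch below is a minimal permutation-with-repetition decoder that builds a semi-active job-shop schedule, which is the usual starting point for this family of algorithms.

```python
def decode(chromosome, jobs):
    """Decode a job-repetition chromosome into a semi-active schedule.

    jobs[j] is a list of (machine, processing_time) operations in fixed order;
    the chromosome is a sequence of job indices in which job j appears exactly
    len(jobs[j]) times; the k-th occurrence of j schedules its k-th operation.
    Returns operation start times and the makespan.
    """
    next_op = [0] * len(jobs)
    job_ready = [0.0] * len(jobs)
    machine_ready = {}
    starts = {}
    for j in chromosome:
        op = next_op[j]
        machine, p = jobs[j][op]
        start = max(job_ready[j], machine_ready.get(machine, 0.0))
        starts[(j, op)] = start
        job_ready[j] = machine_ready[machine] = start + p
        next_op[j] += 1
    return starts, max(job_ready)

# Example: two jobs, two machines.
jobs = [[("M1", 3), ("M2", 2)], [("M2", 2), ("M1", 4)]]
print(decode([0, 1, 1, 0], jobs))  # makespan 7 for this chromosome
```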
Interleaved Observation Execution and Rescheduling on Earth Observing Systems
NASA Technical Reports Server (NTRS)
Khatib, Lina; Frank, Jeremy; Smith, David; Morris, Robert; Dungan, Jennifer
2003-01-01
Observation scheduling for Earth orbiting satellites solves the following problem: given a set of requests for images of the Earth, a set of instruments for acquiring those images distributed on a collection of orbiting satellites, and a set of temporal and resource constraints, generate a set of assignments of instruments and viewing times to those requests that satisfy those constraints. Observation scheduling is often construed as a constrained optimization problem with the objective of maximizing the overall utility of the science data acquired. The utility of an image is typically based on the intrinsic importance of acquiring it (for example, its importance in meeting a mission or science campaign objective) as well as the expected value of the data given current viewing conditions (for example, if the image is occluded by clouds, its value is usually diminished). Currently, science observation scheduling for Earth Observing Systems is done on the ground, for periods covering a day or more. Schedules are uplinked to the satellites and are executed rigorously. An alternative to this scenario is to do some of the decision-making about what images are to be acquired on-board. The principal argument for this capability is that the desirability of making an observation can change dynamically, because of changes in meteorological conditions (e.g. cloud cover), unforeseen events such as fires, floods, or volcanic eruptions, or unexpected changes in satellite or ground station capability. Furthermore, since satellites can only communicate with the ground between 5% and 10% of the time, it may be infeasible to make the desired changes to the schedule on the ground, and uplink the revisions in time for the on-board system to execute them. Examples of scenarios that motivate an on-board capability for revising schedules include the following. First, if a desired visual scene is completely obscured by clouds, then there is little point in taking it. In this case, satellite resources, such as power and storage space, can be better utilized taking another image that is higher quality. Second, if an unexpected but important event occurs (such as a fire, flood, or volcanic eruption), there may be good reason to take images of it, instead of expending satellite resources on some of the lower priority scheduled observations. Finally, if there is unexpected loss of capability, it may be impossible to carry out the schedule of planned observations. For example, if a ground station goes down temporarily, a satellite may not be able to free up enough storage space to continue with the remaining schedule of observations. This paper describes an approach for interleaving execution of observation schedules with dynamic schedule revision based on changes to the expected utility of the acquired images. We describe the problem in detail, formulate an algorithm for interleaving schedule revision and execution, and discuss refinements to the algorithm based on the need for search efficiency. We summarize with a brief discussion of the tests performed on the system.
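The core loop of such an interleaved approach can be sketched as follows: just before each acquisition opportunity the on-board system re-scores the pending observations with a current utility estimate (for example, discounted for fresh cloud cover or boosted for a newly detected event) and takes the best one that still fits the remaining resource budget. The dictionary fields, the scalar `capacity` resource and the `utility_fn` callback are illustrative assumptions rather than the authors' formulation.

```python
def run_with_rescheduling(planned, utility_fn, capacity):
    """Greedy interleaving of execution and rescheduling (illustrative).

    planned    -- list of dicts with 'id' and 'cost'
    utility_fn -- callable(obs) -> current expected utility (e.g., reduced
                  under a fresh cloud-cover estimate)
    capacity   -- remaining on-board resource budget (storage/power)
    """
    executed, pending = [], list(planned)
    while pending and capacity > 0:
        pending.sort(key=utility_fn, reverse=True)  # re-score before each acquisition
        best = pending.pop(0)
        if utility_fn(best) <= 0:        # e.g., scene fully obscured by clouds
            continue                     # skip it and keep the resources
        if best["cost"] <= capacity:
            capacity -= best["cost"]
            executed.append(best["id"])
    return executed
```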
Information Dynamics as Foundation for Network Management
2014-12-04
developed to adapt to channel dynamics in a mobile network environment. We devise a low-complexity online scheduling algorithm integrated with the...has been accepted for the Journal on Network and Systems Management in 2014. - RINC programmable platform for Infrastructure-as-a-Service public... backend servers. Rather than implementing load balancing in dedicated appliances, commodity SDN switches can perform this function. We design
Cameron, Courtney M.; Wightman, R. Mark; Carelli, Regina M.
2014-01-01
Electrophysiological studies show that distinct subsets of nucleus accumbens (NAc) neurons differentially encode information about goal-directed behaviors for intravenous cocaine versus natural (food/water) rewards. Further, NAc rapid dopamine signaling occurs on a timescale similar to phasic cell firing during cocaine and natural reward-seeking behaviors. However, it is not known whether dopamine signaling is reinforcer specific (i.e., is released during responding for only one type of reinforcer) within discrete NAc locations, similar to neural firing dynamics. Here, fast-scan cyclic voltammetry (FSCV) was used to measure rapid dopamine release during multiple schedules involving sucrose reward and cocaine self-administration (n=8 rats) and, in a separate group of rats (n = 6), during a sucrose/food multiple schedule. During the sucrose/cocaine multiple schedule, dopamine increased within seconds of operant responding for both reinforcers. Although dopamine release was not reinforcer specific, more subtle differences were observed in peak dopamine concentration [DA] across reinforcer conditions. Specifically, peak [DA] was higher during the first phase of the multiple schedule, regardless of reinforcer type. Further, the time to reach peak [DA] was delayed during cocaine-responding compared to sucrose. During the sucrose/food multiple schedule, increases in dopamine release were also observed relative to operant responding for both natural rewards. However, peak [DA] was higher relative to responding for sucrose than food, regardless of reinforcer order. Overall, the results reveal the dynamics of rapid dopamine signaling in discrete locations in the NAc across reward conditions, and provide novel insight into the functional role of this system in reward-seeking behaviors. PMID:25174553
Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios
2014-01-01
To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. PMID:25008408
Rajalingham, Rishi; Stacey, Richard Greg; Tsoulfas, Georgios; Musallam, Sam
2014-10-01
To restore movements to paralyzed patients, neural prosthetic systems must accurately decode patients' intentions from neural signals. Despite significant advancements, current systems are unable to restore complex movements. Decoding reward-related signals from the medial intraparietal area (MIP) could enhance prosthetic performance. However, the dynamics of reward sensitivity in MIP is not known. Furthermore, reward-related modulation in premotor areas has been attributed to behavioral confounds. Here we investigated the stability of reward encoding in MIP by assessing the effect of reward history on reward sensitivity. We recorded from neurons in MIP while monkeys performed a delayed-reach task under two reward schedules. In the variable schedule, an equal number of small- and large-rewards trials were randomly interleaved. In the constant schedule, one reward size was delivered for a block of trials. The memory period firing rate of most neurons in response to identical rewards varied according to schedule. Using systems identification tools, we attributed the schedule sensitivity to the dependence of neural activity on the history of reward. We did not find schedule-dependent behavioral changes, suggesting that reward modulates neural activity in MIP. Neural discrimination between rewards was less in the variable than in the constant schedule, degrading our ability to decode reach target and reward simultaneously. The effect of schedule was mitigated by adding Haar wavelet coefficients to the decoding model. This raises the possibility of multiple encoding schemes at different timescales and reinforces the potential utility of reward information for prosthetic performance. Copyright © 2014 the American Physiological Society.
Realization of planning design of mechanical manufacturing system by Petri net simulation model
NASA Astrophysics Data System (ADS)
Wu, Yanfang; Wan, Xin; Shi, Weixiang
1991-09-01
Planning design works out an overall, long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for MMS are introduced. Problems of production scheduling and their decision rules for computer simulation are presented. The realization of each production scheduling decision rule in the Petri net model is discussed. Second, the rules for resolving conflicts that arise while running the Petri net are given. Third, based on the Petri net model of MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a computer dynamic simulation of the Petri net model, that is, a computer dynamic simulation of MMS, is realized. Finally, the simulation program is applied to a simulation example, so the scheme of a planning design for MMS can be evaluated effectively.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
Space power system automation approaches at the George C. Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Weeks, D. J.
1987-01-01
This paper discusses the automation approaches employed in various electrical power system breadboards at the Marshall Space Flight Center. Of particular interest is the application of knowledge-based systems to fault management and dynamic payload scheduling. A description of each major breadboard and the automation approach taken for each is given.
NASA Astrophysics Data System (ADS)
Li, Guoliang; Xing, Lining; Chen, Yingwu
2017-11-01
The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers in recent decades. In reality, the limited onboard computational resource presents a challenge for online scheduling algorithms. This study considered the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment. It especially addresses urgent tasks that arrive stochastically during the scheduling horizon. We described the problem and proposed a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling and critical cumulative number-based event-driven rescheduling, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task: normal tasks and urgent tasks. Thus, we developed two heuristic (re)scheduling algorithms and compared them with other generally used techniques. Computational experiments indicated that the into-scheduling percentage of urgent tasks in the proposed mechanism is much higher than that in a periodic scheduling mechanism, and that the specific performance is highly dependent on some mechanism-relevant and task-relevant factors. For online scheduling, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and the percentage of successfully scheduled urgent tasks.
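A minimal sketch of the two policies described above is given below, under simplifying assumptions: the when-to-schedule test fires either on a periodic tick or when the backlog of urgent arrivals crosses a critical number, and the how-to-schedule step inserts urgent tasks into idle gaps of the existing timeline in weighted-shortest-imaging-time-first order. Field names, the single-timeline model and the threshold logic are illustrative, not the paper's exact algorithms.

```python
def should_reschedule(now, last_planned, period, urgent_backlog, threshold):
    """When-to-schedule policy (sketch): reschedule on the periodic tick or
    when the cumulative number of newly arrived urgent tasks hits a critical
    value. Names and signature are illustrative."""
    return (now - last_planned) >= period or urgent_backlog >= threshold

def insert_urgent(timeline, urgent, horizon_end):
    """How-to-schedule sketch: weighted-shortest-imaging-time-first insertion
    of urgent tasks into idle gaps of an existing (start, end) timeline."""
    urgent = sorted(urgent, key=lambda t: t["duration"] / t["weight"])
    scheduled = []
    for task in urgent:
        busy = sorted(timeline, key=lambda x: x[0])
        cursor = 0.0
        for start, end in busy + [(horizon_end, horizon_end)]:
            if start - cursor >= task["duration"]:
                timeline.append((cursor, cursor + task["duration"]))
                scheduled.append(task["id"])
                break
            cursor = max(cursor, end)
        # tasks that do not fit remain unscheduled (to be revised later)
    return scheduled
```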
Towards Dynamic Service Level Agreement Negotiation:An Approach Based on WS-Agreement
NASA Astrophysics Data System (ADS)
Pichot, Antoine; Wäldrich, Oliver; Ziegler, Wolfgang; Wieder, Philipp
In Grid, e-Science and e-Business environments, Service Level Agreements are often used to establish frameworks for the delivery of services between service providers and the organisations hosting the researchers. While these high-level SLAs define the overall quality of the services, it is desirable for the end-user to have dedicated service quality also for individual services like the orchestration of resources necessary for composed services. Grid-level scheduling services are typically responsible for the orchestration and co-ordination of resources in the Grid. Co-allocation, for example, requires the Grid-level scheduler to co-ordinate resource management systems located in different domains. As site autonomy has to be respected, negotiation is the only way to achieve the intended co-ordination. SLAs emerged as a new way to negotiate and manage usage of resources in the Grid and are already adopted by a number of management systems. Therefore, it is natural to look for ways to adopt SLAs for Grid-level scheduling. In order to do this, efficient and flexible protocols are needed which support dynamic negotiation and creation of SLAs. In this paper we propose and discuss extensions to the WS-Agreement protocol addressing these issues.
Optimized maritime emergency resource allocation under dynamic demand.
Zhang, Wenfen; Yan, Xinping; Yang, Jiaqi
2017-01-01
Emergency resources are important for evacuating people and rescuing property when an accident occurs. Relief efforts can be improved by a reasonable emergency resource allocation schedule prepared in advance. As the marine environment is complicated and changeable, the place, type and severity of a maritime accident are uncertain and stochastic, bringing about dynamic demand for emergency resources. Considering dynamic demand, making a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources for supplier centers to improve relief efforts. This paper studies dynamic demand, which is defined as a set. Then a maritime emergency resource allocation model with uncertain data is presented. Afterwards, a robust approach is developed and used to make sure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible in maritime emergency resource allocation. The findings could help emergency managers to schedule emergency resource allocation more flexibly in terms of dynamic demand.
Optimized maritime emergency resource allocation under dynamic demand
Yan, Xinping; Yang, Jiaqi
2017-01-01
Emergency resources are important for evacuating people and rescuing property when an accident occurs. Relief efforts can be improved by a reasonable emergency resource allocation schedule prepared in advance. As the marine environment is complicated and changeable, the place, type and severity of a maritime accident are uncertain and stochastic, bringing about dynamic demand for emergency resources. Considering dynamic demand, making a reasonable emergency resource allocation schedule is challenging. The key problem is to determine the optimal stock of emergency resources for supplier centers to improve relief efforts. This paper studies dynamic demand, which is defined as a set. Then a maritime emergency resource allocation model with uncertain data is presented. Afterwards, a robust approach is developed and used to make sure that the resource allocation schedule performs well under dynamic demand. Finally, a case study shows that the proposed methodology is feasible in maritime emergency resource allocation. The findings could help emergency managers to schedule emergency resource allocation more flexibly in terms of dynamic demand. PMID:29240792
NASA Astrophysics Data System (ADS)
Buchner, Johannes
2011-12-01
Scheduling, the task of producing a time table for resources and tasks, is well known to be a difficult (NP-hard) problem whose complexity grows with the number of resources involved. This is about to become an issue in radio astronomy as observatories consisting of hundreds to thousands of telescopes are planned and operated. The Square Kilometre Array (SKA), which Australia and New Zealand bid to host, is aiming for scales where current approaches -- in construction, operation but also scheduling -- are insufficient. Although manual scheduling is common today, the problem is becoming complicated by the demand for (1) independent sub-arrays doing simultaneous observations, which requires the scheduler to plan parallel observations, and (2) dynamic re-scheduling on changed conditions. Both of these requirements apply to the SKA, especially in the construction phase. We review the scheduling approaches taken in the astronomy literature, as well as investigate techniques from human schedulers and today's observatories. The scheduling problem is specified in general for scientific observations and in particular for radio telescope arrays. Also taken into account is the fact that the observatory may be oversubscribed, requiring the scheduling problem to be integrated with a planning process. We solve this long-term scheduling problem using a time-based encoding that works in the very general case of observation scheduling. This research then compares algorithms from various approaches, including fast heuristics from CPU scheduling, Linear Integer Programming, Genetic algorithms and Branch-and-Bound enumeration schemes. Measures include not only goodness of the solution, but also scalability and re-scheduling capabilities. In conclusion, we have identified a fast and good scheduling approach that allows (re-)scheduling of difficult and changing problems by combining heuristics with a Genetic algorithm using block-wise mutation operations. We are able to explain and eradicate two problems in the literature: the inability of a GA to properly improve schedules and the generation of schedules with frequent interruptions. Finally, we demonstrate the scheduling framework for several operating telescopes: (1) dynamic re-scheduling with the AUT Warkworth 12m telescope, (2) scheduling for the Australian Mopra 22m telescope and scheduling for the Allen Telescope Array. Furthermore, we discuss the applicability of the presented scheduling framework to the Atacama Large Millimeter/submillimeter Array (ALMA, in construction) and the SKA. In particular, during the development phase of the SKA, this dynamic, scalable scheduling framework can accommodate changing conditions.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for investigation purposes. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
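For concreteness, the SIMSET rule singled out above amounts to a one-line selection among the jobs queued at a machine; the sketch below shows it with a toy sequence-dependent setup-time table. The data layout and the example values are assumptions for illustration only.

```python
def simset(queue, current_part_type, setup_time):
    """Shortest setup time (SIMSET) rule: among jobs waiting at a machine,
    pick the one whose setup from the part type just finished is smallest.

    queue      -- list of jobs, each with a 'part_type' key
    setup_time -- callable(from_type, to_type) -> setup duration
    """
    return min(queue, key=lambda job: setup_time(current_part_type, job["part_type"]))

# Toy setup matrix and queue (illustrative values).
setups = {("A", "A"): 0, ("A", "B"): 5, ("B", "A"): 4, ("B", "B"): 0}
queue = [{"id": 1, "part_type": "B"}, {"id": 2, "part_type": "A"}]
print(simset(queue, "A", lambda f, t: setups[(f, t)]))  # -> job 2 (zero setup)
```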
Task Decomposition Model for Dispatchers in Dynamic Scheduling of Demand Responsive Transit Systems
DOT National Transportation Integrated Search
2000-06-01
Since the passage of ADA, the demand for paratransit service is steadily increasing. Paratransit companies are relying on computer automation to streamline dispatch operations, increase productivity and reduce operator stress and error. Little resear...
Automatic programming via iterated local search for dynamic job shop scheduling.
Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen
2015-01-01
Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
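The overall APRILS loop can be summarised as iterated local search over dispatching-rule programs: repeatedly local-search to a local optimum, perturb the best program found so far, and keep the result if it scores better on simulated job-shop instances. The sketch below shows that generic loop only; the program representation, the neighbourhood moves and the simulation-based `score` function are left abstract and are assumptions of this illustration.

```python
def iterated_local_search(initial, neighbours, perturb, score, iters=100):
    """Generic iterated-local-search loop (sketch of the APRILS idea).

    neighbours(p) yields small edits of program p; perturb(p) applies a larger
    random modification; score(p) is obtained by simulating the dispatching
    rule on training job-shop instances (higher is better).
    """
    def local_search(p):
        improved = True
        while improved:
            improved = False
            for q in neighbours(p):
                if score(q) > score(p):
                    p, improved = q, True
                    break
        return p

    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best))
        if score(candidate) > score(best):
            best = candidate
    return best
```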
Solar array flight dynamic experiment
NASA Technical Reports Server (NTRS)
Schock, R. W.
1986-01-01
The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.
Solar array flight dynamic experiment
NASA Technical Reports Server (NTRS)
Schock, Richard W.
1986-01-01
The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on Space Shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.
Solar array flight dynamic experiment
NASA Technical Reports Server (NTRS)
Schock, Richard W.
1987-01-01
The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.
Exploiting Vector and Multicore Parallelism for Recursive, Data- and Task-Parallel Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Bin; Krishnamoorthy, Sriram; Agrawal, Kunal
Modern hardware contains parallel execution resources that are well-suited for data parallelism (vector units) and for task parallelism (multicores). However, most work on parallel scheduling focuses on one type of hardware or the other. In this work, we present a scheduling framework that allows for a unified treatment of task- and data-parallelism. Our key insight is an abstraction, task blocks, that uniformly handles data-parallel iterations and task-parallel tasks, allowing them to be scheduled on vector units or executed independently on multicores. Our framework allows us to define schedulers that can dynamically select between executing task blocks on vector units or multicores. We show that these schedulers are asymptotically optimal, and deliver the maximum amount of parallelism available in computation trees. To evaluate our schedulers, we develop program transformations that can convert mixed data- and task-parallel programs into task block-based programs. Using a prototype instantiation of our scheduling framework, we show that, on an 8-core system, we can simultaneously exploit vector and multicore parallelism to achieve 14×-108× speedup over sequential baselines.
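A toy rendering of the task-block idea, not the authors' framework: a block of data-parallel work is executed "vector style" when it is small enough (NumPy stands in for SIMD here), and otherwise split into sub-blocks that run as independent tasks on multiple cores. The threshold, the splitting policy and the thread-pool mechanics are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

VECTOR_THRESHOLD = 1 << 14   # illustrative cut-off, not the paper's policy

def run_task_block(data, leaf_op, split, pool=None):
    """Treat data- and task-parallel work uniformly (sketch): a small block is
    executed 'vector style' (NumPy stands in for SIMD), a large block is split
    into sub-blocks that run as independent core-level tasks. Sub-blocks
    recurse serially so the bounded thread pool cannot deadlock."""
    if len(data) <= VECTOR_THRESHOLD:
        return leaf_op(data)                       # vector execution
    parts = split(data)
    if pool is not None:                           # task-parallel at the top level
        futures = [pool.submit(run_task_block, p, leaf_op, split) for p in parts]
        return np.concatenate([f.result() for f in futures])
    return np.concatenate([run_task_block(p, leaf_op, split) for p in parts])

# Example: square 100k numbers, outer tasks on two cores.
with ThreadPoolExecutor(max_workers=2) as pool:
    x = np.arange(100_000, dtype=np.float64)
    y = run_task_block(x, lambda a: a * a, lambda a: np.array_split(a, 2), pool)
assert np.allclose(y, x * x)
```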
NASA Astrophysics Data System (ADS)
Zhou, J.; Zeng, X.; Mo, L.; Chen, L.; Jiang, Z.; Feng, Z.; Yuan, L.; He, Z.
2017-12-01
Generally, the adaptive utilization and regulation of runoff in the source region of China's southwest rivers is classified as a typical multi-objective collaborative optimization problem. There is fierce competition and strong interdependence among the water supply, electricity generation and environment subsystems, which leads to a series of complex problems represented by hydrological process variation, blocked electricity output and water environment risk. Mathematically, the difficulties of multi-objective collaborative optimization lie in describing the reciprocal relationships and establishing an evolving model of the adaptive system. Thus, based on the theory of complex systems science, this project carries out research on the following aspects: the changing trend of the coupled water resource; the covariant factors and driving mechanisms; the dynamic evolution law of the mutual-feedback processes in the supply-generation-environment coupled system; the environmental response and influence mechanism of the coupled water resource system; the relationship between the leading risk factor and multiple risks based on evolutionary stability and dynamic balance; the transfer mechanism of multiple risk responses as the leading risk factor varies; and a multidimensional risk assessment index system and optimized decision theory for the coupled feedback system. Based on these results, a dynamic method for balancing the efficiency of multiple objectives in the coupled feedback system and an optimized regulation model of water resources are proposed, and an adaptive scheduling mode considering the internal characteristics and external response of the coupled water resource system is established. In this way, the project contributes to the optimal scheduling theory and methodology of water resource management under uncertainty in the source region of the Southwest River.
NASA Technical Reports Server (NTRS)
Hornstein, Rhoda S.; Wunderlich, Dana A.; Willoughby, John K.
1992-01-01
New and innovative software technology is presented that provides a cost effective bridge for smoothly transitioning prototype software, in the field of planning and scheduling, into an operational environment. Specifically, this technology mixes the flexibility and human design efficiency of dynamic data typing with the rigor and run-time efficiencies of static data typing. This new technology provides a very valuable tool for conducting the extensive, up-front system prototyping that leads to specifying the correct system and producing a reliable, efficient version that will be operationally effective and will be accepted by the intended users.
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
NASA Astrophysics Data System (ADS)
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emptions compared with run-time methods like deadline monotonic scheduling.
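The compile-time decision described above, allowing a pre-emption only when it pays off despite its overhead, can be caricatured by a test of the following kind. This is a simplified stand-in under assumed parameters (remaining execution time, period, worst-case execution time, deadline, context-switch overhead), not the paper's actual analysis.

```python
def allow_preemption(long_task_remaining, hp_period, hp_wcet, hp_deadline, overhead):
    """Compile-time check (sketch): permit a statically scheduled long task to
    be pre-empted by a higher-priority periodic task only when (a) running the
    long task to completion would make the periodic task miss its deadline,
    and (b) the periodic task plus two context switches still fits in its
    period. All quantities are in the same time unit; illustrative only.
    """
    misses_without_preemption = long_task_remaining + hp_wcet > hp_deadline
    fits_with_overhead = hp_wcet + 2 * overhead <= hp_period
    return misses_without_preemption and fits_with_overhead
```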
NASA Astrophysics Data System (ADS)
Tang, Li; Liu, Jing-Ning; Feng, Dan; Tong, Wei
2008-12-01
Existing security solutions in network storage environments perform poorly because cryptographic operations (encryption and decryption) implemented in software can dramatically reduce system performance. In this paper we propose a cryptographic hardware accelerator on a dynamically reconfigurable platform for the security of high-performance network storage systems. We employ a dynamically reconfigurable platform based on an FPGA to implement a PowerPC-based embedded system, which executes cryptographic algorithms. To reduce the reconfiguration latency, we apply prefetch scheduling. Moreover, the processing elements can be dynamically configured to support different cryptographic algorithms according to the requests received by the accelerator. In the experiment, we have implemented the AES (Rijndael) and 3DES cryptographic algorithms in the reconfigurable accelerator. Our proposed reconfigurable cryptographic accelerator can dramatically increase performance compared with traditional software-based network storage systems.
The LSST Scheduler from design to construction
NASA Astrophysics Data System (ADS)
Delgado, Francisco; Reuter, Michael A.
2016-07-01
The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding a very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS), that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and the internal conditions of the observatory. The design of the LSST Scheduler started early in the project supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the survey capabilities required. In order to build such a critical component, an agile development path in incremental releases is presented, integrated to the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.
Cameron, Courtney M; Wightman, R Mark; Carelli, Regina M
2014-11-01
Electrophysiological studies show that distinct subsets of nucleus accumbens (NAc) neurons differentially encode information about goal-directed behaviors for intravenous cocaine versus natural (food/water) rewards. Further, NAc rapid dopamine signaling occurs on a timescale similar to phasic cell firing during cocaine and natural reward-seeking behaviors. However, it is not known whether dopamine signaling is reinforcer specific (i.e., is released during responding for only one type of reinforcer) within discrete NAc locations, similar to neural firing dynamics. Here, fast-scan cyclic voltammetry (FSCV) was used to measure rapid dopamine release during multiple schedules involving sucrose reward and cocaine self-administration (n = 8 rats) and, in a separate group of rats (n = 6), during a sucrose/food multiple schedule. During the sucrose/cocaine multiple schedule, dopamine increased within seconds of operant responding for both reinforcers. Although dopamine release was not reinforcer specific, more subtle differences were observed in peak dopamine concentration [DA] across reinforcer conditions. Specifically, peak [DA] was higher during the first phase of the multiple schedule, regardless of reinforcer type. Further, the time to reach peak [DA] was delayed during cocaine-responding compared to sucrose. During the sucrose/food multiple schedule, increases in dopamine release were also observed relative to operant responding for both natural rewards. However, peak [DA] was higher relative to responding for sucrose than food, regardless of reinforcer order. Overall, the results reveal the dynamics of rapid dopamine signaling in discrete locations in the NAc across reward conditions, and provide novel insight into the functional role of this system in reward-seeking behaviors. Copyright © 2014 Elsevier Ltd. All rights reserved.
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
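The scheduling core of such a system is essentially list scheduling of a task graph onto heterogeneous workers. The sketch below is a simplified, HEFT-like illustration under assumed inputs: it repeatedly takes tasks whose predecessors have finished and places each on the worker giving the earliest finish time, ignoring the communication costs and topology information the real system would use.

```python
def schedule_dag(tasks, deps, cost, workers):
    """List-schedule a task DAG onto heterogeneous workers (illustrative).

    tasks   -- iterable of task ids (the graph is assumed acyclic)
    deps    -- dict task -> set of predecessor tasks
    cost    -- dict (task, worker_kind) -> execution time
    workers -- dict worker_name -> worker_kind (e.g., 'cpu' or 'gpu')
    Returns dict task -> (worker_name, start, finish).
    """
    finish, placement = {}, {}
    ready_time = {w: 0.0 for w in workers}
    remaining = set(tasks)
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, ()))]
        for t in sorted(ready):
            pred_done = max((finish[p] for p in deps.get(t, ())), default=0.0)
            # choose the worker giving the earliest finish time for this task
            best = min(workers,
                       key=lambda w: max(pred_done, ready_time[w]) + cost[(t, workers[w])])
            start = max(pred_done, ready_time[best])
            end = start + cost[(t, workers[best])]
            placement[t] = (best, start, end)
            finish[t] = ready_time[best] = end
            remaining.discard(t)
    return placement
```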
Application of decentralized cooperative problem solving in dynamic flexible scheduling
NASA Astrophysics Data System (ADS)
Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi
1995-08-01
The object of this study is to discuss an intelligent solution to the problem of task-allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, and negotiation is realized by using message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible. And by integrating such multiple local schedules, dynamic flexible scheduling for the whole shop floor production can be realized.
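One round of the contract-net negotiation used for task allocation can be sketched as follows: the manager agent announces a task, each workcell or workstation agent answers with a bid computed from its own local criteria, and the task is awarded to the best bidder. The bid-function interface and the "lowest cost wins" award rule are illustrative assumptions.

```python
def contract_net(task, agents):
    """One contract-net round (sketch): announce, collect bids, award.

    agents -- dict name -> bid function; a bid function returns a cost
              estimate (e.g., expected completion time given the local queue)
              or None to decline the announcement.
    """
    bids = {name: bid(task) for name, bid in agents.items()}
    bids = {name: b for name, b in bids.items() if b is not None}
    if not bids:
        return None                      # nobody can take the task; renegotiate later
    winner = min(bids, key=bids.get)
    return winner, bids[winner]
```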
Ant colony optimization and event-based dynamic task scheduling and staffing for software projects
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Ashwini, J.
2017-11-01
In software development organizations, for projects from medium to very large scale, project planning is an extremely complex and challenging task even when treated as a manual process. Software project planning needs to address both task scheduling and human resource allocation (also called staffing), because most of the resources in software projects are people. We propose a machine learning approach that finds solutions for scheduling by learning from existing planning solutions, together with an event-based scheduler that updates the project schedule produced by the learning algorithm in response to events such as the start of the project, the times at which resources become free from finished tasks, and the times when employees join or leave the project during development. Updating the schedule through the event-based scheduler makes the planning process dynamic. The model uses system elements to represent the interrelated flows of tasks, defects and personnel throughout the different development phases and is calibrated to industrial data. It extends previous software project planning research by examining a survey-based process with a distinct model, integrating it with a knowledge-based approach for risk assessment and cost estimation, and using a decision-modelling platform.
Computer-aided software development process design
NASA Technical Reports Server (NTRS)
Lin, Chi Y.; Levary, Reuven R.
1989-01-01
The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as a modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.
Autonomous Agents for Dynamic Process Planning in the Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Nik Nejad, Hossein Tehrani; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Rapid changes of market demands and pressures of competition require manufacturers to maintain highly flexible manufacturing systems to cope with a complex manufacturing environment. This paper deals with the development of an agent-based architecture of dynamic systems for incremental process planning in manufacturing systems. In consideration of alternative manufacturing processes and machine tools, the process plans and the schedules of the manufacturing resources are generated incrementally and dynamically. A negotiation protocol is discussed in this paper to generate suitable process plans for the target products in real time and dynamically, based on the alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in the previous paper, and suitable process plans are searched for and generated to cope with both the dynamic changes of the product specifications and the disturbances of the manufacturing resources. We combine the heuristic search algorithms of the process plan networks with the negotiation protocols, in order to generate suitable process plans in the dynamic manufacturing environment.
NASA Technical Reports Server (NTRS)
Mclean, David R.; Tuchman, Alan; Potter, William J.
1991-01-01
A C-based artificial intelligence (AI) development effort which is based on a software tools approach is discussed with emphasis on reusability and maintainability of code. The discussion starts with simple examples of how list processing can easily be implemented in C and then proceeds to the implementations of frames and objects which use dynamic memory allocation. The implementation of procedures which use depth first search, constraint propagation, context switching, and blackboard-like simulation environment are described. Techniques for managing the complexity of C-based AI software are noted, especially the object-oriented techniques of data encapsulation and incremental development. Finally, all these concepts are put together by describing the components of planning software called the Planning And Resource Reasoning (PARR) Shell. This shell was successfully utilized for scheduling services of the Tracking and Data Relay Satellite System for the Earth Radiation Budget Satellite since May of 1987 and will be used for operations scheduling of the Explorer Platform in Nov. of 1991.
A Decentralized Scheduling Policy for a Dynamically Reconfigurable Production System
NASA Astrophysics Data System (ADS)
Giordani, Stefano; Lujak, Marin; Martinelli, Francesco
In this paper, the static layout of a traditional multi-machine factory producing a set of distinct goods is integrated with a set of mobile production units (robots). The robots dynamically change their work positions to increase the production rates of the different product types in response to fluctuations of demands and production costs during a given time horizon. Assuming that the planning time horizon is subdivided into a finite number of time periods, this particularly flexible layout requires the definition and solution of a complex scheduling problem, involving, for each period of the planning time horizon, the determination of the positions of the robots, i.e., their assignment to the respective tasks, in order to minimize production costs given the product demand rates during the planning time horizon.
Hybrid robust predictive optimization method of power system dispatch
Chandra, Ramu Sharat [Niskayuna, NY; Liu, Yan [Ballston Lake, NY; Bose, Sumit [Niskayuna, NY; de Bedout, Juan Manuel [West Glenville, NY
2011-08-02
A method of power system dispatch control solves power system dispatch problems by integrating a larger variety of generation, load and storage assets, including without limitation, combined heat and power (CHP) units, renewable generation with forecasting, controllable loads, electric, thermal and water energy storage. The method employs a predictive algorithm to dynamically schedule different assets in order to achieve global optimization and maintain the system normal operation.
Scheduling Randomly-Deployed Heterogeneous Video Sensor Nodes for Reduced Intrusion Detection Time
NASA Astrophysics Data System (ADS)
Pham, Congduc
This paper proposes to use video sensor nodes to provide an efficient intrusion detection system. We use a scheduling mechanism that takes into account the criticality of the surveillance application and present a performance study of various cover set construction strategies that take into account cameras with heterogeneous angle of view and those with very small angle of view. We show by simulation how a dynamic criticality management scheme can provide fast event detection for mission-critical surveillance applications by increasing the network lifetime and providing low stealth time of intrusions.
Automatic generation of efficient orderings of events for scheduling applications
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1994-01-01
In scheduling a set of tasks, it is often not known with certainty how long a given event will take. We call this duration uncertainty. Duration uncertainty is a primary obstacle to the successful completion of a schedule. If a duration of one task is longer than expected, the remaining tasks are delayed. The delay may result in the abandonment of the schedule itself, a phenomenon known as schedule breakage. One response to schedule breakage is on-line, dynamic rescheduling. A more recent alternative is called proactive rescheduling. This method uses statistical data about the durations of events in order to anticipate the locations in the schedule where breakage is likely prior to the execution of the schedule. It generates alternative schedules at such sensitive points, which can be then applied by the scheduler at execution time, without the delay incurred by dynamic rescheduling. This paper proposes a technique for making proactive error management more effective. The technique is based on applying a similarity-based method of clustering to the problem of identifying similar events in a set of events.
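A very small stand-in for the proactive analysis described above: using historical duration samples for each task, accumulate the mean and variance along the schedule and flag the positions where a pessimistic (roughly 95th-percentile) cumulative duration exceeds the available slack, i.e., where breakage is likely. The statistics used and the threshold are assumptions; the paper's similarity-based clustering of events is not reproduced here.

```python
import statistics

def breakage_candidates(tasks, total_slack, z=1.645):
    """Flag schedule positions where duration uncertainty is likely to break
    the schedule (illustrative heuristic, not the paper's clustering method).

    tasks       -- ordered list of (name, samples) with historical durations
    total_slack -- total time available before the schedule is considered broken
    """
    cum_mean, cum_var, flagged = 0.0, 0.0, []
    for name, samples in tasks:
        cum_mean += statistics.mean(samples)
        cum_var += statistics.pvariance(samples)
        pessimistic = cum_mean + z * cum_var ** 0.5
        if pessimistic > total_slack:
            flagged.append(name)   # prepare an alternative schedule here
    return flagged
```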
Software-Engineering Process Simulation (SEPS) model
NASA Technical Reports Server (NTRS)
Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.
1992-01-01
The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life cycle development activities and management decision making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform post-mortem assessments.
Approximate dynamic programming approaches for appointment scheduling with patient preferences.
Li, Xin; Wang, Jin; Fung, Richard Y K
2018-04-01
During the appointment booking process in out-patient departments, the level of patient satisfaction can be affected by whether or not their preferences can be met, including the choice of physicians and preferred time slot. In addition, because the appointments are sequential, considering future possible requests is also necessary for a successful appointment system. This paper proposes a Markov decision process model for optimizing the scheduling of sequential appointments with patient preferences. In contrast to existing models, the evaluation of a booking decision in this model focuses on the extent to which preferences are satisfied. Characteristics of the model are analysed to develop a system for formulating booking policies. Based on these characteristics, two types of approximate dynamic programming algorithms are developed to avoid the curse of dimensionality. Experimental results suggest directions for further fine-tuning of the model, as well as improving the efficiency of the two proposed algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.
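The flavour of the approximate dynamic programming policies can be conveyed with a one-step lookahead rule: for an arriving request, tentatively book each feasible (physician, slot) choice, score it as immediate preference satisfaction plus an approximate value of the remaining capacity, and keep the best. The data layout and the `value_estimate` interface are illustrative assumptions, not the paper's Markov decision process formulation.

```python
def book(request, capacity, value_estimate):
    """One-step lookahead booking policy (sketch of the ADP idea).

    request        -- dict with 'preferences': list of (physician, slot, score)
    capacity       -- dict (physician, slot) -> remaining appointments
    value_estimate -- callable(capacity) -> approximate value of future requests
    """
    best_choice, best_total = None, float("-inf")
    for physician, slot, score in request["preferences"]:
        if capacity.get((physician, slot), 0) <= 0:
            continue
        capacity[(physician, slot)] -= 1          # tentatively book
        total = score + value_estimate(capacity)
        capacity[(physician, slot)] += 1          # undo the tentative booking
        if total > best_total:
            best_choice, best_total = (physician, slot), total
    return best_choice   # None means the request is rejected or deferred
```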
Collaborative Resource Allocation
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester
2007-01-01
Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
Smart monitoring system based on adaptive current control for superconducting cable test.
Arpaia, Pasquale; Ballarino, Amalia; Daponte, Vincenzo; Montenero, Giuseppe; Svelto, Cesare
2014-12-01
A smart monitoring system for superconducting cable test is proposed with an adaptive current control of a superconducting transformer secondary. The design, based on Fuzzy Gain Scheduling, allows the controller parameters to adapt continuously, and finely, to the working variations arising from transformer nonlinear dynamics. The control system is integrated in a fully digital control loop, with all the related benefits, i.e., high noise rejection, ease of implementation/modification, and so on. In particular, an accurate model of the system, controlled by a Fuzzy Gain Scheduler of the superconducting transformer, was achieved by an experimental campaign through the working domain at several current ramp rates. The model performance was characterized by simulation, under all the main operating conditions, in order to guide the controller design. Finally, the proposed monitoring system was experimentally validated at European Organization for Nuclear Research (CERN) in comparison to the state-of-the-art control system [P. Arpaia, L. Bottura, G. Montenero, and S. Le Naour, "Performance improvement of a measurement station for superconducting cable test," Rev. Sci. Instrum. 83, 095111 (2012)] of the Facility for the Research on Superconducting Cables, achieving a significant performance improvement: a reduction in the system overshoot by 50%, with a related attenuation of the corresponding dynamic residual error (both absolute and RMS) up to 52%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...
2017-10-10
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF)more » model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.« less
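The full MICP formulation is not reproduced in the abstract. The Python sketch below only illustrates the kind of discrete-time, first-order building thermal model typically embedded as a constraint in such formulations, together with a naive threshold schedule for the HVAC power; the coefficients, comfort band, power rating, and prices are assumed values, not the paper's.

    # Sketch of a first-order building thermal model and a naive HVAC schedule.
    # All coefficients, the comfort band, and prices are illustrative assumptions;
    # this is not the paper's MICP model.

    A, B, C = 0.95, -0.4, 0.05        # thermal inertia, HVAC cooling gain, ambient coupling
    COMFORT = (21.0, 24.0)            # indoor comfort band in deg C
    HVAC_KW = (0.0, 5.0)              # HVAC power range

    def step_temperature(t_in, p_hvac, t_out):
        """T[k+1] = A*T[k] + B*P_hvac[k] + C*T_out[k] (first-order RC model)."""
        return A * t_in + B * p_hvac + C * t_out

    def naive_schedule(t_in0, t_out_forecast, price):
        """Pre-cool in cheap hours near the top of the band, otherwise cool only when too warm."""
        t_in, plan = t_in0, []
        cheap = sorted(price)[len(price) // 4]   # lowest-quartile price threshold
        for t_out, p in zip(t_out_forecast, price):
            if t_in > COMFORT[1] or (p <= cheap and t_in > COMFORT[1] - 0.5):
                p_hvac = HVAC_KW[1]
            else:
                p_hvac = HVAC_KW[0]
            plan.append(p_hvac)
            t_in = step_temperature(t_in, p_hvac, t_out)
        return plan

    hours = 24
    schedule = naive_schedule(23.0, [30.0] * hours, [0.1] * 8 + [0.3] * 8 + [0.1] * 8)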
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.
NASA Technical Reports Server (NTRS)
Borse, John E.; Owens, Christopher C.
1992-01-01
Our research focuses on the problem of recovering from perturbations in large-scale schedules, specifically on the ability of a human-machine partnership to dynamically modify an airline schedule in response to unanticipated disruptions. This task is characterized by massive interdependencies and a large space of possible actions. Our approach is to apply the following: qualitative, knowledge-intensive techniques relying on a memory of stereotypical failures and appropriate recoveries; and quantitative techniques drawn from the Operations Research community's work on scheduling. Our main scientific challenge is to represent schedules, failures, and repairs so as to make both sets of techniques applicable to the same data. This paper outlines ongoing research in which we are cooperating with United Airlines to develop our understanding of the scientific issues underlying the practicalities of dynamic, real-time schedule repair.
Wind Power Ramping Product for Increasing Power System Flexibility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Mingjian; Zhang, Jie; Wu, Hongyu
With increasing penetrations of wind power, system operators are concerned about a potential lack of system flexibility and ramping capacity in real-time dispatch stages. In this paper, a modified dispatch formulation is proposed considering the wind power ramping product (WPRP). A swinging door algorithm (SDA) and dynamic programming are combined and used to detect WPRPs in the next scheduling periods. The detected WPRPs are included in the unit commitment (UC) formulation considering ramping capacity limits, active power limits, and flexible ramping requirements. The modified formulation is solved by mixed integer linear programming. Numerical simulations on a modified PJM 5-bus System show the effectiveness of the model considering WPRP, which not only reduces the production cost but also does not affect the generation schedules of thermal units.
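Neither the swinging door algorithm nor the dynamic-programming refinement is specified in the abstract. The Python sketch below shows only the classic swinging-door segmentation step on a wind power time series, which is the usual first stage of ramp detection; the tolerance epsilon and the example data are assumed, and the paper's dynamic-programming refinement is omitted.

    # Sketch of swinging-door segmentation of a power series into approximately
    # linear ramp segments. epsilon (MW tolerance) is an assumed parameter; the
    # paper's dynamic-programming refinement is not reproduced here.

    def swinging_door(power, epsilon):
        """Return (start_index, end_index) pairs of approximately linear segments."""
        segments, start = [], 0
        up_slope, low_slope = float("-inf"), float("inf")
        for k in range(1, len(power)):
            dt = k - start
            up_slope = max(up_slope, (power[k] - (power[start] + epsilon)) / dt)
            low_slope = min(low_slope, (power[k] - (power[start] - epsilon)) / dt)
            if up_slope > low_slope:            # doors have crossed: close the segment
                segments.append((start, k - 1))
                start = k - 1
                dt = k - start
                up_slope = (power[k] - (power[start] + epsilon)) / dt
                low_slope = (power[k] - (power[start] - epsilon)) / dt
        segments.append((start, len(power) - 1))
        return segments

    wind = [10, 12, 15, 19, 24, 24, 23, 18, 12, 11, 11]
    ramps = swinging_door(wind, epsilon=1.0)   # segments the series into ramp events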
Cloud computing task scheduling strategy based on improved differential evolution algorithm
NASA Astrophysics Data System (ADS)
Ge, Junwei; He, Qian; Fang, Yiqiu
2017-04-01
In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. Firstly, a cloud computing task scheduling model is built and a fitness function is derived from it; the fitness function is then optimized by the improved differential evolution algorithm, which uses a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to balance global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time, save user cost, and achieve good optimal scheduling of cloud computing tasks.
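The abstract describes the improved algorithm only at a high level. Below is a generic Python sketch of differential evolution applied to cloud task-to-VM assignment with makespan as the fitness; the mutation factor that decays over generations is only a plausible reading of the "dynamic mutation strategy", and all task/VM data and parameter values are assumptions.

    # Generic differential-evolution sketch for assigning tasks to VMs (makespan
    # fitness). The generation-dependent mutation factor is an assumed reading of
    # the "dynamic mutation strategy"; all parameter values are illustrative.
    import random

    TASK_LEN = [400, 250, 900, 120, 640, 700, 330, 510]   # task lengths (MI), assumed
    VM_MIPS = [500, 1000, 750]                             # VM speeds, assumed

    def makespan(assignment):
        load = [0.0] * len(VM_MIPS)
        for task, vm in zip(TASK_LEN, assignment):
            load[vm] += task / VM_MIPS[vm]
        return max(load)

    def decode(vector):
        """Map a continuous DE vector to a discrete VM index per task."""
        return [int(x) % len(VM_MIPS) for x in vector]

    def de_schedule(pop_size=20, generations=100, cr=0.9, f_max=0.9, f_min=0.4):
        dim = len(TASK_LEN)
        pop = [[random.uniform(0, len(VM_MIPS)) for _ in range(dim)] for _ in range(pop_size)]
        for g in range(generations):
            f = f_max - (f_max - f_min) * g / generations   # mutation factor decays
            for i in range(pop_size):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                trial = [a[d] + f * (b[d] - c[d]) if random.random() < cr else pop[i][d]
                         for d in range(dim)]
                if makespan(decode(trial)) <= makespan(decode(pop[i])):
                    pop[i] = trial
            # (a generation-dependent selection strategy would be applied here)
        best = min(pop, key=lambda v: makespan(decode(v)))
        return decode(best), makespan(decode(best))

    plan, span = de_schedule()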
An application of different dioids in public key cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durcheva, Mariana I., E-mail: mdurcheva66@gmail.com
2014-11-18
Dioids provide a natural framework for analyzing a broad class of discrete event dynamical systems such as the design and analysis of bus and railway timetables, scheduling of high-throughput industrial processes, solution of combinatorial optimization problems, the analysis and improvement of flow systems in communication networks. They have appeared in several branches of mathematics such as functional analysis, optimization, stochastic systems and dynamic programming, tropical geometry, fuzzy logic. In this paper we show how to involve dioids in public key cryptography. The main goal is to create key-exchange protocols based on dioids. Additionally the digital signature scheme is presented.
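The paper's key-exchange construction is not given in the abstract. As background only, the Python sketch below implements the max-plus dioid (an idempotent semiring) and its matrix product, which is the algebra behind the timetable and flow-system applications mentioned above; it does not reproduce the cryptographic protocol.

    # Background sketch: the max-plus dioid (R U {-inf}, max, +) and its matrix
    # product, the algebra behind the scheduling applications named above.
    # This is not the paper's key-exchange protocol.
    NEG_INF = float("-inf")

    def maxplus_matmul(A, B):
        """(A (x) B)[i][j] = max_k (A[i][k] + B[k][j])."""
        n, m, p = len(A), len(B), len(B[0])
        return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
                for i in range(n)]

    # Earliest event times x(k+1) = A (x) x(k): entries of A are activity durations.
    A = [[2.0, 5.0],
         [NEG_INF, 3.0]]
    x0 = [[0.0], [0.0]]
    x1 = maxplus_matmul(A, x0)   # [[5.0], [3.0]]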
Magnetospheric MultiScale (MMS) System Manager
NASA Technical Reports Server (NTRS)
Schiff, Conrad; Maher, Francis Alfred; Henely, Sean Philip; Rand, David
2014-01-01
The Magnetospheric MultiScale (MMS) mission is an ambitious NASA space science mission in which 4 spacecraft are flown in tight formation about a highly elliptical orbit. Each spacecraft has multiple instruments that measure particle and field compositions in the Earth's magnetosphere. By controlling the members' relative motion, MMS can distinguish temporal and spatial fluctuations in a way that a single spacecraft cannot. To achieve this control, 2 sets of four maneuvers, distributed evenly across the spacecraft, must be performed approximately every 14 days. Performing a single maneuver on an individual spacecraft is usually labor intensive, and the complexity clearly increases with four. As a result, the MMS flight dynamics team turned to the System Manager to put routine or error-prone activities under machine control, freeing the analysts for activities that require human judgment. The System Manager is an expert system that is capable of handling operations activities associated with performing MMS maneuvers. As an expert system, it can work off a known schedule, launching jobs based on a one-time occurrence or on a set reoccurring schedule. It is also able to detect situational changes and use event-driven programming to change schedules, adapt activities, or call for help.
Systemic delay propagation in the US airport network
Fleurquin, Pablo; Ramasco, José J.; Eguiluz, Victor M.
2013-01-01
Technologically driven transport systems are characterized by a networked structure connecting operation centers and by a dynamics ruled by pre-established schedules. Schedules impose serious constraints on the timing of the operations, condition the allocation of resources and define a baseline to assess system performance. Here we study the performance of an air transportation system in terms of delays. Technical, operational or meteorological issues affecting some flights give rise to primary delays. When operations continue, such delays can propagate, magnify and eventually involve a significant part of the network. We define metrics able to quantify the level of network congestion and introduce a model that reproduces the delay propagation patterns observed in the U.S. performance data. Our results indicate that there is a non-negligible risk of systemic instability even under normal operating conditions. We also identify passenger and crew connectivity as the most relevant internal factor contributing to delay spreading. PMID:23362459
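The metrics and propagation model are defined in the paper, not in the abstract. The Python sketch below is a toy illustration of schedule-driven delay propagation: aircraft fly fixed rotations, and any arrival delay beyond the scheduled turnaround buffer propagates to the next departure. The rotation data, buffer value, and primary delays are assumed illustrative inputs.

    # Toy delay-propagation model: a primary delay on one flight leg propagates
    # along an aircraft's rotation whenever it exceeds the scheduled ground buffer.
    # Rotations, buffer, and primary delays are assumed illustrative values.

    TURNAROUND_BUFFER = 30.0                    # scheduled ground time in minutes
    ROTATIONS = {                               # aircraft -> ordered list of flight legs
        "N101": ["ORD-DEN", "DEN-LAX", "LAX-SFO"],
        "N102": ["JFK-ORD", "ORD-MIA"],
    }

    def propagate(primary_delay):
        """Return per-leg departure delays given primary delays on individual legs."""
        delays = {}
        for aircraft, legs in ROTATIONS.items():
            carried = 0.0
            for leg in legs:
                dep_delay = carried + primary_delay.get(leg, 0.0)
                delays[leg] = dep_delay
                # only the delay exceeding the buffer survives the turnaround
                carried = max(0.0, dep_delay - TURNAROUND_BUFFER)
        return delays

    primary = {"ORD-DEN": 55.0}
    delays = propagate(primary)   # ORD-DEN: 55, DEN-LAX: 25, LAX-SFO: 0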
Learning to Control Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Subramanian, Devika
2004-01-01
Advanced life support systems have many interacting processes and limited resources. Controlling and optimizing advanced life support systems presents unique challenges. In particular, advanced life support systems are nonlinear coupled dynamical systems and it is difficult for humans to take all interactions into account to design an effective control strategy. In this project, we developed several reinforcement learning controllers that actively explore the space of possible control strategies, guided by rewards from a user-specified long-term objective function. We evaluated these controllers using a discrete event simulation of an advanced life support system. This simulation, called BioSim, designed by NASA scientists David Kortenkamp and Scott Bell, has multiple, interacting life support modules including crew, food production, air revitalization, water recovery, solid waste incineration and power. They are implemented in a consumer/producer relationship in which certain modules produce resources that are consumed by other modules. Stores hold resources between modules. Control of this simulation is via adjusting flows of resources between modules and into/out of stores. We developed adaptive algorithms that control the flow of resources in BioSim. Our learning algorithms discovered several ingenious strategies for maximizing mission length by controlling the air and water recycling systems as well as crop planting schedules. By exploiting non-linearities in the overall system dynamics, the learned controllers easily outperformed controllers written by human experts. In sum, we accomplished three goals. We (1) developed foundations for learning models of coupled dynamical systems by active exploration of the state space, (2) developed and tested algorithms that learn to efficiently control air and water recycling processes as well as crop scheduling in BioSim, and (3) developed an understanding of the role of machine learning in designing control systems for advanced life support.
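The learned controllers and the BioSim interface are not specified in the abstract. The Python sketch below is only a generic tabular Q-learning loop of the kind described, where each action sets a resource flow rate and the reward reflects how long the simulated mission survives; the environment stub, action set, and hyperparameters are assumptions.

    # Generic tabular Q-learning sketch: actions set a resource flow rate, the
    # reward reflects mission survival. Environment stub and hyperparameters are
    # illustrative assumptions, not the project's actual controller.
    import random
    from collections import defaultdict

    ACTIONS = [0.0, 0.5, 1.0]          # normalized water-recycling flow rates (assumed)
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    def choose(Q, state):
        if random.random() < EPSILON:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

    def train(env, episodes=200):
        """env must provide reset() -> state and step(flow) -> (state, reward, done)."""
        Q = defaultdict(float)
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                a = choose(Q, state)
                nxt, reward, done = env.step(ACTIONS[a])
                best_next = max(Q[(nxt, b)] for b in range(len(ACTIONS)))
                Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])
                state = nxt
        return Q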
The use of cluster analysis techniques in spaceflight project cost risk estimation
NASA Technical Reports Server (NTRS)
Fox, G.; Ebbeler, D.; Jorgensen, E.
2003-01-01
Project cost risk is the uncertainty in final project cost, contingent on initial budget, requirements and schedule. For a proposed mission, a dynamic simulation model relying for some of its input on a simple risk elicitation is used to identify and quantify systemic cost risk.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-13
... of Labor, Employment and Training Administration, Office of Workforce Security, 200 Constitution... with reemployment and training services through the workforce investment system by linking them to... understand program dynamics, and to gather data to report on REAs, including the number of scheduled in...
A novel LTE scheduling algorithm for green technology in smart grid.
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered as a perfect solution for combining renewable energy resources and electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy system, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in Long Term Evolution (LTE) network. To compensate the weakness in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application's priority, whereas the algorithm makes scheduling decision based on dynamic weighting factors of multi-criteria to satisfy the demands (delay, past average throughput and instantaneous transmission rate) of quality of service. Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as provide a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7 % and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively.
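The exact weighting factors are defined in the paper. The Python sketch below only illustrates a per-user scheduling metric of the general structure the abstract describes, combining head-of-line delay, past average throughput, and the instantaneous achievable rate with application-priority weights; all weights, field names, and user records are assumptions, not the published algorithm.

    # Sketch of a priority- and QoS-aware scheduling metric combining head-of-line
    # delay, past average throughput, and instantaneous achievable rate. Weights
    # and the user records are illustrative assumptions.

    APP_PRIORITY = {"DA": 3.0, "DER": 2.0, "EV": 1.0}   # assumed application priorities

    def metric(user, w_delay=0.5, w_rate=0.3, w_fair=0.2):
        """Larger metric -> user is scheduled first on the next resource block."""
        delay_term = user["hol_delay_ms"] / user["delay_budget_ms"]
        rate_term = user["inst_rate_mbps"] / max(user["peak_rate_mbps"], 1e-9)
        fairness_term = 1.0 / max(user["avg_throughput_mbps"], 1e-9)
        return APP_PRIORITY[user["app"]] * (w_delay * delay_term +
                                            w_rate * rate_term +
                                            w_fair * fairness_term)

    users = [
        {"app": "DA", "hol_delay_ms": 40, "delay_budget_ms": 100,
         "inst_rate_mbps": 4, "peak_rate_mbps": 10, "avg_throughput_mbps": 1.2},
        {"app": "EV", "hol_delay_ms": 10, "delay_budget_ms": 500,
         "inst_rate_mbps": 8, "peak_rate_mbps": 10, "avg_throughput_mbps": 3.0},
    ]
    next_user = max(users, key=metric)   # the DA user wins the resource block here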
A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) application is being used nowadays to meet the demand of increasing power consumption. SG application is considered as a perfect solution for combining renewable energy resources and electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy system, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in Long Term Evolution (LTE) network. To compensate the weakness in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application’s priority, whereas the algorithm makes scheduling decision based on dynamic weighting factors of multi-criteria to satisfy the demands (delay, past average throughput and instantaneous transmission rate) of quality of service. Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as provide a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7 % and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
Towards a dynamical scheduler for ALMA: a science - software collaboration
NASA Astrophysics Data System (ADS)
Avarias, Jorge; Toledo, Ignacio; Espada, Daniel; Hibbard, John; Nyman, Lars-Ake; Hiriart, Rafael
2016-07-01
State-of-the-art astronomical facilities are costly to build and operate, hence it is essential that these facilities be operated as efficiently as possible, maximizing the scientific output while minimizing overhead times. Over the last decades the scheduling problem has drawn research attention because, for new facilities, it has been demonstrated that scheduling observations manually is unfeasible, due to the complexity of satisfying the astronomical and instrumental constraints and the number of scientific proposals to be reviewed and evaluated in near real-time. In addition, the dynamic nature of some constraints makes this problem even more difficult. The Atacama Large Millimeter/submillimeter Array (ALMA) is a major collaboration effort between European (ESO), North American (NRAO) and East Asian (NAOJ) partners, in operation on the Chilean Chajnantor plateau at 5,000 meters of altitude. During normal operations at least two independent arrays are available, aiming to achieve different types of science. Since ALMA does not observe in the visible spectrum, observations are not limited to night time only, thus 24/7 operation with as little downtime as possible is expected when the full operations state is reached. However, during preliminary operations (early science) ALMA has been operated on tight schedules, using around half of the whole day to conduct scientific observations. The purpose of this paper is to explain how observation scheduling and its optimization are done within ALMA, giving details about the problem complexity and its similarities to and differences from traditional scheduling problems found in the literature. The paper delves into the current recommendation system implementation and the difficulties found on the road to its deployment in production.
NASA Astrophysics Data System (ADS)
Barreiro, F. H.; Borodin, M.; De, K.; Golubkov, D.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Padolski, S.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-Terabyte tasks to run efficiently without human intervention. We have implemented a “train” model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available and allow physics-group data processing and analysis to be chained with central production by the experiment. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte-Carlo and physics group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
Design of the Protocol Processor for the ROBUS-2 Communication System
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.
2005-01-01
The ROBUS-2 Protocol Processor (RPP) is a custom-designed hardware component implementing the functionality of the ROBUS-2 fault-tolerant communication system. The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault tolerant integrated modular architecture currently under development at NASA Langley Research Center. ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs), in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, time reference (clock synchronization), and distributed diagnosis (group membership). ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 tolerates internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. ROBUS consists of RPPs connected to each other by a lower-level physical communication network. The RPP has a pipelined architecture and the design is parameterized in the behavioral and structural domains. The design of the RPP enables the bus to achieve a PE-message throughput that approaches the available bandwidth at the physical layer.
Coordinated Scheduling for Interdependent Electric Power and Natural Gas Infrastructures
Zlotnik, Anatoly; Roald, Line; Backhaus, Scott; ...
2016-03-24
The extensive installation of gas-fired power plants in many parts of the world has led electric systems to depend heavily on reliable gas supplies. The use of gas-fired generators for peak load and reserve provision causes high intraday variability in withdrawals from high-pressure gas transmission systems. Such variability can lead to gas price fluctuations and supply disruptions that affect electric generator dispatch, electricity prices, and threaten the security of power systems and gas pipelines. These infrastructures function on vastly different spatio-temporal scales, which prevents current practices for separate operations and market clearing from being coordinated. Here in this article, we apply new techniques for control of dynamic gas flows on pipeline networks to examine day-ahead scheduling of electric generator dispatch and gas compressor operation for different levels of integration, spanning from separate forecasting and simulation to combined optimal control. We formulate multiple coordination scenarios and develop tractable, physically accurate computational implementations. These scenarios are compared using an integrated model of test networks for power and gas systems with 24 nodes and 24 pipes, respectively, which are coupled through gas-fired generators. The analysis quantifies the economic efficiency and security benefits of gas-electric coordination and dynamic gas system operation.
Population-based learning of load balancing policies for a distributed computer system
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; Wah, Benjamin W.
1993-01-01
Effective load-balancing policies use dynamic resource information to schedule tasks in a distributed computer system. We present a novel method for automatically learning such policies. At each site in our system, we use a comparator neural network to predict the relative speedup of an incoming task using only the resource-utilization patterns obtained prior to the task's arrival. Outputs of these comparator networks are broadcast periodically over the distributed system, and the resource schedulers at each site use these values to determine the best site for executing an incoming task. The delays incurred in propagating workload information and tasks from one site to another, as well as the dynamic and unpredictable nature of workloads in multiprogrammed multiprocessors, may cause the workload pattern at the time of execution to differ from patterns prevailing at the times of load-index computation and decision making. Our load-balancing policy accommodates this uncertainty by using certain tunable parameters. We present a population-based machine-learning algorithm that adjusts these parameters in order to achieve high average speedups with respect to local execution. Our results show that our load-balancing policy, when combined with the comparator neural network for workload characterization, is effective in exploiting idle resources in a distributed computer system.
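The comparator neural networks themselves cannot be reconstructed from the abstract. The Python sketch below only shows the surrounding policy: each site periodically broadcasts a predicted relative speedup for an incoming task, and the scheduler sends the task to the best remote site only if its prediction beats the local one by a tunable margin (one of the tunable parameters the abstract mentions). The predictor values and the margin are stub assumptions.

    # Sketch of the load-balancing decision: each site broadcasts a predicted
    # relative speedup for an incoming task; the task goes remote only if the best
    # remote prediction beats the local one by a tunable margin. Values are stubs.

    def pick_site(predicted_speedup, local_site, margin=0.1):
        """predicted_speedup: site -> predicted speedup vs. local execution."""
        best_site = max(predicted_speedup, key=predicted_speedup.get)
        local = predicted_speedup[local_site]
        # the tunable margin absorbs staleness of the broadcast load indices
        if best_site != local_site and predicted_speedup[best_site] > local + margin:
            return best_site
        return local_site

    broadcast = {"siteA": 1.0, "siteB": 1.6, "siteC": 0.7}   # stub comparator outputs
    target = pick_site(broadcast, local_site="siteA")         # -> "siteB"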
Adaptations in Electronic Structure Calculations in Heterogeneous Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamudupula, Sai
Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware based on the example of the computational chemistry package GAMESS and middleware NICAN. The existing synchronous model is limited by the possible delays due to the middleware processing time under the sustainable runtime system conditions. Proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of adapting statically and dynamically its fragment scheduling policy based on the computing platform conditions. Significant execution time and throughput gains have been obtained due to such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on the main memory availability at run time. NICAN prompts FMO to postpone scheduling certain fragments, if there is not enough memory for their immediate execution. Hence, FMO may be able to complete the calculations whereas without such adaptations it aborts.
Automated Planning for a Deep Space Communications Station
NASA Technical Reports Server (NTRS)
Estlin, Tara; Fisher, Forest; Mutz, Darren; Chien, Steve
1999-01-01
This paper describes the application of Artificial Intelligence planning techniques to the problem of antenna track plan generation for a NASA Deep Space Communications Station. The described system enables an antenna communications station to automatically respond to a set of tracking goals by correctly configuring the appropriate hardware and software to provide the requested communication services. To perform this task, the Automated Scheduling and Planning Environment (ASPEN) has been applied to automatically produce antenna tracking plans that are tailored to support a set of input goals. In this paper, we describe the antenna automation problem, the ASPEN planning and scheduling system, how ASPEN is used to generate antenna track plans, the results of several technology demonstrations, and future work utilizing dynamic planning technology.
Performance and control study of a low-pressure-ratio turbojet engine for a drone aircraft
NASA Technical Reports Server (NTRS)
Seldner, K.; Geyser, L. C.; Gold, H.; Walker, D.; Burgner, G.
1972-01-01
The results of analog and digital computer studies of a low-pressure-ratio turbojet engine system for use in a drone vehicle are presented. The turbojet engine consists of a four-stage axial compressor, single-stage turbine, and a fixed area exhaust nozzle. Three simplified fuel schedules and a generalized parameter fuel control for the engine system are presented and evaluated. The evaluation is based on the performance of each schedule or control during engine acceleration from a windmill start at Mach 0.8 and 6100 meters to 100 percent corrected speed. It was found that, because of the higher acceleration margin permitted by the control, the generalized parameter control exhibited the best dynamic performance.
Quasi-dynamic earthquake fault systems with rheological heterogeneity
NASA Astrophysics Data System (ADS)
Brietzke, G. B.; Hainzl, S.; Zoeller, G.; Holschneider, M.
2009-12-01
Seismic risk and hazard estimates mostly use purely empirical, stochastic models of earthquake fault systems tuned specifically to the vulnerable areas of interest. Although such models allow for reasonable risk estimates, they cannot support physical statements about the described seismicity. In contrast to such empirical stochastic models, physics-based earthquake fault system models allow for physical reasoning and interpretation of the produced seismicity and system dynamics. Recently, different fault system earthquake simulators based on frictional stick-slip behavior have been used to study effects of stress heterogeneity, rheological heterogeneity, or geometrical complexity on earthquake occurrence, spatial and temporal clustering of earthquakes, and system dynamics. Here we present a comparison of characteristics of synthetic earthquake catalogs produced by two different formulations of quasi-dynamic fault system earthquake simulators. Both models are based on discretized frictional faults embedded in an elastic half-space. While one (1) is governed by rate- and state-dependent friction allowing three evolutionary stages of independent fault patches, the other (2) is governed by instantaneous frictional weakening with scheduled (and therefore causal) stress transfer. We analyze spatial and temporal clustering of events and characteristics of system dynamics by means of physical parameters of the two approaches.
Description of waste pretreatment and interfacing systems dynamic simulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbrick, D.J.; Zimmerman, B.D.
1995-05-01
The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that approximately no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and retrieval schedule appear to have a significant impact on overall tank usage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramamurthy, Byravamurthy
2014-05-05
In this project, we developed scheduling frameworks for dynamic bandwidth demands of large-scale science applications. In particular, we developed scheduling algorithms for dynamic bandwidth demands. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search and Genetic Algorithm heuristics, we have utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We have disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling of lightpaths over optical wavelength division multiplexed (WDM) networks. We published several conference papers and journal papers on this topic. We also addressed the problems of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G
2018-04-04
The purpose of the ADAPT code is to generate Dynamic Event Trees (DET) using a user specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DET which uses explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution along with stochastic modeling. When DET are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. The DET branching occurs at user specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.
NASA Technical Reports Server (NTRS)
Hicks, John W.; Moulton, Bryan J.
1988-01-01
The camber control loop of the X-29A FSW aircraft was designed to furnish the optimum L/D for trimmed, stabilized flight. A marked difference was noted between automatic wing camber control loop behavior in dynamic maneuvers and in stabilized flight conditions, which in turn affected subsonic aerodynamic performance. The degree of drag level increase was a direct function of maneuver rate. Attention is given to the aircraft flight drag polar effects of maneuver dynamics in light of wing camber control loop schedule. The effect of changing camber scheduling to better track the optimum automatic camber control L/D schedule is discussed.
NASA Technical Reports Server (NTRS)
Chapman, K. B.; Cox, C. M.; Thomas, C. W.; Cuevas, O. O.; Beckman, R. M.
1994-01-01
The Flight Dynamics Facility (FDF) at the NASA Goddard Space Flight Center (GSFC) generates numerous products for NASA-supported spacecraft, including the Tracking and Data Relay Satellites (TDRS's), the Hubble Space Telescope (HST), the Extreme Ultraviolet Explorer (EUVE), and the space shuttle. These products include orbit determination data, acquisition data, event scheduling data, and attitude data. In most cases, product generation involves repetitive execution of many programs. The increasing number of missions supported by the FDF has necessitated the use of automated systems to schedule, execute, and quality assure these products. This automation allows the delivery of accurate products in a timely and cost-efficient manner. To be effective, these systems must automate as many repetitive operations as possible and must be flexible enough to meet changing support requirements. The FDF Orbit Determination Task (ODT) has implemented several systems that automate product generation and quality assurance (QA). These systems include the Orbit Production Automation System (OPAS), the New Enhanced Operations Log (NEOLOG), and the Quality Assurance Automation Software (QA Tool). Implementation of these systems has resulted in a significant reduction in required manpower, elimination of shift work and most weekend support, and improved support quality, while incurring minimal development cost. This paper will present an overview of the concepts used and experiences gained from the implementation of these automation systems.
Scheduling algorithms for rapid imaging using agile Cubesat constellations
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Li, Alan S.; Merrick, James H.
2018-02-01
Distributed Space Missions such as formation flight and constellations, are being recognized as important Earth Observation solutions to increase measurement samples over space and time. Cubesats are increasing in size (27U, ∼40 kg in development) with increasing capabilities to host imager payloads. Given the precise attitude control systems emerging in the commercial market, Cubesats now have the ability to slew and capture images within short notice. We propose a modular framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile Cubesats in a constellation such that they maximize the number of observed images and observation time, within the constraints of Cubesat hardware specifications. The attitude control strategy combines bang-bang and PD control, with constraints such as power consumption, response time, and stability factored into the optimality computations and a possible extension to PID control to account for disturbances. Schedule optimization is performed using dynamic programming with two levels of heuristics, verified and improved upon using mixed integer linear programming. The automated scheduler is expected to run on ground station resources and the resultant schedules uplinked to the satellites for execution, however it can be adapted for onboard scheduling, contingent on Cubesat hardware and software upgrades. The framework is generalizable over small steerable spacecraft, sensor specifications, imaging objectives and regions of interest, and is demonstrated using multiple 20 kg satellites in Low Earth Orbit for two case studies - rapid imaging of Landsat's land and coastal images and extended imaging of global, warm water coral reefs. The proposed algorithm captures up to 161% more Landsat images than nadir-pointing sensors with the same field of view, on a 2-satellite constellation over a 12-h simulation. Integer programming was able to verify that optimality of the dynamic programming solution for single satellites was within 10%, and find up to 5% more optimal solutions. The optimality gap for constellations was found to be 22% at worst, but the dynamic programming schedules were found at nearly four orders of magnitude better computational speed than integer programming. The algorithm can include cloud cover predictions, ground downlink windows or any other spatial, temporal or angular constraints into the orbital module and be integrated into planning tools for agile constellations.
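The full scheduler couples attitude-control feasibility with the optimization. The Python sketch below isolates only a dynamic-programming core in the spirit of the one described: choose, per time step, whether to capture one of the currently visible targets so that total captures are maximized, subject to an assumed constant slew/settle gap between consecutive captures. The visibility windows and the gap are illustrative, and the attitude dynamics are omitted.

    # Dynamic-programming core of an agile imaging schedule: maximize captured
    # targets given per-step visibility and an assumed fixed slew/settle gap
    # between captures. Visibility data and the gap are illustrative.

    def schedule_images(visible, gap=2):
        """visible[t] = set of target ids imageable at step t; returns (count, plan)."""
        T = len(visible)
        best = [(0, [])] * (T + 1)                 # best schedule over steps >= t
        for t in range(T - 1, -1, -1):
            skip = best[t + 1]
            take = None
            if visible[t]:
                nxt = best[min(t + gap, T)]
                # picks one visible target (alphabetically) just for illustration
                take = (1 + nxt[0], [(t, sorted(visible[t])[0])] + nxt[1])
            best[t] = max(filter(None, [skip, take]), key=lambda x: x[0])
        return best[0]

    windows = [{"reef_1"}, set(), {"reef_1", "land_7"}, {"land_7"}, set(), {"coast_3"}]
    count, plan = schedule_images(windows)   # captures at steps 0, 3 and 5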
Optimizing CMS build infrastructure via Apache Mesos
NASA Astrophysics Data System (ADS)
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-01
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos enabled cluster and how this resulted in better resource usage, higher peak performance and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Quasi-Linear Parameter Varying Representation of General Aircraft Dynamics Over Non-Trim Region
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob
2007-01-01
For applying linear parameter varying (LPV) control synthesis and analysis to a nonlinear system, it is required that a nonlinear system be represented in the form of an LPV model. In this paper, a new representation method is developed to construct an LPV model from a nonlinear mathematical model without the restriction that an operating point must be in the neighborhood of equilibrium points. An LPV model constructed by the new method preserves local stabilities of the original nonlinear system at "frozen" scheduling parameters and also represents the original nonlinear dynamics of a system over a non-trim region. An LPV model of the motion of FASER (Free-flying Aircraft for Subscale Experimental Research) is constructed by the new method.
Dynamic Hierarchical Sleep Scheduling for Wireless Ad-Hoc Sensor Networks
Wen, Chih-Yu; Chen, Ying-Chih
2009-01-01
This paper presents two scheduling management schemes for wireless sensor networks, which manage the sensors by utilizing the hierarchical network structure and allocate network resources efficiently. A local criterion is used to simultaneously establish the sensing coverage and connectivity such that dynamic cluster-based sleep scheduling can be achieved. The proposed schemes are simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithms provide efficient network power control and can achieve high scalability in wireless sensor networks. PMID:22412343
Dynamic hierarchical sleep scheduling for wireless ad-hoc sensor networks.
Wen, Chih-Yu; Chen, Ying-Chih
2009-01-01
This paper presents two scheduling management schemes for wireless sensor networks, which manage the sensors by utilizing the hierarchical network structure and allocate network resources efficiently. A local criterion is used to simultaneously establish the sensing coverage and connectivity such that dynamic cluster-based sleep scheduling can be achieved. The proposed schemes are simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithms provide efficient network power control and can achieve high scalability in wireless sensor networks.
Sellers and Fossum on the end of the OBSS during EVA1 on STS-121 / Expedition 13 joint operations
2006-07-08
STS121-323-011 (8 July 2006) --- Astronauts Piers J. Sellers and Michael E. Fossum, STS-121 mission specialists, work in tandem on Space Shuttle Discovery's Remote Manipulator System/Orbiter Boom Sensor System (RMS/OBSS) during the mission's first scheduled session of extravehicular activity (EVA). Also visible on the OBSS are the Laser Dynamic Range Imager (LDRI), Intensified Television Camera (ITVC) and Laser Camera System (LCS).
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for the resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS) LiveOS is designed and implemented. Memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS in which thread stacks are allocated statically by the pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by the static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, the hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the thread stack number is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced more than 50% if compared to that of a traditional multithreaded OS. Not only is memory cost optimized, but energy cost is also optimized in LiveOS, and this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, energy cost of LiveOS can be reduced more than 30% when compared to the single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on the memory-constrained WSN nodes. PMID:25545264
NASA Astrophysics Data System (ADS)
Xiong, Lu; Yu, Zhuoping; Wang, Yang; Yang, Chen; Meng, Yufeng
2012-06-01
This paper focuses on the vehicle dynamic control system for a four in-wheel motor drive electric vehicle, aiming at improving vehicle stability under critical driving conditions. The vehicle dynamics controller is composed of three modules, i.e. motion following control, control allocation and vehicle state estimation. Considering the strong nonlinearity of the tyres under critical driving conditions, the yaw motion of the vehicle is regulated by gain scheduling control based on the linear quadratic regulator theory. The feed-forward and feedback gains of the controller are updated in real-time by online estimation of the tyre cornering stiffness, so as to ensure the control robustness against environmental disturbances as well as parameter uncertainty. The control allocation module allocates the calculated generalised force requirements to each in-wheel motor based on quadratic programming theory while taking the tyre longitudinal/lateral force coupling characteristic into consideration. Simulations under a variety of driving conditions are carried out to verify the control algorithm. Simulation results indicate that the proposed vehicle stability controller can effectively stabilise the vehicle motion under critical driving conditions.
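The exact gain schedule and LQR weights are not given in the abstract. The Python sketch below shows only the scheduling step of such a controller: yaw-moment feedback gains precomputed offline for a grid of tyre cornering-stiffness values are linearly interpolated at run time from the online stiffness estimate. The stiffness grid and gain values are placeholders, not the authors' design.

    # Sketch of the gain-scheduling step: yaw-moment feedback gains precomputed for
    # a grid of tyre cornering-stiffness values are linearly interpolated at run
    # time from the online stiffness estimate. Grid and gain values are placeholders.

    STIFFNESS_GRID = [40e3, 80e3, 120e3]        # N/rad, assumed operating points
    GAIN_TABLE = {                              # [k_beta, k_yawrate] per grid point (assumed)
        40e3:  [1800.0, 950.0],
        80e3:  [1200.0, 700.0],
        120e3: [900.0,  520.0],
    }

    def scheduled_gain(stiffness_est):
        """Linear interpolation of the feedback gains over the stiffness grid."""
        c = min(max(stiffness_est, STIFFNESS_GRID[0]), STIFFNESS_GRID[-1])
        for lo, hi in zip(STIFFNESS_GRID, STIFFNESS_GRID[1:]):
            if c <= hi:
                w = (c - lo) / (hi - lo)
                return [(1 - w) * g0 + w * g1
                        for g0, g1 in zip(GAIN_TABLE[lo], GAIN_TABLE[hi])]

    def yaw_moment(beta_err, yaw_rate_err, stiffness_est):
        k = scheduled_gain(stiffness_est)
        return -(k[0] * beta_err + k[1] * yaw_rate_err)   # corrective yaw moment demand

    m_z = yaw_moment(0.02, 0.15, stiffness_est=95e3)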
PRAIS: Distributed, real-time knowledge-based systems made easy
NASA Technical Reports Server (NTRS)
Goldstein, David G.
1990-01-01
This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response
NASA Astrophysics Data System (ADS)
Niu, X. N.; Tang, H.; Wu, L. X.
2018-04-01
Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor the disaster area. In this paper, to generate the imaging plan dynamically according to the progress of disaster relief, we propose a dynamic satellite task scheduling method for large area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimizing algorithm named HA_NSGA-II to allocate the decomposing results and thus obtain an adjusted imaging schedule. A real disaster scenario, i.e., the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.
Hannah, Samuel D; Neal, Andrew
2014-09-01
The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
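The discrete-event models are fit to controller data in the paper itself. The Python sketch below only illustrates the scheduling rule the abstract attributes to the most successful model: tasks are taken first-come-first-served, those with short deadlines are executed immediately, and those with long deadlines are evaluated and possibly deferred back to the queue. The deadline threshold, task fields, and deferral test are assumptions.

    # Sketch of the opportunistic first-come-first-served rule described above:
    # short-deadline tasks run immediately; long-deadline tasks are re-evaluated
    # and may be deferred. Threshold and task structure are illustrative.
    from collections import deque

    SHORT_DEADLINE = 60.0          # seconds; assumed cut-off between "short" and "long"

    def on_the_fly(queue, now, can_defer):
        """Process one scheduling pass over a FIFO queue of (task_id, deadline) items."""
        executed, deferred = [], deque()
        while queue:
            task_id, deadline = queue.popleft()
            if deadline - now <= SHORT_DEADLINE or not can_defer(task_id, deadline - now):
                executed.append(task_id)              # execute immediately
            else:
                deferred.append((task_id, deadline))  # put back for a later pass
        queue.extend(deferred)
        return executed

    tasks = deque([("conflict_A", 30.0), ("conflict_B", 400.0), ("handoff_C", 50.0)])
    done = on_the_fly(tasks, now=0.0, can_defer=lambda tid, slack: slack > 120.0)
    # done == ["conflict_A", "handoff_C"]; "conflict_B" stays queued for later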
Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun
2016-02-01
As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.
NASA Astrophysics Data System (ADS)
Wang, Li-Chih; Chen, Yin-Yann; Chen, Tzu-Li; Cheng, Chen-Yang; Chang, Chin-Wei
2014-10-01
This paper studies a solar cell industry scheduling problem, which is similar to traditional hybrid flowshop scheduling (HFS). In a typical HFS problem, the allocation of machine resources for each order should be scheduled in advance. However, the challenge in solar cell manufacturing is the number of machines that can be adjusted dynamically to complete the job. An optimal production scheduling model is developed to explore these issues, considering the practical characteristics, such as hybrid flowshop, parallel machine system, dedicated machines, sequence independent job setup times and sequence dependent job setup times. The objective of this model is to minimise the makespan and to decide the processing sequence of the orders/lots in each stage, lot-splitting decisions for the orders and the number of machines used to satisfy the demands in each stage. From the experimental results, lot-splitting has significant effect on shortening the makespan, and the improvement effect is influenced by the processing time and the setup time of orders. Therefore, the threshold point to improve the makespan can be identified. In addition, the model also indicates that more lot-splitting approaches, that is, the flexibility of allocating orders/lots to machines is larger, will result in a better scheduling performance.
1992-09-01
finding an inverse plant such as was done by Bertrand [BD91] and by Levin, Gewirtzman and Inbar in a binary type inverse controller [LGI91], to self tuning...gain robust control. 2) Self oscillating adaptive controller. 3) Gain scheduling. 4) Self tuning. 5) Model-reference adaptive systems. Although the...of multidimensional systems [CS88] as well as aircraft [HG90]. The self oscillating method is also a feedback based mechanism, utilizing a relay in the
Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bin; Huang, Rui; Wang, Yubo
2016-05-02
Uncoordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade the power quality and system reliability. The uncertainty of EV load, user behaviors, and other base load in the distribution grid is one of the challenges that impede optimal control of EV charging. Previous research did not fully solve this problem due to the lack of real-world EV charging data and of a proper stochastic model to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on the historical data. At each time interval, the predictive optimization program will be computed for optimal schedules given the estimated parameters. Only the first element from the algorithm outputs will be implemented according to the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. This system is validated by the real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is utilized to standardize the data models involved, which brings significance to more reliable and large-scale implementation.
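The full PESA formulation, the load estimator, and the IEC 61850 data models are in the paper. The Python sketch below shows only the MPC-style receding-horizon skeleton the abstract describes: at each interval an optimization is solved over the remaining horizon using the latest load estimate, and only the first control action is applied. The greedy valley-filling routine standing in for the optimization module is a placeholder assumption.

    # Receding-horizon (MPC-style) skeleton for EV charging: re-optimize at every
    # interval with the latest base-load estimate, apply only the first decision.
    # The greedy valley-filling stand-in for the optimizer is an assumption.

    def plan_charging(energy_needed_kwh, max_kw, baseload_forecast, dt_h=0.25):
        """Spread the remaining EV energy over the lowest-base-load slots."""
        order = sorted(range(len(baseload_forecast)), key=lambda t: baseload_forecast[t])
        plan = [0.0] * len(baseload_forecast)
        remaining = energy_needed_kwh
        for t in order:
            if remaining <= 0:
                break
            plan[t] = min(max_kw, remaining / dt_h)
            remaining -= plan[t] * dt_h
        return plan

    def run_receding_horizon(energy_needed_kwh, max_kw, estimate_baseload, horizon=8):
        applied = []
        remaining = energy_needed_kwh
        for step in range(horizon):
            forecast = estimate_baseload(step, horizon - step)   # updated every interval
            plan = plan_charging(remaining, max_kw, forecast)
            applied.append(plan[0])                              # apply first element only
            remaining = max(0.0, remaining - plan[0] * 0.25)
        return applied

    profile = run_receding_horizon(6.0, 7.2,
                                   lambda s, h: [3.0 + 0.5 * (s + t) for t in range(h)])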
Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Khong, Thuan H.; Shin, Jong-Yeob
2007-01-01
This paper proposes an analysis framework for robustness analysis of a nonlinear dynamics system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method which can convert a nonlinear system in polynomial form into a PLPV system, 2) a matrix-based linear fractional transformation (LFT) modeling approach, which can convert a PLPV system into an LFT system with the delta block that includes key uncertainty and scheduling parameters, 3) mu-analysis, which is a well-known robustness analysis tool for linear systems. The proposed analysis framework is applied to evaluating the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.
A systems engineering management approach to resource management applications
NASA Technical Reports Server (NTRS)
Hornstein, Rhoda Shaller
1989-01-01
The author presents a program management response to the following question: How can the traditional practice of systems engineering management, including requirements specification, be adapted, enhanced, or modified to build future planning and scheduling systems for effective operations? The systems engineering management process, as traditionally practiced, is examined. Extensible resource management systems are discussed. It is concluded that extensible systems are a partial solution to problems presented by requirements that are incomplete, partially immeasurable, and often dynamic. There are positive indications that resource management systems have been characterized and modeled sufficiently to allow their implementation as extensible systems.
The internal dynamics of slowly rotating biological systems
NASA Technical Reports Server (NTRS)
Kessler, John O.
1992-01-01
The structure and the dynamics of biological systems are complex. Steady gravitational forces that act on organisms cause hydrostatic pressure gradients, stress in solid components, and ordering of movable subsystems according to density. Rotation induces internal motion; it also stresses and/or deforms regions of attachment and containment. The disrupted gravitationally ordered layers of movable entities are replaced by their orbital movements. New ordering geometries may also arise, especially if fluids of various densities occur. One novel result concerns the application of scheduled variation of clinostat rotation rates to the management of intracellular particle trajectories. Rotation and its consequences are discussed in terms of scaling factors for parameters such as time, derived from mathematical models for simple rotating mechanical systems.
NASA Astrophysics Data System (ADS)
Zhang, Guoguang; Yu, Zitian; Wang, Junmin
2017-03-01
Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and then a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties on the tire cornering stiffness coefficients is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the utilization of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served order with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network
Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen; ...
2018-01-26
With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since the DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by a normally closed AC soft open point (ACSOP) and DC soft open point (DCSOP), respectively. The proposed AC/DC hybrid distribution systems contain renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.
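As a rough illustration of the cost-and-curtailment objective (not the paper's DOPF model with SOPs and storage), the sketch below dispatches grid import and renewable curtailment over a few periods with a small linear program; all demand, renewable, and price figures are made up.

```python
# Toy dispatch sketch in the spirit of the cost/curtailment objective above.
# One grid-import variable and one curtailment variable per period, linear
# costs, and a per-period power balance as equality constraints.
import numpy as np
from scipy.optimize import linprog

demand    = np.array([4.0, 5.0, 6.0, 5.5])      # load per period (assumed data)
renewable = np.array([3.0, 6.5, 2.0, 4.0])      # available renewable per period
price     = np.array([0.20, 0.25, 0.30, 0.22])  # grid energy price per period
curt_pen  = 0.50                                 # penalty per unit curtailed

T = len(demand)
# Decision vector x = [grid_0..grid_{T-1}, curtail_0..curtail_{T-1}]
c = np.concatenate([price, np.full(T, curt_pen)])

# Balance per period: grid_t + (renewable_t - curtail_t) = demand_t
A_eq = np.hstack([np.eye(T), -np.eye(T)])
b_eq = demand - renewable

bounds = [(0, None)] * T + [(0, renewable[t]) for t in range(T)]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
grid, curtail = res.x[:T], res.x[T:]
print("grid import:", grid.round(2), "curtailment:", curtail.round(2))
```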
Wiener-Hopf optimal control of a hydraulic canal prototype with fractional order dynamics.
Feliu-Batlle, Vicente; Feliu-Talegón, Daniel; San-Millan, Andres; Rivas-Pérez, Raúl
2017-06-26
This article addresses the control of a laboratory hydraulic canal prototype that has fractional order dynamics and a time delay. Controlling this prototype is relevant because its dynamics closely resemble the dynamics of real main irrigation canals. Moreover, the dynamics of hydraulic canals vary largely when the operation regime changes, since they are strongly nonlinear systems. All this makes it difficult to design adequate controllers. The controller proposed in this article seeks a good time response to step commands. The design criterion for this controller is minimizing the integral performance index ISE. A new methodology to control fractional order processes with a time delay, based on Wiener-Hopf control and the Padé approximation of the time delay, is then developed. Moreover, in order to improve the robustness of the control system, a gain-scheduling fractional order controller is proposed. Experiments show the adequate performance of the proposed controller.
The Gain of Resource Delegation in Distributed Computing Environments
NASA Astrophysics Data System (ADS)
Fölling, Alexander; Grimme, Christian; Lepping, Joachim; Papaspyrou, Alexander
In this paper, we address job scheduling in Distributed Computing Infrastructures, that is, loosely coupled networks of autonomously acting High Performance Computing systems. In contrast to the common approach of mutual workload exchange, we consider the more intuitive operator's viewpoint of load-dependent resource reconfiguration. In case of a site's over-utilization, the scheduling system is able to lease resources from other sites to keep up service quality for its local user community. Conversely, granting idle resources to others can increase utilization in times of low local workload and thus ensure higher efficiency. The evaluation considers real workload data and is done with respect to common service quality indicators. For two simple resource exchange policies and three basic setups we show the possible gain of this approach and analyze the dynamics in workload-adaptive reconfiguration behavior.
Dynamic advance reservation with delayed allocation
Vokkarane, Vinod; Somani, Arun
2014-12-02
A method of scheduling data transmissions from a source to a destination, includes the steps of: providing a communication system having a number of channels and a number of paths, each of the channels having a plurality of designated time slots; receiving two or more data transmission requests; provisioning the transmission of the data; receiving data corresponding to at least one of the two or more data transmission requests; waiting until an earliest requested start time T.sub.s; allocating at the current time each of the two or more data transmission requests; transmitting the data; and repeating the steps of waiting, allocating, and transmitting until each of the two or more data transmission requests that have been provisioned for a transmission of data is satisfied. A system to perform the method of scheduling data transmissions is also described.
DORCA II: Dynamic operations requirements and cost analysis program
NASA Technical Reports Server (NTRS)
1976-01-01
Program is written to handle logistics of acquisition and transport of personnel, equipment, and services and to determine costs, transport schedules, acquisition schedules, and fuel requirements of cargo transport.
Optimal pre-scheduling of problem remappings
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods, which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it is possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, we study the effect of a linear-time scheduling heuristic on one of the model problems and show it to be effective and nearly optimal.
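A minimal sketch of the dynamic-programming idea follows: one mapping parameter is chosen per step so that the sum of per-step costs plus a fixed remapping cost for every parameter change is minimized. The cost table and the remapping cost are hypothetical; the real method would derive them from the predicted computation behavior.

```python
# Sketch of the dynamic program: choose a mapping parameter for each step so
# that (per-step cost) + (cost of every parameter change) is minimized.

def optimal_parameter_schedule(step_cost, remap_cost):
    """step_cost[t][p]: cost of running step t with mapping parameter p;
    remap_cost: fixed cost charged whenever the parameter changes."""
    n_steps, n_params = len(step_cost), len(step_cost[0])
    INF = float("inf")
    best = list(step_cost[0])          # best[p]: min cost of steps 0..t with step t using p
    choice = [[None] * n_params]       # choice[t][p]: best parameter at step t-1
    for t in range(1, n_steps):
        new_best, new_choice = [INF] * n_params, [0] * n_params
        for p in range(n_params):
            for q in range(n_params):  # parameter used at the previous step
                c = best[q] + step_cost[t][p] + (remap_cost if q != p else 0.0)
                if c < new_best[p]:
                    new_best[p], new_choice[p] = c, q
        best = new_best
        choice.append(new_choice)
    p = min(range(n_params), key=lambda k: best[k])
    total, schedule = best[p], [p]
    for t in range(n_steps - 1, 0, -1):   # backtrack the optimal schedule
        p = choice[t][p]
        schedule.append(p)
    return total, list(reversed(schedule))

cost, sched = optimal_parameter_schedule(
    step_cost=[[3, 5], [4, 1], [4, 1], [2, 6]], remap_cost=1)
print(cost, sched)   # 9 with schedule [0, 1, 1, 0]
```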
NASA Astrophysics Data System (ADS)
Obara, Shin'ya
A micro-grid with the capacity for sustainable energy is expected to be a distributed energy system with quite a small environmental impact. In an independent micro-grid, “green energy,” which is typically thought of as unstable, can be utilized effectively by introducing a battery. In a previous study, the production-of-electricity prediction algorithm (PAS) for the solar cell was developed. In PAS, a layered neural network is trained on past weather data, and the operation plan of a compound system of a solar cell and other energy systems was examined using this prediction algorithm. In this paper, a dynamic operational scheduling algorithm is developed using a neural network (PAS) and a genetic algorithm (GA) to provide predictions for solar cell power output. We also present a case study in which we use this algorithm to plan the operation of a system that connects nine houses in Sapporo to a micro-grid composed of power equipment and a polycrystalline silicon solar cell. In this work, the relationship between the accuracy of the solar cell output prediction and the operation plan of the micro-grid was clarified. Moreover, we found that operating the micro-grid according to the plan derived with PAS was far superior, in terms of equipment hours of operation, to operating it using past average weather data.
Adaptive critics for dynamic optimization.
Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar
2010-06-01
A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and the movement pattern of animals in its environment, in order to obtain uniformly spaced snapshots of an animal along its trajectory. Simulation results show that the sleep time of the node determined by the actor-critic yields superior quality of sensory data acquisition and enhanced node longevity.
Scheduling Aircraft Landings under Constrained Position Shifting
NASA Technical Reports Server (NTRS)
Balakrishnan, Hamsa; Chandran, Bala
2006-01-01
Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
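A toy illustration of the CPS constraint and objective, assuming simple pairwise separation times, is sketched below: it enumerates all landing sequences within k positions of the FCFS order and keeps the one with the smallest completion time. The paper's dynamic program avoids this exhaustive enumeration and scales linearly in the number of aircraft.

```python
# Small brute-force illustration of Constrained Position Shifting (CPS):
# every aircraft must land within k positions of its FCFS slot.
from itertools import permutations

def cps_best_sequence(eta, sep, k):
    """eta[i]: earliest landing time of aircraft i;
    sep[a][b]: minimum spacing when b lands directly behind a."""
    n = len(eta)
    fcfs = sorted(range(n), key=lambda i: eta[i])
    best_seq, best_makespan = None, float("inf")
    for seq in permutations(range(n)):
        # CPS constraint: each aircraft's position shift is bounded by k
        if any(abs(seq.index(i) - fcfs.index(i)) > k for i in range(n)):
            continue
        t, last = 0.0, None
        for i in seq:
            t = max(eta[i], t + (sep[last][i] if last is not None else 0.0))
            last = i
        if t < best_makespan:
            best_seq, best_makespan = seq, t
    return best_seq, best_makespan

# 4 aircraft; the heavy aircraft (index 0) forces larger spacing behind it.
eta = [0.0, 1.0, 2.0, 3.0]
sep = [[2, 5, 5, 5], [2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2]]
print(cps_best_sequence(eta, sep, k=1))
```

The separation matrix, arrival times, and maximum shift here are illustrative; in practice the spacing would come from FAA wake-separation rules and the time windows described above would be added as further constraints.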
Hybrid Rendering with Scheduling under Uncertainty
Tamm, Georg; Krüger, Jens
2014-01-01
As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is put at the whim of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system. PMID:25309115
Training and Operations Integrated Calendar Scheduler - TROPICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.E. Oppenlander; A.J. Levy; V.A. Arbige
2003-01-27
TROPICS is a rule-based scheduling system that optimizes the training experience for students in a power plant environment. The problem is complicated by the condition that plant resources and users' time must be simultaneously scheduled to make best use of both. The training facility is highly constrained in how it is used and, as in many similar environments, subject to dynamic change with little or no advance notice. The flexibility required extends to changes resulting from students' actions such as absences. Even though the problem is highly constrained by plant usage and student objectives, the large number of possible schedules is a concern. TROPICS employs a control strategy for rule firing to prune the possibility tree and avoid combinatorial explosion. The application has been in use since 1996, first as a prototype for testing and then in production. Training Coordinators have a philosophical aspect to teaching students that has made the rule-based approach much more verifiable and satisfying to the domain experts than other forms of capturing expertise.
A Case Study in Web 2.0 Application Development
NASA Astrophysics Data System (ADS)
Marganian, P.; Clark, M.; Shelton, A.; McCarty, M.; Sessoms, E.
2010-12-01
Recent web technologies focusing on languages, frameworks, and tools are discussed, using the Robert C. Byrd Green Bank Telescope's (GBT) new Dynamic Scheduling System as the primary example. Within that example, we use a popular Python web framework, Django, to build the extensive web services for our users. We also use a second, complementary server, written in Haskell, to incorporate the core scheduling algorithms. We provide a desktop-quality experience across all the popular browsers for our users with the Google Web Toolkit and judicious use of JQuery in Django templates. Single sign-on and authentication throughout all NRAO web services is accomplished via the Central Authentication Service (CAS) protocol.
Research on a Method of Geographical Information Service Load Balancing
NASA Astrophysics Data System (ADS)
Li, Heyuan; Li, Yongxing; Xue, Zhiyong; Feng, Tao
2018-05-01
With the development of geographical information service technologies, how to achieve intelligent scheduling and high-concurrency access to geographical information service resources through load balancing is a focal point of current research. This paper presents a dynamic load balancing algorithm. In the algorithm, types of geographical information service are matched with the corresponding server group, then the RED algorithm is combined with a double-threshold method to judge the load state of each server node, and finally the service is scheduled based on weighted probability within a given period. An experimental system built on a server cluster demonstrates the effectiveness of the method presented in this paper.
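A hedged sketch of the dispatch step: nodes are classified against two load thresholds and requests are routed among non-overloaded nodes with probability proportional to spare capacity. The threshold values and the spare-capacity weighting are illustrative stand-ins for the paper's RED-based load judgement.

```python
# Sketch of double-threshold load classification plus weighted probabilistic
# dispatch.  Thresholds, weights, and server names are assumptions.
import random

LOW, HIGH = 0.5, 0.85          # double thresholds on normalized load

def classify(load):
    if load < LOW:
        return "light"
    if load < HIGH:
        return "normal"
    return "overloaded"

def pick_server(loads):
    """loads: {server_name: normalized load in [0, 1]}"""
    eligible = {s: l for s, l in loads.items() if classify(l) != "overloaded"}
    if not eligible:                       # degrade gracefully: least-loaded server
        return min(loads, key=loads.get)
    names = list(eligible)
    weights = [1.0 - eligible[s] for s in names]   # spare capacity as weight
    return random.choices(names, weights=weights, k=1)[0]

loads = {"gis-node-1": 0.35, "gis-node-2": 0.72, "gis-node-3": 0.91}
print(pick_server(loads))
```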
Karakashian, A N; Lepeshkina, T R; Ratushnaia, A N; Glushchenko, S S; Zakharenko, M I; Lastovchenko, V B; Diordichuk, T I
1993-01-01
The severity, intensity, and harmfulness of the professional activity, the particular labour conditions and job characteristics, and the shift dynamics of operating personnel's working capacity were studied for the 8-hour working day currently used at hydroelectric power stations (HEPS) and for an experimental 12-hour schedule. Working conditions classified as "admissible", the positive dynamics of the operators' state, and their social and material satisfaction provided the basis for recommending the 12-hour two-shift schedule as more appropriate. At the same time, the problem of optimal shift schedules for HEPS operating personnel remains unsolved and needs further study.
Optimizing CMS build infrastructure via Apache Mesos
Abdurachmanov, David; Degano, Alessandro; Elmer, Peter; ...
2015-12-23
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient to the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster and how this resulted in better resource usage, higher peak performance, and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Evolution of CMS workload management towards multicore job support
NASA Astrophysics Data System (ADS)
Pérez-Calero Yzquierdo, A.; Hernández, J. M.; Khan, F. A.; Letts, J.; Majewski, K.; Rodrigues, A. M.; McCrea, A.; Vaandering, E.
2015-12-01
The successful exploitation of multicore processor architectures is a key element of the LHC distributed computing system in the coming era of LHC Run 2. High-pileup complex-collision events represent a challenge for traditional sequential programming in terms of memory and processing time budget. The CMS data production and processing framework is introducing the parallel execution of the reconstruction and simulation algorithms to overcome these limitations. CMS plans to execute multicore jobs while still supporting single-core processing for other tasks that are difficult to parallelize, such as user analysis. The CMS strategy for job management thus aims at integrating single-core and multicore job scheduling across the Grid. This is accomplished by employing multicore pilots with internal dynamic partitioning of the allocated resources, capable of running payloads of various core counts simultaneously. An extensive test programme has been conducted to enable multicore scheduling with the various local batch systems available at CMS sites, with the focus on the Tier-0 and Tier-1s, responsible during 2015 for the prompt data reconstruction. Scale tests have been run to analyse the performance of this scheduling strategy and ensure an efficient use of the distributed resources. This paper presents the evolution of the CMS job management and resource provisioning systems in order to support this hybrid scheduling model, as well as its deployment and performance tests, which will enable CMS to transition to a multicore production model for the second LHC run.
Behavior dynamics: One perspective
Marr, M. Jackson
1992-01-01
Behavior dynamics is a field devoted to analytic descriptions of behavior change. A principal source of both models and methods for these descriptions is found in physics. This approach is an extension of a long conceptual association between behavior analysis and physics. A theme common to both is the role of molar versus molecular events in description and prediction. Similarities and differences in how these events are treated are discussed. Two examples are presented that illustrate possible correspondence between mechanical and behavioral systems. The first demonstrates the use of a mechanical model to describe the molar properties of behavior under changing reinforcement conditions. The second, dealing with some features of concurrent schedules, focuses on the possible utility of nonlinear dynamical systems to the description of both molar and molecular behavioral events as the outcome of a deterministic, but chaotic, process. PMID:16812655
Rapid Prototyping of High Performance Signal Processing Applications
NASA Astrophysics Data System (ADS)
Sane, Nimish
Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows:
1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow.
2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques.
3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability.
4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs.
5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.
The TJO-OAdM Robotic Observatory: the scheduler
NASA Astrophysics Data System (ADS)
Colomé, Josep; Casteels, Kevin; Ribas, Ignasi; Francisco, Xavier
2010-07-01
The Joan Oró Telescope at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory working under completely unattended control, due to the isolation of the site. Robotic operation is mandatory for its routine use. The level of robotization of an observatory is given by its reliability in responding to environment changes and by the required human interaction due to possible alarms. These two points establish a level of human attendance to ensure low risk at any time. But there is another key point when deciding how the system performs as a robot: the capability to adapt the scheduled observation to actual conditions. The scheduler represents a fundamental element to fully achieve an intelligent response at any time. Its main task is the mid- and short-term time optimization and it has a direct effect on the scientific return achieved by the observatory. We present a description of the scheduler developed for the TJO - OAdM, which is separated in two parts. Firstly, a pre-scheduler that makes a temporary selection of objects from the available projects according to their possibility of observation. This process is carried out before the beginning of the night following different selection criteria. Secondly, a dynamic scheduler that is executed any time a target observation is complete and a new one must be scheduled. The latter enables the selection of the best target in real time according to actual environment conditions and the set of priorities.
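The dynamic-scheduler step can be sketched as a re-scoring of the pre-selected pool whenever an observation completes; the observability cuts and scoring weights below are illustrative assumptions, not the actual TJO-OAdM criteria.

```python
# Minimal sketch of the dynamic-scheduler step described above: whenever an
# observation finishes, re-score the pre-selected pool against the current
# conditions and pick the best visible target.  The cuts (altitude, moon
# separation, seeing limit) and score weights are assumptions.

def best_target(pool, conditions):
    def observable(t):
        return (t["altitude_deg"] > 30.0
                and t["moon_sep_deg"] > 25.0
                and conditions["seeing_arcsec"] <= t["max_seeing_arcsec"])

    def score(t):
        return (3.0 * t["priority"]          # project priority dominates
                + 0.05 * t["altitude_deg"]   # prefer targets high in the sky
                - 0.5 * t["airmass"])

    candidates = [t for t in pool if observable(t)]
    return max(candidates, key=score) if candidates else None

pool = [
    {"name": "HD 189733", "priority": 5, "altitude_deg": 62, "airmass": 1.13,
     "moon_sep_deg": 80, "max_seeing_arcsec": 2.5},
    {"name": "M 51",      "priority": 3, "altitude_deg": 28, "airmass": 2.10,
     "moon_sep_deg": 40, "max_seeing_arcsec": 1.5},
]
print(best_target(pool, {"seeing_arcsec": 1.8}))
```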
ESTRACK Support for CCSDS Space Communication Cross Support Service Management
NASA Astrophysics Data System (ADS)
Dreihahn, H.; Unal, M.; Hoffmann, A.
2011-08-01
The CCSDS Recommended Standard for Space Communication Cross Support Service Management (SCCS SM), published as a Blue Book in August 2009, is intended to provide standardised interfaces to negotiate, schedule, and manage the support of space missions by ground station network operators. ESA, as a member of CCSDS, has actively supported the development of the SCCS SM standard and is obviously interested in adopting it. Support of SCCS SM conforming interfaces and procedures includes:
• Provision of SCCS SM conforming interfaces to non-ESA missions;
• Use of SCCS SM interfaces provided by other ground station operators to manage cross support of ESA missions;
• In the longer term, potentially use of SCCS SM interfaces and procedures also internally for support of ESA missions by ESTRACK.
In recent years ESOC has automated the management and scheduling of ESA Tracking Network (ESTRACK) services through the specification, development, and deployment of the ESTRACK Management System (EMS), more specifically its planning and scheduling components, the ESTRACK Planning System and the ESTRACK Scheduling System. While full support of the SCCS SM standard will also involve other elements of the ground segment operated by ESOC, such as the Flight Dynamics System, EMS is at the core of service management and it is therefore appropriate to initially focus on the question of to what extent EMS can support SCCS SM. This paper presents results of the initial analysis phase. After briefly presenting the SCCS SM standard and the relevant components of the ESTRACK management system, we discuss the initial deployment options, open issues, and a tentative roadmap for the way to proceed. Obviously the adoption of a cross support standard requires discussion and coordination among the involved parties and agencies, especially in the light of the fact that the SCCS SM standard has many optional parts.
Developing a Telescope Simulator Towards a Global Autonomous Robotic Telescope Network
NASA Astrophysics Data System (ADS)
Giakoumidis, N.; Ioannou, Z.; Dong, H.; Mavridis, N.
2013-05-01
A robotic telescope network is a system that integrates a number of telescopes to observe a variety of astronomical targets without being operated by a human. Such a system autonomously selects and observes targets according to an optimized target list. It dynamically allocates telescope resources depending on the observation requests, the specifications of the telescopes, target visibility, meteorological conditions, daylight, location restrictions and availability, and many other factors. In this paper, we introduce a telescope simulator, which can drive a telescope to a desired position in order to observe a specific object. The system includes a Client Module, a Server Module, and a Dynamic Scheduler module. We make use of and integrate a number of open source software packages to simulate the movement of a robotic telescope, the telescope characteristics, the observational data, and weather conditions in order to test and optimize our system.
NASA Astrophysics Data System (ADS)
Kim, Gi Young
The problem we investigate deals with an Image Intelligence (IMINT) sensor allocation schedule for High Altitude Long Endurance UAVs in a dynamic and Anti-Access Area Denial (A2AD) environment. The objective is to maximize the Situational Awareness (SA) of decision makers. The value of SA can be improved in two different ways. First, if a sensor allocated to an Areas of Interest (AOI) detects target activity, then the SA value will be increased. Second, the SA value increases if an AOI is monitored for a certain period of time, regardless of target detections. These values are functions of the sensor allocation time, sensor type and mode. Relatively few studies in the archival literature have been devoted to an analytic, detailed explanation of the target detection process, and AOI monitoring value dynamics. These two values are the fundamental criteria used to choose the most judicious sensor allocation schedule. This research presents mathematical expressions for target detection processes, and shows the monitoring value dynamics. Furthermore, the dynamics of target detection is the result of combined processes between belligerent behavior (target activity) and friendly behavior (sensor allocation). We investigate these combined processes and derive mathematical expressions for simplified cases. These closed form mathematical models can be used for Measures of Effectiveness (MOEs), i.e., target activity detection to evaluate sensor allocation schedules. We also verify these models with discrete event simulations which can also be used to describe more complex systems. We introduce several methodologies to achieve a judicious sensor allocation schedule focusing on the AOI monitoring value. The first methodology is a discrete time integer programming model which provides an optimal solution but is impractical for real world scenarios due to its computation time. Thus, it is necessary to trade off the quality of solution with computation time. The Myopic Greedy Procedure (MGP) is a heuristic which chooses the largest immediate unit time return at each decision epoch. This reduces computation time significantly, but the quality of the solution may be only 95% of optimal (for small size problems). Another alternative is a multi-start random constructive Hybrid Myopic Greedy Procedure (H-MGP), which incorporates stochastic variation in choosing an action at each stage, and repeats it a predetermined number of times (roughly 99.3% of optimal with 1000 repetitions). Finally, the One Stage Look Ahead (OSLA) procedure considers all the 'top choices' at each stage for a temporary time horizon and chooses the best action (roughly 98.8% of optimal with no repetition). Using OSLA procedure, we can have ameliorated solutions within a reasonable computation time. Other important issues discussed in this research are methodologies for the development of input parameters for real world applications.
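A sketch of the Myopic Greedy Procedure under an assumed gain model with diminishing returns: at each decision epoch every free sensor is allocated to the AOI offering the largest immediate unit-time gain in situational awareness. The gain function, AOI values, and sensor qualities are illustrative, not the dissertation's MOE models.

```python
# Sketch of the Myopic Greedy Procedure (MGP): at every decision epoch each
# free sensor is assigned to the Area of Interest (AOI) with the largest
# immediate unit-time gain.  The diminishing-returns gain model is assumed.
import math

def unit_time_gain(aoi, sensor):
    # marginal value decays the longer an AOI has already been covered
    return aoi["value"] * sensor["quality"] * math.exp(-0.3 * aoi["covered_hours"])

def myopic_greedy_epoch(aois, sensors, dt=1.0):
    """Assign every sensor for the next dt hours; updates covered_hours in place."""
    assignment = {}
    for sensor in sensors:
        best = max(aois, key=lambda a: unit_time_gain(a, sensor))
        assignment[sensor["name"]] = best["name"]
        best["covered_hours"] += dt
    return assignment

aois = [{"name": "AOI-1", "value": 8.0, "covered_hours": 0.0},
        {"name": "AOI-2", "value": 5.0, "covered_hours": 0.0},
        {"name": "AOI-3", "value": 3.0, "covered_hours": 0.0}]
sensors = [{"name": "EO-1", "quality": 1.0}, {"name": "SAR-1", "quality": 0.8}]
print(myopic_greedy_epoch(aois, sensors))
```

The H-MGP and OSLA variants described above would wrap this epoch step with, respectively, randomized restarts and a short look-ahead over candidate assignments.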
Scheduling lessons learned from the Autonomous Power System
NASA Technical Reports Server (NTRS)
Ringer, Mark J.
1992-01-01
The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the application of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.
Integration of Openstack cloud resources in BES III computing cluster
NASA Astrophysics Data System (ADS)
Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan
2017-10-01
Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make the system simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.
Health system tests CRM data base. Community Health Network uses direct mail to boost physicians.
Botvin, Judith D
2003-01-01
A six-month pilot patient retention project for Community Health Network (CHN), Indianapolis, ran from July 2002 to January 2003. It was a direct mail campaign on behalf of some members of the group practices owned by CHN, designed to test the use of the system's CRM database. Patients of the physicians received personal, dynamically-generated cards reminding them to schedule appointments and tests. Each mailing cost $1.76, including production and mailing.
Spaceborne synthetic aperture radar pilot study
NASA Technical Reports Server (NTRS)
1974-01-01
A pilot study of a spaceborne sidelooking radar is summarized. The results of the system trade studies are given along with the electrical parameters for the proposed subsystems. The mechanical aspects, packaging, thermal control and dynamics of the proposed design are presented. Details of the data processor are given. A system is described that allows the data from a pass over the U. S. to be in hard copy form within two hours. Also included are the proposed schedule, work breakdown structure, and cost estimate.
NASA Technical Reports Server (NTRS)
Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira; Houston, Chapman; Liebowitz, Alisa; Baek, Seung; Radko, Joe; Zeide, Janet
1996-01-01
Scheduling has become an increasingly important element in today's society and workplace. Within the NASA environment, scheduling is one of the most frequently performed and challenging functions. Towards meeting NASA's scheduling needs, a research version of a generic expert scheduling system architecture and toolkit has been developed. This final report describes the development and testing of GUESS (Generically Used Expert Scheduling System).
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing using distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented which can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Design criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted, and a class of combinatorial optimization methods is examined.
Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.
Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P
2015-11-01
We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental in maintaining high system performance and preventing cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation.
Fine grained event processing on HPCs with the ATLAS Yoda system
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Tsulaia, Vakhtang; Van Gemmeren, Peter; Wenaus, Torre
2015-12-01
High performance computing facilities present unique challenges and opportunities for HEP event processing. The massive scale of many HPC systems means that fractionally small utilization can yield large returns in processing throughput. Parallel applications which can dynamically and efficiently fill any scheduling opportunities the resource presents benefit both the facility (maximal utilization) and the (compute-limited) science. The ATLAS Yoda system provides this capability to HEP-like event processing applications by implementing event-level processing in an MPI-based master-client model that integrates seamlessly with the more broadly scoped ATLAS Event Service. Fine grained, event level work assignments are intelligently dispatched to parallel workers to sustain full utilization on all cores, with outputs streamed off to destination object stores in near real time with similarly fine granularity, such that processing can proceed until termination with full utilization. The system offers the efficiency and scheduling flexibility of preemption without requiring the application actually support or employ check-pointing. We will present the new Yoda system, its motivations, architecture, implementation, and applications in ATLAS data processing at several US HPC centers.
Tier 3 batch system data locality via managed caches
NASA Astrophysics Data System (ADS)
Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter
2015-05-01
Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: first, only a fraction of the data is accessed regularly and is thus the deciding factor for overall throughput; second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system, while their automated batch processes are presented with local replications of data whenever possible.
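The cache-aware placement can be sketched as routing each job to the worker whose local cache already holds the most of its input files, with an assumed LRU policy for populating caches on misses; worker names, capacities, and file names are illustrative.

```python
# Sketch of cache-aware job placement: prefer the worker whose local cache
# already holds the most of a job's inputs; populate caches on miss with LRU.
from collections import OrderedDict

class WorkerCache:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity
        self.files = OrderedDict()            # filename -> None, kept in LRU order

    def hit_fraction(self, inputs):
        return sum(f in self.files for f in inputs) / len(inputs)

    def access(self, inputs):
        for f in inputs:
            if f in self.files:
                self.files.move_to_end(f)     # refresh LRU position
            else:
                self.files[f] = None
                if len(self.files) > self.capacity:
                    self.files.popitem(last=False)   # evict least recently used

def schedule_job(job_inputs, workers):
    best = max(workers, key=lambda w: w.hit_fraction(job_inputs))
    best.access(job_inputs)                   # non-cached files are pulled remotely
    return best.name

workers = [WorkerCache("wn-01", capacity=4), WorkerCache("wn-02", capacity=4)]
print(schedule_job(["run1.root", "run2.root"], workers))
print(schedule_job(["run1.root", "run3.root"], workers))   # now prefers wn-01
```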
A Two-Stage Probabilistic Approach to Manage Personal Worklist in Workflow Management Systems
NASA Astrophysics Data System (ADS)
Han, Rui; Liu, Yingbo; Wen, Lijie; Wang, Jianmin
The application of workflow scheduling to managing an individual actor's personal worklist is one area that can bring great improvement to business processes. However, current deterministic work cannot adapt to the dynamics and uncertainties in the management of personal worklists. To address this issue, this paper proposes a two-stage probabilistic approach which aims at assisting actors to flexibly manage their personal worklists. To be specific, at the first stage the approach analyzes every activity instance's continuous probability of satisfying its deadline. Based on this stochastic analysis result, at the second stage an innovative scheduling strategy is proposed to minimize the overall deadline violation cost for an actor's personal worklist. Simultaneously, the strategy recommends to the actor a feasible worklist of activity instances which meet the required bottom line of successful execution. The effectiveness of our approach is evaluated in a real-world workflow management system and with large-scale simulation experiments.
Modeling and Control of a Fixed Wing Tilt-Rotor Tri-Copter
NASA Astrophysics Data System (ADS)
Summers, Alexander
The following thesis considers modeling and control of a fixed wing tilt-rotor tri-copter. The conceptual design emphasizes payload transport. Aerodynamic panel code and CAD design provide the base aerodynamic, geometric, mass, and inertia properties. A set of nonlinear dynamics is derived considering gravity, aerodynamics in vertical takeoff and landing (VTOL) and forward flight, and propulsion applied to a three-degree-of-freedom system. A transition strategy that removes trajectory planning by means of scheduled inputs is proposed. Three discrete controllers, utilizing separate control techniques, are applied to ensure stability in the aerodynamic regions of VTOL, transition, and forward flight. The controller techniques include linear quadratic regulation, full state integral action, gain scheduling, and proportional integral derivative (PID) flight control. Simulation of the model control system for flight from forward to backward transition is completed with mass and center of gravity variation.
Genetic programming for evolving due-date assignment models in job shop environments.
Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen
2014-01-01
Due-date assignment plays an important role in scheduling systems and strongly influences the delivery performance of job shops. Because of the stochastic and dynamic nature of job shops, the development of general due-date assignment models (DDAMs) is complicated. In this study, two genetic programming (GP) methods are proposed to evolve DDAMs for job shop environments. The experimental results show that the evolved DDAMs can make more accurate estimates than other existing dynamic DDAMs with promising reusability. In addition, the evolved operation-based DDAMs show better performance than the evolved DDAMs employing aggregate information of jobs and machines.
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (Constraint Satisfaction Problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, developed on the basis of the dynamic backtracking algorithm. This method allows the previous solution and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
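To illustrate the reuse idea (though not the dynamic backtracking algorithm itself), the sketch below keeps the previous assignment when a constraint is added and repairs only the variables involved in violated constraints, min-conflicts style; the domains and constraints are toy examples.

```python
# Sketch of solution reuse for a dynamic CSP: keep the previous assignment and
# repair only variables appearing in violated constraints (min-conflicts style).

def violated(constraints, assignment):
    return [c for c in constraints if not c["check"](assignment)]

def repair(assignment, domains, constraints, max_steps=100):
    assignment = dict(assignment)              # start from the previous solution
    for _ in range(max_steps):
        bad = violated(constraints, assignment)
        if not bad:
            return assignment
        conflict_vars = {v for c in bad for v in c["vars"]}
        # greedy: the single reassignment that leaves the fewest violations
        best = None
        for var in conflict_vars:
            for val in domains[var]:
                n = len(violated(constraints, {**assignment, var: val}))
                if best is None or n < best[0]:
                    best = (n, var, val)
        if best[0] >= len(bad):
            return None                        # local repair stuck; full search needed
        assignment[best[1]] = best[2]
    return None

domains = {"x": [1, 2, 3], "y": [1, 2, 3]}
constraints = [{"vars": ("x", "y"), "check": lambda a: a["x"] < a["y"]}]
previous = {"x": 1, "y": 2}                    # solution found before the change
constraints.append({"vars": ("y",), "check": lambda a: a["y"] != 2})  # new constraint
print(repair(previous, domains, constraints))  # -> {'x': 1, 'y': 3}
```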
A hybrid dynamic harmony search algorithm for identical parallel machines scheduling
NASA Astrophysics Data System (ADS)
Chen, Jing; Pan, Quan-Ke; Wang, Ling; Li, Jun-Qing
2012-02-01
In this article, a dynamic harmony search (DHS) algorithm is proposed for the identical parallel machines scheduling problem with the objective to minimize makespan. First, an encoding scheme based on a list scheduling rule is developed to convert the continuous harmony vectors to discrete job assignments. Second, the whole harmony memory (HM) is divided into multiple small-sized sub-HMs, and each sub-HM performs evolution independently and exchanges information with others periodically by using a regrouping schedule. Third, a novel improvisation process is applied to generate a new harmony by making use of the information of harmony vectors in each sub-HM. Moreover, a local search strategy is presented and incorporated into the DHS algorithm to find promising solutions. Simulation results show that the hybrid DHS (DHS_LS) is very competitive in comparison to its competitors in terms of mean performance and average computational time.
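The decoding step can be sketched as follows, under the assumption that the list scheduling rule orders jobs by their continuous harmony values and places each on the machine that becomes available earliest; sub-HM management, improvisation, and the local search are omitted.

```python
# Sketch of the encoding/decoding step described above: a continuous harmony
# vector is decoded into a parallel-machine schedule by a list scheduling rule,
# and the makespan of that schedule is the harmony's fitness.

def decode(harmony, processing_times, n_machines):
    order = sorted(range(len(harmony)), key=lambda j: harmony[j])   # job priority list
    loads = [0.0] * n_machines
    assignment = [None] * len(harmony)
    for job in order:
        m = min(range(n_machines), key=lambda k: loads[k])   # earliest-available machine
        assignment[job] = m
        loads[m] += processing_times[job]
    return assignment, max(loads)             # makespan = fitness to minimize

p = [4, 7, 3, 5, 6]                            # processing times of 5 jobs
harmony = [0.82, 0.11, 0.54, 0.37, 0.90]       # one continuous harmony vector
print(decode(harmony, p, n_machines=2))
```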
Fault-tolerant dynamic task graph scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal
2014-11-16
In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.
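A sketch of the selective, localized recovery idea on a small task DAG: when a task's data are found corrupted, only that task and the successors that consumed its output are re-executed, using the user-supplied predecessor/successor structure. Work stealing and the actual fault detection are outside this illustration, and the graph and task function are invented.

```python
# Sketch of selective recovery on a task graph: re-execute only the corrupted
# task and its downstream consumers, in topological order.

def downstream(graph, start):
    """All tasks reachable from `start` via successor edges (including start)."""
    seen, stack = set(), [start]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(graph[t]["succ"])
    return seen

def recover(graph, results, corrupted_task, run_task):
    to_redo = downstream(graph, corrupted_task)
    done = set(results) - to_redo             # outputs that are still trusted
    pending = set(to_redo)
    while pending:
        ready = [t for t in pending if all(p in done for p in graph[t]["pred"])]
        for t in ready:
            results[t] = run_task(t, {p: results[p] for p in graph[t]["pred"]})
            done.add(t)
            pending.remove(t)
    return results

graph = {
    "a": {"pred": [], "succ": ["b", "c"]},
    "b": {"pred": ["a"], "succ": ["d"]},
    "c": {"pred": ["a"], "succ": ["d"]},
    "d": {"pred": ["b", "c"], "succ": []},
}
run = lambda task, inputs: sum(inputs.values()) + 1   # toy task body
results = {"a": 1, "b": 999, "c": 2, "d": 1002}       # "b" (and hence "d") corrupted
print(recover(graph, results, corrupted_task="b", run_task=run))
```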
An Aircraft Vortex Spacing System (AVOSS) for Dynamical Wake Vortex Spacing Criteria
NASA Technical Reports Server (NTRS)
Hinton, D. A.
1996-01-01
A concept is presented for the development and implementation of a prototype Aircraft Vortex Spacing System (AVOSS). The purpose of the AVOSS is to use current and short-term predictions of the atmospheric state in approach and departure corridors to provide, to ATC facilities, dynamical weather dependent separation criteria with adequate stability and lead time for use in establishing arrival scheduling. The AVOSS will accomplish this task through a combination of wake vortex transport and decay predictions, weather state knowledge, defined aircraft operational procedures and corridors, and wake vortex safety sensors. Work is currently underway to address the critical disciplines and knowledge needs so as to implement and demonstrate a prototype AVOSS in the 1999/2000 time frame.
Residential Consumption Scheduling Based on Dynamic User Profiling
NASA Astrophysics Data System (ADS)
Mangiatordi, Federica; Pallotti, Emiliano; Del Vecchio, Paolo; Capodiferro, Licia
Deployment of household appliances and electric vehicles raises electricity demand in residential areas and increases the impact of a building's electrical power. Variations of electricity consumption across the day may affect both the design of electrical generation facilities and the electricity bill, mainly when dynamic pricing is applied. This paper focuses on an energy management system able to control the day-ahead electricity demand in a residential area, taking into account both the variability of energy production costs and the profiling of users. Each user's behavior is dynamically profiled on the basis of the tasks performed during the previous days and the tasks foreseen for the current day. Depending on the size and the time flexibility of their tasks, home inhabitants are grouped into one of N energy profiles using a k-means algorithm. For a fixed energy generation cost, each energy profile is associated with a different hourly energy cost. The goal is to identify bad user profiles and make them pay a higher bill; a bad profile is, for example, a user who schedules many consumption tasks with low flexibility in task reallocation time. The proposed energy management system automatically schedules the tasks, solving a multi-objective optimization problem based on an MPSO strategy. The goals, when identifying bad user profiles, are to reduce the peak-to-average ratio in energy demand and to minimize energy costs, promoting virtuous behaviors.
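A minimal sketch of the profiling step, assuming each household is described by two features (number of daily consumption tasks and mean time flexibility in hours) and grouped into one of N profiles with k-means; the features and the data are invented for illustration only.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each household to the nearest profile center.
        labels = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Each row: (number of consumption tasks per day, mean flexibility in hours).
households = np.array([[3, 6.0], [4, 5.5], [12, 0.5], [11, 1.0],
                       [6, 3.0], [7, 2.5], [13, 0.8], [2, 7.0]])
labels, centers = kmeans(households, k=3)
print(labels)   # profile index per household; heavy, low-flexibility users form a "bad" profile
```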
NASA Technical Reports Server (NTRS)
Mclean, David R.; Tuchman, Alan; Potter, William J.
1991-01-01
Recently, many expert systems have been developed in a LISP environment and then ported to the real-world C environment before the final system is delivered. This situation may require that the entire system be completely rewritten in C and may actually result in a system which is put together as quickly as possible with little regard for maintainability and further evolution. With the introduction of high-performance UNIX and X-windows based workstations, many of the advantages of developing a first system in the LISP environment have become questionable. A C-based AI development effort is described which is based on a software tools approach with emphasis on reusability and maintainability of code. The discussion starts with simple examples of how list processing can easily be implemented in C and then proceeds to implementations of frames and objects which use dynamic memory allocation. The implementation of procedures which use depth-first search, constraint propagation, context switching and a blackboard-like simulation environment is described. Techniques for managing the complexity of C-based AI software are noted, especially the object-oriented techniques of data encapsulation and incremental development. Finally, all these concepts are put together by describing the components of planning software called the Planning And Resource Reasoning (PARR) shell. This shell has been used successfully to schedule services of the Tracking and Data Relay Satellite System for the Earth Radiation Budget Satellite since May 1987 and will be used for operations scheduling of the Explorer Platform in November 1991.
A Web-Remote/Robotic/Scheduled Astronomical Data Acquisition System
NASA Astrophysics Data System (ADS)
Denny, Robert
2011-03-01
Traditionally, remote/robotic observatory operating systems have been custom made for each observatory. While data reduction pipelines need to be tailored for each investigation, the data acquisition process (especially for stare-mode optical images) is often quite similar across investigations. Since 1999, DC-3 Dreams has focused on providing and supporting a remote/robotic observatory operating system which can be adapted to a wide variety of physical hardware and optics while achieving the highest practical observing efficiency and safe/secure web browser user controls. ACP Expert consists of three main subsystems: (1) a robotic list-driven data acquisition engine which controls all aspects of the observatory, (2) a constraint-driven dispatch scheduler with a long-term database of requests, and (3) a built-in "zero admin" web server and dynamic web pages which provide a remote capability for immediate execution and monitoring as well as entry and monitoring of dispatch-scheduled observing requests. No remote desktop login is necessary for observing, thus keeping the system safe and consistent. All routine operation is via the web browser. A wide variety of telescope mounts, CCD imagers, guiding sensors, filter selectors, focusers, instrument-package rotators, weather sensors, and dome control systems are supported via the ASCOM standardized device driver architecture. The system is most commonly employed on commercial 1-meter and smaller observatories used by universities and advanced amateurs for both science and art. One current project, the AAVSO Photometric All-Sky Survey (APASS), uses ACP Expert to acquire large volumes of data in dispatch-scheduled mode. In its first 18 months of operation (North then South), 40,307 sky images were acquired in 117 photometric nights, resulting in 12,107,135 stars detected two or more times. These stars had measures in 5 filters. The northern station covered 754 fields (6446 square degrees) at least twice, the southern station covered 951 fields (8500 square degrees) at least twice. The database of photometric calibrations is available from AAVSO. The paper will cover the ACP web interface, including the use of AJAX and JSON within a micro-content framework, as well as dispatch scheduler and acquisition engine operation.
Swarm satellite mission scheduling & planning using Hybrid Dynamic Mutation Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Zixuan; Guo, Jian; Gill, Eberhard
2017-08-01
Space missions have traditionally been controlled by operators from a mission control center. Given the increasing number of satellites in some space missions, generating a command list for multiple satellites can be time-consuming and inefficient. Developing multi-satellite, onboard mission scheduling and planning techniques is therefore a key research field for future space mission operations. In this paper, an improved Genetic Algorithm (GA) using a new mutation strategy is proposed as a mission scheduling algorithm. This new mutation strategy, called Hybrid Dynamic Mutation (HDM), combines the advantages of both the dynamic mutation strategy and the adaptive mutation strategy, overcoming weaknesses such as early convergence and long computing time and helping the standard GA to be more efficient and accurate in dealing with complex missions. HDM-GA shows excellent performance in solving both unconstrained and constrained test functions. Experiments using HDM-GA to simulate a multi-satellite mission scheduling problem demonstrate that both the computation-time and success-rate mission requirements can be met. The results of a comparative test between HDM-GA and three other mutation strategies also show that HDM has outstanding performance in terms of speed and reliability.
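A hedged sketch of how a hybrid mutation probability could combine a dynamic term (decaying with the generation counter) and an adaptive term (larger for below-average individuals). The 50/50 weighting and the rate bounds below are assumptions for illustration, not the published HDM formula.

```python
def hybrid_mutation_rate(generation, max_generations, fitness, mean_fitness,
                         p_min=0.01, p_max=0.25):
    """Illustrative hybrid of a dynamic (time-decaying) and an adaptive
    (fitness-dependent) mutation probability."""
    # Dynamic part: start exploratory, decay toward p_min as generations pass.
    dynamic = p_max - (p_max - p_min) * generation / max_generations
    # Adaptive part: individuals worse than the population mean mutate more.
    adaptive = p_max if fitness < mean_fitness else p_min
    return 0.5 * dynamic + 0.5 * adaptive

for g in (0, 50, 100):
    print(g, round(hybrid_mutation_rate(g, 100, fitness=0.4, mean_fitness=0.6), 3))
```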
2004-03-01
turned off. SLEEP: set the timer for 30 seconds before the scheduled transmit time, then sleep the processor. WAKE: when the timer trips, power up the processor ... slots where none of its neighbors are scheduled to transmit. This allows the sensor nodes to perform a simple power management scheme that puts the ... routing. This simple case study highlights the following crucial observation: optimal traffic scheduling in energy-constrained networks requires future
User requirements for a patient scheduling system
NASA Technical Reports Server (NTRS)
Zimmerman, W.
1979-01-01
A rehabilitation institute's needs and wants from a scheduling system were established by (1) studying the existing scheduling system and the variables that affect patient scheduling, (2) conducting a human-factors study to establish the human interfaces that affect patients' meeting prescribed therapy schedules, and (3) developing and administering a questionnaire to the staff which pertains to the various interface problems in order to identify staff requirements to minimize scheduling problems and other factors that may limit the effectiveness of any new scheduling system.
Planning for rover opportunistic science
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Fisher, Forest; Chouinard, Caroline; Castano, Rebecca; Anderson, Robert C.
2004-01-01
The Mars Exploration Rover Spirit recently set a record for the furthest distance traveled in a single sol on Mars. Future planetary exploration missions are expected to use even longer drives to position rovers in areas of high scientific interest. This increase provides the potential for a large rise in the number of new science collection opportunities as the rover traverses the Martian surface. In this paper, we describe the OASIS system, which provides autonomous capabilities for dynamically identifying and pursuing these science opportunities during long-range traverses. OASIS uses machine learning and planning and scheduling techniques to address this goal. Machine learning techniques are applied to analyze data as it is collected and quickly determine new science goals and priorities for these goals. Planning and scheduling techniques are used to alter the behavior of the rover so that new science measurements can be performed while still obeying resource and other mission constraints. We introduce OASIS and describe how planning and scheduling algorithms support opportunistic science.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing a communication network using mixed integer programming. The design yields a system with a much smaller search space than the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. The constrained optimization problem is solved in two stages: first the heuristic Particle Swarm Optimization algorithm is used to obtain a good initial starting point, and then the result is fed into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology on a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with a larger number of constraints and a larger network can easily be adapted and solved.
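The two-stage idea, a population-based global search whose best point seeds a gradient-based local solver, can be sketched with SciPy as below. The objective, the penalty weight, the bounds, and the random-sampling "swarm" that stands in for a full particle swarm implementation are all placeholders, not the formulation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_objective(x, penalty=100.0):
    """Placeholder network-scheduling objective with a penalty term that
    pushes the relaxed (continuous) variables toward integer values."""
    base_cost = np.sum((x - np.array([2.3, 1.7, 3.1])) ** 2)
    integrality = np.sum(np.sin(np.pi * x) ** 2)      # zero when x is integer
    return base_cost + penalty * integrality

bounds = [(0, 5)] * 3
rng = np.random.default_rng(0)

# Stage 1: crude global search (stands in for Particle Swarm Optimization).
swarm = rng.uniform(0, 5, size=(200, 3))
x0 = min(swarm, key=penalized_objective)

# Stage 2: Sequential Quadratic Programming refinement from the best particle.
result = minimize(penalized_objective, x0, method="SLSQP", bounds=bounds)
print(np.round(result.x, 3), round(result.fun, 4))
```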
Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization
NASA Technical Reports Server (NTRS)
Jones, James Patton; Nitzberg, Bill
1999-01-01
The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
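The backfilling policy referred to above can be sketched as follows: jobs start from the head of a FIFO queue while they fit, the blocked head job receives a reservation, and later jobs may jump ahead only if they do not disturb that reservation. The sketch below is the common "EASY" variant with invented job and node counts, not necessarily the exact NAS policy.

```python
def backfill(now, free_nodes, running, queue):
    """Return the queued jobs that may start now.
    running: list of (end_time, nodes); queue: FIFO list of dicts with
    'nodes' and 'runtime'."""
    started = []
    # Start jobs from the head of the queue while they fit.
    while queue and queue[0]["nodes"] <= free_nodes:
        job = queue.pop(0)
        free_nodes -= job["nodes"]
        running.append((now + job["runtime"], job["nodes"]))
        started.append(job)
    if not queue:
        return started
    # Reservation for the blocked head job: when will enough nodes be free?
    head = queue[0]
    avail, shadow = free_nodes, None
    for end, nodes in sorted(running):
        avail += nodes
        if avail >= head["nodes"]:
            shadow = end                     # earliest possible start for the head
            break
    if shadow is None:                       # head can never fit: nothing to protect
        shadow, extra = float("inf"), 0
    else:
        extra = avail - head["nodes"]        # nodes left over once the head starts
    # Backfill later jobs that fit now and do not delay the head's reservation.
    for job in list(queue[1:]):
        fits_now = job["nodes"] <= free_nodes
        ends_before_shadow = now + job["runtime"] <= shadow
        fits_in_extra = job["nodes"] <= extra
        if fits_now and (ends_before_shadow or fits_in_extra):
            queue.remove(job)
            free_nodes -= job["nodes"]
            if not ends_before_shadow:
                extra -= job["nodes"]        # still running when the head starts
            running.append((now + job["runtime"], job["nodes"]))
            started.append(job)
    return started

running = [(10.0, 32)]                       # one job on 32 of 64 nodes until t=10
queue = [{"nodes": 48, "runtime": 20.0},     # head: must wait for 48 nodes
         {"nodes": 8,  "runtime": 5.0},      # short job, can be backfilled
         {"nodes": 16, "runtime": 30.0}]     # fits in the head's leftover nodes
print(backfill(now=0.0, free_nodes=32, running=running, queue=queue))
```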
An undergraduate course, and new textbook, on ``Physical Models of Living Systems''
NASA Astrophysics Data System (ADS)
Nelson, Philip
2015-03-01
I'll describe an intermediate-level course on ``Physical Models of Living Systems.'' The only prerequisite is first-year university physics and calculus. The course is a response to rapidly growing interest among undergraduates in several science and engineering departments. Students acquire several research skills that are often not addressed in traditional courses, including: basic modeling skills, probabilistic modeling skills, data analysis methods, computer programming using a general-purpose platform like MATLAB or Python, dynamical systems, particularly feedback control. These basic skills, which are relevant to nearly any field of science or engineering, are presented in the context of case studies from living systems, including: virus dynamics; bacterial genetics and evolution of drug resistance; statistical inference; superresolution microscopy; synthetic biology; naturally evolved cellular circuits. Publication of a new textbook by WH Freeman and Co. is scheduled for December 2014. Supported in part by EF-0928048 and DMR-0832802.
Prediction-based dynamic load-sharing heuristics
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.
1993-01-01
The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace-driven simulations, they are compared against random scheduling and two effective nonprediction-based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
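A minimal sketch of a prediction-based placement rule: an incoming process goes to the host with the earliest predicted finish time, computed from the host's outstanding load, its relative speed, and the process's predicted demand. The prediction values and host data stand in for the statistical pattern-recognition step described above.

```python
def pick_host(loads, speeds, predicted_demand):
    """loads: outstanding CPU work per host; speeds: relative host speeds.
    The new process goes where its predicted finish time is earliest."""
    return min(loads, key=lambda h: (loads[h] + predicted_demand) / speeds[h])

loads = {"nodeA": 12.0, "nodeB": 4.0, "nodeC": 9.0}
speeds = {"nodeA": 2.0, "nodeB": 1.0, "nodeC": 1.5}
for predicted in (1.0, 25.0):            # a short process and a long one
    h = pick_host(loads, speeds, predicted)
    loads[h] += predicted                # update the chosen host's load
    print(predicted, "->", h, loads)
```

Note how the predicted demand changes the decision: the short process goes to the lightly loaded host, while the long process goes to the faster one.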
Joiner, Wilsaan M; Brayanov, Jordan B; Smith, Maurice A
2013-08-01
The way that a motor adaptation is trained, for example, the manner in which it is introduced or the duration of the training period, can influence its internal representation. However, recent studies examining the gradual versus abrupt introduction of a novel environment have produced conflicting results. Here we examined how these effects determine the effector specificity of motor adaptation during visually guided reaching. After adaptation to velocity-dependent dynamics in the right arm, we estimated the amount of adaptation transferred to the left arm, using error-clamp measurement trials to directly measure changes in learned dynamics. We found that a small but significant amount of generalization to the untrained arm occurs under three different training schedules: a short-duration (15 trials) abrupt presentation, a long-duration (160 trials) abrupt presentation, and a long-duration gradual presentation of the novel dynamic environment. Remarkably, we found essentially no difference between the amount of interlimb generalization when comparing these schedules, with 9-12% transfer of the trained adaptation for all three. However, the duration of training had a pronounced effect on the stability of the interlimb transfer: The transfer elicited from short-duration training decayed rapidly, whereas the transfer from both long-duration training schedules was considerably more persistent (<50% vs. >90% retention over the first 20 trials). These results indicate that the amount of interlimb transfer is similar for gradual versus abrupt training and that interlimb transfer of learned dynamics can occur after even a brief training period but longer training is required for an enduring effect.
1991-09-01
SOFTWARE DEVELOPMENT, by Richard W. Smith, September 1991. Thesis Advisor: Tarek K. Abdel-Hamid. Approved for public release; distribution is unlimited.
Conception of Self-Construction Production Scheduling System
NASA Astrophysics Data System (ADS)
Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru
With the rapid innovation of information technology, many production scheduling systems have been developed. However, a lot of customization to the individual production environment is required, and a large investment in development and maintenance is therefore indispensable. The way scheduling systems are constructed should therefore change. The final objective of this research is to develop a system that builds itself by extracting scheduling techniques automatically from the daily production scheduling work, so that the required investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported to carry out the scheduling work easily and accurately, without any restriction on scheduling operations. With this extraction mechanism installed, a scheduling system can be introduced without a large expense for customization. In this paper, a model for expressing a scheduling problem is first proposed. Then a guideline for extracting the scheduling information and using the extracted information is presented, and some applied functions based on it are also proposed.
Using a System Model for Irrigation Management
NASA Astrophysics Data System (ADS)
de Souza, Leonardo; de Miranda, Eu; Sánchez-Román, Rodrigo; Orellana-González, Alba
2014-05-01
In Systems Thinking, the variables involved in any process have dynamic behavior, according to non-static relationships with the environment. This paper presents a system dynamics model developed to be used as an irrigation management tool. The model involves several parameters related to irrigation, such as soil characteristics, climate data and the crop's physiological parameters. The water available to plants in the soil is defined as a stock in the model, and this soil water content defines the right moment to irrigate and the water depth to be applied. Crop water consumption reduces the soil water content; it is given by the potential evapotranspiration (ET), which acts as an outflow from the stock (soil water content). ET can be estimated by three methods: (a) FAO Penman-Monteith (ETPM), (b) Hargreaves-Samani (ETHS), based on air temperature data, and (c) Class A pan (ETTCA). Data from the States of Ceará and Minas Gerais, Brazil, were used to validate the model, and the crop was bean. Keywords: System Dynamics, soil moisture content, agricultural water balance, irrigation scheduling.
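A minimal stock-and-flow sketch of the described water balance, assuming the common Hargreaves-Samani form ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin) and invented soil, crop and weather parameters; the full model in the paper also covers the Penman-Monteith and Class A pan estimates.

```python
import math

def et_hargreaves_samani(t_max, t_min, ra_mm_per_day):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    ra_mm_per_day is extraterrestrial radiation expressed as equivalent evaporation."""
    t_mean = (t_max + t_min) / 2.0
    return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0))

def simulate(days, capacity_mm=60.0, threshold_mm=30.0, irrigation_mm=25.0):
    """Stock-and-flow water balance: the soil water stock is depleted by crop ET
    and refilled by rain, or by irrigation when it drops below the threshold."""
    stock, log = capacity_mm, []
    for rain, t_max, t_min in days:
        et = et_hargreaves_samani(t_max, t_min, ra_mm_per_day=15.0)
        irrigation = irrigation_mm if stock - et + rain < threshold_mm else 0.0
        stock = min(capacity_mm, max(0.0, stock + rain + irrigation - et))
        log.append((round(et, 2), irrigation, round(stock, 1)))
    return log

weather = [(0.0, 32, 20), (2.0, 31, 21), (0.0, 33, 22), (0.0, 34, 23),
           (5.0, 30, 20), (0.0, 33, 22), (0.0, 34, 24)]   # (rain mm, Tmax, Tmin)
for day in simulate(weather):
    print(day)
```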
Planning and Execution for an Autonomous Aerobot
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara A.; Schaffer, Steven R.; Chouinard, Caroline M.
2010-01-01
The Aerial Onboard Autonomous Science Investigation System (AerOASIS) system provides autonomous planning and execution capabilities for aerial vehicles (see figure). The system is capable of generating high-quality operations plans that integrate observation requests from ground planning teams, as well as opportunistic science events detected onboard the vehicle while respecting mission and resource constraints. AerOASIS allows an airborne planetary exploration vehicle to summarize and prioritize the most scientifically relevant data; identify and select high-value science sites for additional investigation; and dynamically plan, schedule, and monitor the various science activities being performed, even during extended communications blackout periods with Earth.
Landslide: Systematic Dynamic Race Detection in Kernel Space
2012-05-01
schedule_in_flight ← true; CAUSE_TIMER_INTERRUPT(); end if; end function. Thread Scheduling: finally, the Landslide scheduler is responsible for managing ... child process vanish() simultaneously. • double_wait: tests interactions of multiple waiters on a single child. • double_thread_fork: tests for ... conditions using Landslide. We describe them here. • Too many waiters allowed: using the double_wait test case, Group 1 found a bug in which more threads
Cellular-V2X Communications for Platooning: Design and Evaluation.
Nardini, Giovanni; Virdis, Antonio; Campolo, Claudia; Molinaro, Antonella; Stea, Giovanni
2018-05-11
Platooning is a cooperative driving application where autonomous/semi-autonomous vehicles move on the same lane in a train-like manner, keeping a small constant inter-vehicle distance, in order to reduce fuel consumption and gas emissions and to achieve safe and efficient transport. To this aim, they may exploit multiple on-board sensors (e.g., radars, LiDARs, positioning systems) and direct vehicle-to-vehicle communications to synchronize their manoeuvres. The main objective of this paper is to discuss the design choices and factors that determine the performance of a platooning application, when exploiting the emerging cellular vehicle-to-everything (C-V2X) communication technology and considering the scheduled mode, specified by 3GPP for communications over the sidelink assisted by the eNodeB. Since no resource management algorithm is currently mandated by 3GPP for this new challenging context, we focus on analyzing the feasibility and performance of the dynamic scheduling approach, with platoon members asking for radio resources on a per-packet basis. We consider two ways of implementing dynamic scheduling, currently unspecified by 3GPP: the sequential mode, that is somehow reminiscent of time division multiple access solutions based on IEEE 802.11p-till now the only investigated access technology for platooning-and the simultaneous mode with spatial frequency reuse enabled by the eNodeB. The evaluation conducted through system-level simulations provides helpful insights about the proposed configurations and C-V2X parameter settings that mainly affect the reliability and latency performance of data exchange in platoons, under different load settings. Achieved results show that the proposed simultaneous mode succeeds in reducing the latency in the update cycle in each vehicle's controller, thus enabling future high-density platooning scenarios.
NASA Technical Reports Server (NTRS)
1975-01-01
The trajectory simulation mode (SIMSEP) requires the namelist SIMSEP to follow TRAJ. The SIMSEP contains parameters which describe the scope of the simulation, expected dynamic errors, and cumulative statistics from previous SIMSEP runs. Following SIMSEP are a set of GUID namelists, one for each guidance correction maneuver. The GUID describes the strategy, knowledge or estimation uncertainties and cumulative statistics for that particular maneuver. The trajectory display mode (REFSEP) requires only the namelist TRAJ followed by scheduling cards, similar to those used in GODSEP. The fixed field schedule cards define: types of data displayed, span of interest, and frequency of printout. For those users who can vary the amount of blank common storage in their runs, a guideline to estimate the total MAPSEP core requirements is given. Blank common length is related directly to the dimension of the dynamic state (NDIM) used in transition matrix (STM) computation, and, the total augmented (knowledge) state (NAUG). The values of program and blank common must be added to compute the total decimal core for a CDC 6500. Other operating systems must scale these requirements appropriately.
Teddy, S D; Quek, C; Lai, E M-K; Cinar, A
2010-03-01
Therapeutically, the closed-loop blood glucose-insulin regulation paradigm via a controllable insulin pump offers a potential solution to the management of diabetes. However, the development of such a closed-loop regulatory system to date has been hampered by two main issues: 1) the limited knowledge on the complex human physiological process of glucose-insulin metabolism that prevents a precise modeling of the biological blood glucose control loop; and 2) the vast metabolic biodiversity of the diabetic population due to varying exogenous and endogenous disturbances such as food intake, exercise, stress, and hormonal factors. In addition, current attempts at closed-loop glucose regulation generally require some form of prior meal announcement, and this constitutes a severe limitation to the applicability of such systems. In this paper, we present a novel intelligent insulin schedule based on the pseudo self-evolving cerebellar model articulation controller (PSECMAC) associative learning memory model that emulates the healthy human insulin response to food ingestion. The proposed PSECMAC intelligent insulin schedule requires no prior meal announcement and delivers the necessary insulin dosage based only on the observed blood glucose fluctuations. Using a simulated healthy subject, the proposed PSECMAC insulin schedule is demonstrated to be able to accurately capture the complex human glucose-insulin dynamics and robustly address intraperson metabolic variability. Subsequently, the PSECMAC intelligent insulin schedule is employed on a group of type-1 diabetic patients to regulate their impaired blood glucose levels. Preliminary simulation results are highly encouraging. The work reported in this paper represents a major paradigm shift in the management of diabetes, where patient compliance is poor and the need for prior meal announcement under current treatment regimes poses a significant challenge to an active lifestyle.
Team formation and breakup in multiagent systems
NASA Astrophysics Data System (ADS)
Rao, Venkatesh Guru
The goal of this dissertation is to pose and solve problems involving team formation and breakup in two specific multiagent domains: formation travel and space-based interferometric observatories. The methodology employed comprises elements drawn from control theory, scheduling theory and artificial intelligence (AI). The original contribution of the work comprises three elements. The first contribution, the partitioned state-space approach is a technique for formulating and solving co-ordinated motion problem using calculus of variations techniques. The approach is applied to obtain optimal two-agent formation travel trajectories on graphs. The second contribution is the class of MixTeam algorithms, a class of team dispatchers that extends classical dispatching by accommodating team formation and breakup and exploration/exploitation learning. The algorithms are applied to observation scheduling and constellation geometry design for interferometric space telescopes. The use of feedback control for team scheduling is also demonstrated with these algorithms. The third contribution is the analysis of the optimality properties of greedy, or myopic, decision-making for a simple class of team dispatching problems. This analysis represents a first step towards the complete analysis of complex team schedulers such as the MixTeam algorithms. The contributions represent an extension to the literature on team dynamics in control theory. The broad conclusions that emerge from this research are that greedy or myopic decision-making strategies for teams perform well when specific parameters in the domain are weakly affected by an agent's actions, and that intelligent systems require a closer integration of domain knowledge in decision-making functions.
A new Self-Adaptive disPatching System for local clusters
NASA Astrophysics Data System (ADS)
Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng
2015-12-01
The scheduler is one of the most important components of a high-performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It improves cluster resource utilization and the overall throughput of tasks, and it provides extra functions for administrators and users. First, to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queued jobs and the number of idle job slots, and then tunes the priority of users' jobs dynamically, so that more jobs run and fewer job slots sit idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules, including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has improved by more than 26%, and the management workload for both administrators and users has been greatly reduced.
Dynamic Resource Allocation for IEEE802.16e
NASA Astrophysics Data System (ADS)
Nascimento, Alberto; Rodriguez, Jonathan
Mobile communications has witnessed an exponential increase in the amount of users, services and applications. New high bandwidth consuming applications are targeted for B3G networks raising more stringent requirements for Dynamic Resource Allocation (DRA) architectures and packet schedulers that must be spectrum efficient and deliver QoS for heterogeneous applications and services. In this paper we propose a new cross layer-based architecture framework embedded in a newly designed DRA architecture for the Mobile WiMAX standard. System level simulation results show that the proposed architecture can be considered a viable candidate solution for supporting mixed services in a cost-effective manner in contrast to existing approaches.
Schedule Dependence in Cancer Therapy: Intravenous Vitamin C and the Systemic Saturation Hypothesis
Miranda Massari, Jorge R.; Duconge, Jorge; Riordan, Neil H.; Ichim, Thomas
2013-01-01
Despite the significant number of in vitro and in vivo studies to assess vitamin C effects on cancer following the application of large doses and its extensive use by alternative medicine practitioners in the USA; the precise schedule for successful cancer therapy is still unknown. Based on interpretation of the available data, we postulate that the relationship between Vitamin C doses and plasma concentration x time, the capability of tissue stores upon distribution, and the saturable mechanism of urinary excretion are all important determinants to understand the physiology of high intravenous vitamin C dose administration and its effect on cancer. Practitioners should pay more attention to the cumulative vitamin C effect instead of the vitamin C concentrations to account for observed discrepancy in antitumor response. We suggest that multiple, intermittent, short-term intravenous infusions of vitamin C over a longer time period will correlate with greater antitumor effects than do single continuous IV doses of the same total exposure. This approach would be expected to minimize saturation of renal reabsorption, providing a continuous “dynamic flow” of vitamin C in the body for optimal systemic exposure and clinical outcomes. This prevents the “systemic saturation” phenomena, which may recycle vitamin C and render it less effective as an anticancer agent. Nonetheless, more pharmacokinetic and pharmacodynamic studies are needed to fully understand this schedule-dependence phenomenon. PMID:24860238
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Eggleston, D. M.
1976-01-01
A flight control system for use in air-to-air combat simulation was designed. The inputs to the flight control system are commanded bank angle and angle of attack; the outputs are commands to the control surface actuators such that the commanded values are achieved in near-minimum time while sideslip is controlled to remain small. For the longitudinal direction, a conventional linear control system with gains scheduled as a function of dynamic pressure is employed. For the lateral direction, a novel control system is employed, consisting of a linear portion for small bank angle errors and a bang-bang control portion for large errors and error rates.
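Gain scheduling on dynamic pressure can be sketched as table interpolation; the gain values, breakpoints, and the simple pitch-axis control law below are illustrative assumptions rather than the reported design.

```python
import numpy as np

# Illustrative gain table: pitch-loop gains tabulated against dynamic pressure (Pa).
q_table  = np.array([2000.0, 8000.0, 20000.0, 40000.0])
kp_table = np.array([   2.4,    1.6,     0.9,     0.5])   # proportional gain
kq_table = np.array([   0.8,    0.55,    0.35,    0.2])   # pitch-rate damping gain

def scheduled_gains(q_bar):
    """Interpolate controller gains at the current dynamic pressure."""
    kp = np.interp(q_bar, q_table, kp_table)
    kq = np.interp(q_bar, q_table, kq_table)
    return kp, kq

def elevator_command(alpha_cmd, alpha, pitch_rate, q_bar):
    kp, kq = scheduled_gains(q_bar)
    return kp * (alpha_cmd - alpha) - kq * pitch_rate   # elevator deflection (rad)

print(elevator_command(alpha_cmd=0.10, alpha=0.06, pitch_rate=0.02, q_bar=12000.0))
```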
Complex ambulatory settings demand scheduling systems.
Ross, K M
1998-01-01
Practice management systems are becoming more and more complex, as they are asked to integrate all aspects of patient and resource management. Although patient scheduling is a standard expectation in any ambulatory environment, facilities and equipment resource scheduling are additional functionalities of scheduling systems. Because these functions were not typically managed in manual patient scheduling, often the result was resource mismanagement, along with a potential negative impact on utilization, patient flow and provider productivity. As ambulatory organizations have become more seasoned users of practice management software, the value of resource scheduling has become apparent. Appointment scheduling within a fully integrated practice management system is recognized as an enhancement of scheduling itself and provides additional tools to manage other information needs. Scheduling, as one component of patient information management, provides additional tools in these areas.
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low price forces manufacturers to seek effective schedules that improve the coordination of multiple resources and reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to consider machines and operators simultaneously, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, are very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by dynamic programming with the additional penalties embedded in the state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented in the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that it generates high-quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, the two resource types are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
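A compact sketch of the relaxation idea: the per-slot capacity constraint is priced with Lagrange multipliers, each part then solves its own small timing subproblem against those prices, and a subgradient step updates the prices. The data, the single-operation parts, and the omission of the feasibility-restoring heuristic are simplifications for illustration, not the formulation used in the study.

```python
def solve_subproblem(p, due, weight, lam, horizon):
    """Each part independently picks its start slot, trading tardiness cost
    against the Lagrangian prices of the slots it would occupy."""
    best = None
    for start in range(horizon - p + 1):
        finish = start + p
        tardiness = max(0, finish - due)
        cost = weight * tardiness + sum(lam[start:finish])
        if best is None or cost < best[0]:
            best = (cost, start)
    return best[1]

def lagrangian_schedule(parts, horizon, capacity=1, iters=60, step=0.5):
    """parts: list of (processing_time, due_date, tardiness_weight).
    Relax the 'at most `capacity` parts per slot' constraint with multipliers lam."""
    lam = [0.0] * horizon
    starts = None
    for _ in range(iters):
        starts = [solve_subproblem(p, d, w, lam, horizon) for p, d, w in parts]
        # Subgradient step: raise the price of overloaded slots, relax idle ones.
        usage = [0] * horizon
        for (p, _, _), s in zip(parts, starts):
            for t in range(s, s + p):
                usage[t] += 1
        lam = [max(0.0, l + step * (u - capacity)) for l, u in zip(lam, usage)]
    return starts

parts = [(3, 4, 2.0), (2, 5, 1.0), (4, 9, 1.5)]    # (processing, due, weight)
print(lagrangian_schedule(parts, horizon=12))
```

The dual solution is not guaranteed to be capacity-feasible; in practice, as described above, a separate heuristic would repair the schedule.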
System Dynamics and Management Science Approaches Toward Increasing Acquisition Process Efficiency
2015-06-26
1-85908-475-5, The Association of Chartered Certified Accountants (ACCA), London, UK, February 2012. Wood 2012, Roy Wood: Schedule-Driven Costs in ... services, improved quality, and the generation of additional revenues (European Commission 2003). Especially in a time of financial shortfalls and cuts in ... (Lyneis 2007, Garcia 2009, Sterman 2000). Figure 11: benefits of the project client and the financial aspects. The next modeling step is to reflect the
Scheduling and Coordination of Multiple Dynamic Systems.
1979-12-01
Lemma 9. For C(·) defined in (39), lim_{D'→D⁻} C(D') = C(D⁻) exists ∀ D ∈ (D_min, D_max] (42) and lim_{D'→D⁺} C(D') = C(D⁺) exists ∀ D ∈ [D_min, D_max) (43). Proof. For any D ∈ (D_min, D_max] ... [t_0, t_1], where t = [t_1, ..., t_K]' (151). With this minor abuse of notation, the gradient of C(t, V) is to be found with respect to t ∈ R^K. This
Information, Consistent Estimation and Dynamic System Identification.
1976-11-01
... representative model from a given model set, applicable to infinite and even non-compact model sets. ... ergodicity. For a thorough development of ergodic theory the reader is referred to, e.g., Doob [1953], Halmos [1956] and Chacon and Ornstein [1959].
User-Assisted Store Recycling for Dynamic Task Graph Schedulers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan
The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because recycling functions can be input data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overheads, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed to collect and analyze data. These nodes use batteries with limited energy for easy deployment and low cost, and this limited energy budget is closely tied to the lifetime of the sensor nodes. Efficient energy management is therefore important for extending node lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores demands attention to the problem of reducing power consumption in the cores themselves. In this paper, we propose an energy-efficient scheduling method for sensor nodes with a uniform multi-core processor. We extend T-Ler plane based scheduling, which provides globally optimal scheduling on uniform multi-core and multiprocessor platforms, with power management based on dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to utilize DPM efficiently. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
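A hedged, non-preemptive sketch of combining deadline-driven scheduling with DPM: when the ready queue empties, the core is put to sleep only if the gap to the next release exceeds a break-even time. The break-even value and the task set are invented, and the published method additionally exploits slack under preemptive EDF on multiple cores.

```python
import heapq

BREAK_EVEN = 4.0   # sleeping only pays off if the idle gap exceeds this (illustrative)

def edf_with_dpm(jobs, horizon):
    """jobs: list of (release, deadline, wcet).  Run the earliest-deadline ready job;
    when the ready queue is empty, sleep the core if the gap to the next release
    exceeds the DPM break-even time, otherwise stay idle."""
    jobs = sorted(jobs)                      # by release time
    ready, t, i, trace = [], 0.0, 0, []
    while t < horizon:
        while i < len(jobs) and jobs[i][0] <= t:
            r, d, c = jobs[i]; heapq.heappush(ready, (d, c)); i += 1
        if ready:
            d, c = heapq.heappop(ready)
            trace.append(("run", t, t + c)); t += c
        elif i < len(jobs):
            gap = jobs[i][0] - t
            mode = "sleep" if gap > BREAK_EVEN else "idle"
            trace.append((mode, t, jobs[i][0])); t = jobs[i][0]
        else:
            trace.append(("sleep", t, horizon)); t = horizon
    return trace

jobs = [(0.0, 6.0, 2.0), (1.0, 8.0, 1.5), (12.0, 18.0, 3.0)]
for entry in edf_with_dpm(jobs, horizon=20.0):
    print(entry)
```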
Chemotherapy and treatment scheduling: the Johns Hopkins Oncology Center Outpatient Department.
Majidi, F.; Enterline, J. P.; Ashley, B.; Fowler, M. E.; Ogorzalek, L. L.; Gaudette, R.; Stuart, G. J.; Fulton, M.; Ettinger, D. S.
1993-01-01
The Chemotherapy and Treatment Scheduling System provides integrated appointment and facility scheduling for very complex procedures. It is fully integrated with other scheduling systems at The Johns Hopkins Oncology Center and is supported by the Oncology Clinical Information System (OCIS). It provides a combined visual and textual environment for the scheduling of events that have multiple dimensions and dependencies on other scheduled events. It is also fully integrated with other clinical decision support and ancillary systems within OCIS. The system has resulted in better patient flow through the ambulatory care areas of the Center. Implementing the system required changes in behavior among physicians, staff, and patients. This system provides a working example of building a sophisticated rule-based scheduling system using a relatively simple paradigm. It also is an example of what can be achieved when there is total integration between the operational and clinical components of patient care automation. PMID:8130453
A framework for service enterprise workflow simulation with multi-agents cooperation
NASA Astrophysics Data System (ADS)
Tan, Wenan; Xu, Wei; Yang, Fujun; Xu, Lida; Jiang, Chuanqun
2013-11-01
Dynamic process modelling for service businesses is a key technique for service-oriented information systems and service business management, and the workflow model of business processes is the core part of service systems. Service business workflow simulation is the prevalent approach for analysing service business processes dynamically. The generic method for service business workflow simulation is based on discrete-event queuing theory, which lacks flexibility and scalability. In this paper, we propose a service-workflow-oriented framework for the process simulation of service businesses that uses multi-agent cooperation to address these issues. Social rationality of agents is introduced into the proposed framework. By adopting rationality as one social factor in decision-making strategies, flexible scheduling of activity instances has been implemented. A system prototype has been developed to validate the proposed simulation framework through a business case study.
Modeling and Analysis of Commercial Building Electrical Loads for Demand Side Management
NASA Astrophysics Data System (ADS)
Berardino, Jonathan
In recent years there has been a push in the electric power industry for more customer involvement in the electricity markets. Traditionally the end user has played a passive role in the planning and operation of the power grid. However, many energy markets have begun opening up opportunities to consumers who wish to commit a certain amount of their electrical load under various demand side management programs. The potential benefits of more demand participation include reduced operating costs and new revenue opportunities for the consumer, as well as more reliable and secure operations for the utilities. The management of these load resources creates challenges and opportunities to the end user that were not present in previous market structures. This work examines the behavior of commercial-type building electrical loads and their capacity for supporting demand side management actions. This work is motivated by the need for accurate and dynamic tools to aid in the advancement of demand side operations. A dynamic load model is proposed for capturing the response of controllable building loads. Building-specific load forecasting techniques are developed, with particular focus paid to the integration of building management system (BMS) information. These approaches are tested using Drexel University building data. The application of building-specific load forecasts and dynamic load modeling to the optimal scheduling of multi-building systems in the energy market is proposed. Sources of potential load uncertainty are introduced in the proposed energy management problem formulation in order to investigate the impact on the resulting load schedule.
NASA Technical Reports Server (NTRS)
Adair, Jerry R.
1994-01-01
This paper is a consolidated report on ten major planning and scheduling systems that have been developed by the National Aeronautics and Space Administration (NASA). A description of each system, its components, and how it could be potentially used in private industry is provided in this paper. The planning and scheduling technology represented by the systems ranges from activity based scheduling employing artificial intelligence (AI) techniques to constraint based, iterative repair scheduling. The space related application domains in which the systems have been deployed vary from Space Shuttle monitoring during launch countdown to long term Hubble Space Telescope (HST) scheduling. This paper also describes any correlation that may exist between the work done on different planning and scheduling systems. Finally, this paper documents the lessons learned from the work and research performed in planning and scheduling technology and describes the areas where future work will be conducted.
Schell, Greggory J; Lavieri, Mariel S; Helm, Jonathan E; Liu, Xiang; Musch, David C; Van Oyen, Mark P; Stein, Joshua D
2014-08-01
To determine whether dynamic and personalized schedules of visual field (VF) testing and intraocular pressure (IOP) measurements result in an improvement in disease progression detection compared with fixed interval schedules for performing these tests when evaluating patients with open-angle glaucoma (OAG). Secondary analyses using longitudinal data from 2 randomized controlled trials. A total of 571 participants from the Advanced Glaucoma Intervention Study (AGIS) and the Collaborative Initial Glaucoma Treatment Study (CIGTS). Perimetric and tonometric data were obtained for AGIS and CIGTS trial participants and used to parameterize and validate a Kalman filter model. The Kalman filter updates knowledge about each participant's disease dynamics as additional VF tests and IOP measurements are obtained. After incorporating the most recent VF and IOP measurements, the model forecasts each participant's disease dynamics into the future and characterizes the forecasting error. To determine personalized schedules for future VF tests and IOP measurements, we developed an algorithm by combining the Kalman filter for state estimation with the predictive power of logistic regression to identify OAG progression. The algorithm was compared with 1-, 1.5-, and 2-year fixed interval schedules of obtaining VF and IOP measurements. Length of diagnostic delay in detecting OAG progression, efficiency of detecting progression, and number of VF and IOP measurements needed to assess for progression. Participants were followed in the AGIS and CIGTS trials for a mean (standard deviation) of 6.5 (2.8) years. Our forecasting model achieved a 29% increased efficiency in identifying OAG progression (P<0.0001) and detected OAG progression 57% sooner (reduced diagnostic delay) (P = 0.02) than following a fixed yearly monitoring schedule, without increasing the number of VF tests and IOP measurements required. The model performed well for patients with mild and advanced disease. The model performed significantly more testing of patients who exhibited OAG progression than nonprogressing patients (1.3 vs. 1.0 tests per year; P<0.0001). Use of dynamic and personalized testing schedules can enhance the efficiency of OAG progression detection and reduce diagnostic delay compared with yearly fixed monitoring intervals. If further validation studies confirm these findings, such algorithms may be able to greatly enhance OAG management. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
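A minimal sketch of the forecasting machinery, assuming a two-state Kalman filter (mean deviation and its slope) driven by yearly visual field results. The state model, noise levels, and measurements below are illustrative and far simpler than the multivariate model parameterized from the AGIS and CIGTS data.

```python
import numpy as np

# One-dimensional random-walk-with-trend model of a patient's mean deviation (dB):
# state = [MD, MD_slope]; the slope captures the rate of glaucoma progression.
F = np.array([[1.0, 1.0],     # MD_next = MD + slope * (one visit interval)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])    # a visual field test observes MD only
Q = np.diag([0.05, 0.01])     # process noise (illustrative)
R = np.array([[1.0]])         # VF measurement noise (illustrative)

def kalman_step(x, P, z):
    # Predict one visit ahead.
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    # Update with the new visual field measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

def forecast(x, P, steps):
    for _ in range(steps):
        x, P = F @ x, F @ P @ F.T + Q
    return x, P           # forecast mean and its uncertainty

x, P = np.array([-2.0, 0.0]), np.eye(2)
for md in [-2.1, -2.4, -2.9, -3.3]:            # yearly VF results (dB)
    x, P = kalman_step(x, P, np.array([md]))
x2, P2 = forecast(x, P, steps=2)
print(np.round(x, 2), np.round(x2, 2))          # current estimate and 2-visit-ahead forecast
```

In the personalized scheduling loop, a forecast with high uncertainty or a steep estimated slope would trigger an earlier test, whereas a stable forecast would allow the interval to be stretched.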
Scheduling from the perspective of the application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berman, F.; Wolski, R.
1996-12-31
Metacomputing is the aggregation of distributed and high-performance resources on coordinated networks. With careful scheduling, resource-intensive applications can be implemented efficiently on metacomputing systems at the sizes of interest to developers and users. In this paper we focus on the problem of scheduling applications on metacomputing systems. We introduce the concept of application-centric scheduling in which everything about the system is evaluated in terms of its impact on the application. Application-centric scheduling is used by virtually all metacomputer programmers to achieve performance on metacomputing systems. We describe two successful metacomputing applications to illustrate this approach, and describe AppLeS scheduling agents which generalize the application-centric scheduling approach. Finally, we show preliminary results which compare AppLeS-derived schedules with conventional strip and blocked schedules for a two-dimensional Jacobi code.
Linear modeling of steady-state behavioral dynamics.
Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert
2002-01-01
The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
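A generic sketch of estimating an empirical transfer function from one input/output record as the ratio of the cross-spectrum to the input power spectrum; the synthetic data and single-record estimate are simplifications of the multi-schedule, averaged estimates described above.

```python
import numpy as np

def empirical_transfer_function(u, y):
    """Estimate H(f) = S_uy(f) / S_uu(f) from one input/output record.
    With several records, the spectra would be averaged before taking the ratio."""
    U, Y = np.fft.rfft(u - u.mean()), np.fft.rfft(y - y.mean())
    s_uu = U * np.conj(U)
    s_uy = Y * np.conj(U)
    freqs = np.fft.rfftfreq(len(u), d=1.0)       # d = sample spacing (1-s bins here)
    return freqs[1:], s_uy[1:] / s_uu[1:]        # skip the DC bin

# Synthetic example: the "response" is a smoothed, delayed copy of the input.
rng = np.random.default_rng(0)
u = rng.normal(size=2000)                        # e.g. a reinforcement-rate profile
kernel = np.exp(-np.arange(40) / 8.0); kernel /= kernel.sum()
y = np.convolve(u, kernel, mode="same") + 0.05 * rng.normal(size=u.size)
freqs, H = empirical_transfer_function(u, y)
print(np.round(np.abs(H[:5]), 3))                # gain at the lowest frequencies
```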
[Toward a New Immunization Schedule in Spain, 2016 (Part 2)].
Navarro-Alonso, José Antonio; Taboada-Rodríguez, José Antonio; Limia-Sánchez, Aurora
2016-03-08
Immunization schedules are intrinsically dynamic in order to embed the immunologic and epidemiologic changes in any specific geographic Region. According to this, the current study addresses a proposal to modify the Childhood Immunization Schedule in Spain. In order to move from a three plus one schema to a two plus one, we undertake a review of the available literature to explore the immunological and clinical rationale behind this change, including an overview of the potential impact on this schedule of premature infants. Additionally, some recommendations are made regarding those Spanish regions which start hepatitis B vaccination at the newborn period.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, P.; Olson, R.; Wilkowski, O.G.
1997-06-01
This report presents the results from Subtask 1.3 of the International Piping Integrity Research Group (IPIRG) program. The objective of Subtask 1.3 is to develop data to assess analysis methodologies for characterizing the fracture behavior of circumferentially cracked pipe in a representative piping system under combined inertial and displacement-controlled stresses. A unique experimental facility was designed and constructed. The piping system evaluated is an expansion loop with over 30 meters of 16-inch diameter Schedule 100 pipe. The experimental facility is equipped with special hardware to ensure system boundary conditions could be appropriately modeled. The test matrix involved one uncracked and five cracked dynamic pipe-system experiments. The uncracked experiment was conducted to evaluate piping system damping and natural frequency characteristics. The cracked-pipe experiments evaluated the fracture behavior, pipe system response, and stability characteristics of five different materials. All cracked-pipe experiments were conducted at PWR conditions. Material characterization efforts provided tensile and fracture toughness properties of the different pipe materials at various strain rates and temperatures. Results from all pipe-system experiments and material characterization efforts are presented. Results of fracture mechanics analyses, dynamic finite element stress analyses, and stability analyses are presented and compared with experimental results.
NASA Technical Reports Server (NTRS)
Conway, Sheila R.
2006-01-01
Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening for more costly, higher-fidelity simulation. Of concern is the ability of the models to capture the essence of the system and provide insight into system behavior in a timely manner and without breaking the bank. The method is put to the test with the development of a model to address situations where capacity is overburdened and propagation of the resultant delay through later flights is possible via flight dependencies. The resultant model includes primitive representations of principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully chosen modeling fidelity did allow for the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-12-20
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
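The offline relationship described above, in which a min-energy subproblem serves as a building block for the min-completion-time problem, can be mimicked with a simple bisection over the completion time: for each candidate deadline, a min-energy routine reports the least energy needed, and the smallest feasible deadline is kept. The routine below is a schematic stand-in under an assumed convex power model, not the paper's optimal algorithm; the function names and parameters are hypothetical.

```python
def min_energy_to_finish(deadline, data_bits, power_of_rate):
    """Hypothetical building block: least energy to ship `data_bits`
    by `deadline`, using a constant rate and a convex power model
    power_of_rate(r) = transmit power needed to sustain rate r."""
    rate = data_bits / deadline
    return deadline * power_of_rate(rate)

def min_completion_time(data_bits, energy_budget, power_of_rate,
                        lo=1e-3, hi=1e6, tol=1e-6):
    """Bisect on the completion time using the min-energy subproblem."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if min_energy_to_finish(mid, data_bits, power_of_rate) <= energy_budget:
            hi = mid                        # feasible: try to finish earlier
        else:
            lo = mid                        # infeasible: allow more time
    return hi

# Hypothetical convex power model: power grows with the square of the rate.
t = min_completion_time(data_bits=1e6, energy_budget=50.0,
                        power_of_rate=lambda r: 1e-9 * r ** 2)
```

The bisection works because, under a convex power model, the minimum energy needed is monotonically decreasing in the allowed completion time.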
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-01-01
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135
Planning for the semiconductor manufacturer of the future
NASA Technical Reports Server (NTRS)
Fargher, Hugh E.; Smith, Richard A.
1992-01-01
Texas Instruments (TI) is currently contracted by the Air Force Wright Laboratory and the Defense Advanced Research Projects Agency (DARPA) to develop the next generation flexible semiconductor wafer fabrication system called Microelectronics Manufacturing Science & Technology (MMST). Several revolutionary concepts are being pioneered on MMST, including the following: new single-wafer rapid thermal processes, in-situ sensors, cluster equipment, and advanced Computer Integrated Manufacturing (CIM) software. The objective of the project is to develop a manufacturing system capable of achieving an order of magnitude improvement in almost all aspects of wafer fabrication. TI was awarded the contract in Oct., 1988, and will complete development with a fabrication facility demonstration in April, 1993. An important part of MMST is development of the CIM environment responsible for coordinating all parts of the system. The CIM architecture being developed is based on a distributed object oriented framework made of several cooperating subsystems. The software subsystems include the following: process control for dynamic control of factory processes; modular processing system for controlling the processing equipment; generic equipment model which provides an interface between processing equipment and the rest of the factory; specification system which maintains factory documents and product specifications; simulator for modelling the factory for analysis purposes; scheduler for scheduling work on the factory floor; and the planner for planning and monitoring of orders within the factory. This paper first outlines the division of responsibility between the planner, scheduler, and simulator subsystems. It then describes the approach to incremental planning and the way in which uncertainty is modelled within the plan representation. Finally, current status and initial results are described.
Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.
Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan
2017-06-26
Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor-node-specific requirements, often materialized in predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique, in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller.
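The idea of combining a static, clock-driven table with a dynamic, event-driven queue can be illustrated with the toy scheduler below: hard slots from a precomputed table are honoured verbatim, and event-driven soft tasks are fitted into the remaining idle gaps. This is a minimal sketch under assumed integer time ticks, not the published H²RTS algorithm or its schedulability tests.

```python
import heapq

def hybrid_schedule(table, events, horizon):
    """Toy hybrid scheduler: `table` holds clock-driven hard slots
    (start, duration, name); `events` holds event-driven soft tasks
    (release, duration, name) that only run in the idle gaps."""
    timeline = []
    busy = sorted(table)                       # clock-driven slots, by start
    ready = []                                 # min-heap keyed on release time
    for release, dur, name in events:
        heapq.heappush(ready, (release, dur, name))
    t = 0
    while t < horizon:
        if busy and busy[0][0] <= t:           # a hard slot starts now
            start, dur, name = busy.pop(0)
            timeline.append((t, name))
            t += dur
            continue
        next_hard = busy[0][0] if busy else horizon
        if ready and ready[0][0] <= t:
            release, dur, name = ready[0]
            if t + dur <= next_hard:           # soft task fits before next hard slot
                heapq.heappop(ready)
                timeline.append((t, name))
                t += dur
                continue
        t += 1                                 # idle tick
    return timeline

plan = hybrid_schedule(table=[(0, 2, "sample"), (10, 2, "transmit")],
                       events=[(1, 3, "log"), (4, 5, "compress")],
                       horizon=20)
```

Because soft tasks are only admitted when they fit entirely before the next hard slot, the clock-driven portion keeps its jitter-free timing while idle CPU time is still used.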
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
2018-04-01
In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
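The weighted objective sketched in this abstract, combining user-assigned priority, resource consumption, and image-acquisition time, can be expressed as a simple fitness function over a candidate schedule, of the sort a genetic algorithm would maximize. The field names and weights below are hypothetical placeholders, not the authors' formulation.

```python
def mission_usability(schedule, w_priority=0.5, w_resource=0.3, w_time=0.2):
    """Toy fitness for a candidate imaging schedule. Each entry is a dict
    with hypothetical fields: priority (higher is better), energy and
    memory use (normalised 0-1), and acquisition_delay in orbits."""
    score = 0.0
    for task in schedule:
        resource_cost = 0.5 * task["energy"] + 0.5 * task["memory"]
        score += (w_priority * task["priority"]
                  - w_resource * resource_cost
                  - w_time * task["acquisition_delay"])
    return score

# Hypothetical candidate schedule evaluated by the GA's selection step.
candidate = [{"priority": 3, "energy": 0.4, "memory": 0.2, "acquisition_delay": 1},
             {"priority": 1, "energy": 0.1, "memory": 0.1, "acquisition_delay": 4}]
fitness = mission_usability(candidate)
```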
The resource envelope as a basis for space station management system scheduling
NASA Technical Reports Server (NTRS)
Bush, Joy; Critchfield, Anna
1987-01-01
The Platform Management System (PMS) Resource Envelope Scheduling System (PRESS) expert system prototype developed for space station scheduling is described. The purpose of developing the prototype was to investigate the resource envelope concept in a practical scheduling application, using a commercially available expert system shell. PRESS is being developed on an IBM PC/AT using Teknowledge, Inc.'s M.1 expert system shell.
A New Engine for Schools: The Flexible Scheduling Paradigm
ERIC Educational Resources Information Center
Snyder, Yaakov; Herer, Yale T.; Moore, Michael
2012-01-01
We present a new approach for the organization of schools, which we call the flexible scheduling paradigm (FSP). FSP improves student learning by dynamically redeploying teachers and other pedagogical resources to provide students with customized learning conditions over shorter time periods called "mini-terms" instead of semesters or years. By…
Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning
NASA Technical Reports Server (NTRS)
Drummond, Mark; Fox, Mark; Tate, Austin; Zweben, Monte
1992-01-01
The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Harold C.; Ibanez, Daniel Alejandro
This report documents the ASC/ATDM Kokkos deliverable "Production Portable Dynamic Task DAG Capability." This capability enables applications to create and execute a dynamic task DAG: a collection of heterogeneous computational tasks with a directed acyclic graph (DAG) of "execute after" dependencies, where tasks and their dependencies are dynamically created and destroyed as tasks execute. The Kokkos task scheduler executes the dynamic task DAG on the target execution resource, e.g., a multicore CPU, a manycore CPU such as Intel's Knights Landing (KNL), or an NVIDIA GPU. Several major technical challenges had to be addressed during development of Kokkos' Task DAG capability: (1) portability to a GPU with its simplified hardware and micro-runtime, (2) thread-scalable memory allocation and deallocation from a bounded pool of memory, (3) a thread-scalable scheduler for the dynamic task DAG, and (4) usability by applications.
NASA Technical Reports Server (NTRS)
Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.
1980-01-01
A user guide and programmer documentation are provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses on the Tracking Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real-time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss-of-sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.
Dynamics of assembly production flow
NASA Astrophysics Data System (ADS)
Ezaki, Takahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro
2015-06-01
Despite recent developments in management theory, maintaining a manufacturing schedule remains difficult because of production delays and fluctuations in the demand and supply of materials. The dynamic response of manufacturing systems to such disruptions has rarely been studied. To capture these responses, we investigate a process that models the assembly of parts into end products. The complete assembly process is represented by a directed tree, where the smallest parts are injected at leaves and the end products are removed at the root. A discrete assembly process, represented by a node on the network, integrates parts, which are then sent to the next downstream node as a single part. The model exhibits some intriguing phenomena, including overstock cascade, phase transition in terms of demand and supply fluctuations, nonmonotonic distribution of stockout in the network, and the formation of a stockout path and stockout chains. Surprisingly, these rich phenomena result from only the nature of distributed assembly processes. From a physical perspective, these phenomena provide insight into delay dynamics and inventory distributions in large-scale manufacturing systems.
A Dynamic Approach to Rebalancing Bike-Sharing Systems
2018-01-01
Bike-sharing services are flourishing in Smart Cities worldwide. They provide a low-cost and environment-friendly transportation alternative and help reduce traffic congestion. However, these new services are still under development, and several challenges need to be solved. A major problem is the management of rebalancing trucks in order to ensure that bikes and stalls in the docking stations are always available when needed, despite the fluctuations in the service demand. In this work, we propose a dynamic rebalancing strategy that exploits historical data to predict the network conditions and promptly act in case of necessity. We use Birth-Death Processes to model the stations’ occupancy and decide when to redistribute bikes, and graph theory to select the rebalancing path and the stations involved. We validate the proposed framework on the data provided by New York City’s bike-sharing system. The numerical simulations show that a dynamic strategy able to adapt to the fluctuating nature of the network outperforms rebalancing schemes based on a static schedule. PMID:29419771
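The birth-death modeling of station occupancy mentioned above can be illustrated with a stationary-distribution calculation for a single station under constant pickup and return rates; a station is flagged for rebalancing when its probability of being empty or full is too high. The rates, capacity, and threshold below are hypothetical, and the real framework described in the abstract additionally uses historical data and graph-based path selection.

```python
def stationary_occupancy(arrival_rate, return_rate, capacity):
    """Stationary distribution of a finite birth-death chain: the state is
    the number of docked bikes (0..capacity); bikes leave at the rider
    arrival rate and come back at the return rate."""
    rho = return_rate / arrival_rate
    weights = [rho ** k for k in range(capacity + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def needs_rebalancing(arrival_rate, return_rate, capacity, threshold=0.2):
    """Flag a station whose chance of being empty or full exceeds the threshold."""
    pi = stationary_occupancy(arrival_rate, return_rate, capacity)
    return pi[0] > threshold or pi[-1] > threshold

# Hypothetical station: 6 pickups/hour, 4 returns/hour, 20 docks.
flag = needs_rebalancing(arrival_rate=6.0, return_rate=4.0, capacity=20)
```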
MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler
NASA Astrophysics Data System (ADS)
Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre
This paper presents a simulator for of a decentralized modular grid scheduler named MaGate. MaGate’s design emphasizes scheduler interoperability by providing intelligent scheduling serving the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions, with continuously arriving grid jobs. Received jobs are either allocated on local resources, or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on GridSim toolkit and Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of behaviors of different collaborative policies among a community of MaGates is provided. Results support the use of the proposed approach as a functional ready grid scheduler simulator.
Performance analysis of a large-grain dataflow scheduling paradigm
NASA Technical Reports Server (NTRS)
Young, Steven D.; Wills, Robert W.
1993-01-01
A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.
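A run-time macro-dataflow dispatcher of the kind described can be sketched as list scheduling over a static flow graph: a node becomes ready once all its predecessors finish, and ready nodes are assigned to the first free processor. The graph, durations, and processor count below are hypothetical; the sketch ignores the communication and scheduling delays that the simulation studies in the abstract account for.

```python
from collections import deque

def macro_dataflow_run(graph, durations, num_procs):
    """Simulate run-time dispatch of a static flow graph.
    graph: node -> list of downstream nodes; durations: node -> run time.
    Returns (makespan, per-node start times)."""
    preds_left = {n: 0 for n in graph}
    for n, succs in graph.items():
        for s in succs:
            preds_left[s] += 1
    ready = deque(n for n, c in preds_left.items() if c == 0)
    proc_free = [0.0] * num_procs            # time at which each processor frees up
    node_done, start = {}, {}
    while ready:
        n = ready.popleft()
        earliest = max((node_done[p] for p, ss in graph.items() if n in ss),
                       default=0.0)          # all inputs available
        proc = min(range(num_procs), key=lambda i: proc_free[i])
        start[n] = max(earliest, proc_free[proc])
        node_done[n] = start[n] + durations[n]
        proc_free[proc] = node_done[n]
        for s in graph[n]:
            preds_left[s] -= 1
            if preds_left[s] == 0:
                ready.append(s)
    return max(node_done.values()), start

# Hypothetical four-node flow graph (edges point downstream).
g = {"read": ["filter"], "filter": ["fft", "log"], "fft": [], "log": []}
d = {"read": 1.0, "filter": 2.0, "fft": 3.0, "log": 0.5}
makespan, starts = macro_dataflow_run(g, d, num_procs=2)
```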
XPRESS: eXascale PRogramming Environment and System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brightwell, Ron; Sterling, Thomas; Koniges, Alice
The XPRESS Project is one of four major projects of the DOE Office of Science Advanced Scientific Computing Research X-stack Program initiated in September, 2012. The purpose of XPRESS is to devise an innovative system software stack to enable practical and useful exascale computing around the end of the decade with near-term contributions to efficient and scalable operation of trans-Petaflops performance systems in the next two to three years; both for DOE mission-critical applications. To this end, XPRESS directly addresses critical challenges in computing of efficiency, scalability, and programmability through introspective methods of dynamic adaptive resource management and task scheduling.
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
NASA Astrophysics Data System (ADS)
Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying
Management of group decision-making is an important issue in water resource management. In order to overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision-making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model can solve the problem of poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the problem of coordinating the limited-resources-based group decision-making process can be solved based on the effectiveness of distance-based group conflict resolution. The simulation results show that the proposed model has better convergence than the existing models.
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
SUMO: operation and maintenance management web tool for astronomical observatories
NASA Astrophysics Data System (ADS)
Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María Luisa
2014-08-01
SUMO is an Operation and Maintenance Management web tool that allows managing the operation and maintenance activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: information repository, assets and stock control, task scheduler, executed-tasks archive, configuration and anomaly control and notification, and user management. The information needed to operate and maintain the system must be initially stored in the tool database. SUMO shall automatically schedule the periodical tasks and facilitate the searching and programming of the non-periodical tasks. Task planning can be visualized in different formats and dynamically edited to be adjusted to the available resources, anomalies, dates, and other constraints that can arise during daily operation. SUMO shall provide warnings to the users, notifying potential conflicts related to the required personnel availability or the spare stock for the scheduled tasks. To conclude, SUMO has been designed as a tool to help during the operation management of a scientific facility, and in particular an astronomical observatory. This is done by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks, and time constraints.
NASA Technical Reports Server (NTRS)
1993-01-01
The Marshall Space Flight Center is responsible for the development and management of advanced launch vehicle propulsion systems, including the Space Shuttle Main Engine (SSME), which is presently operational, and the Space Transportation Main Engine (STME) under development. The SSME's provide high performance within stringent constraints on size, weight, and reliability. Based on operational experience, continuous design improvement is in progress to enhance system durability and reliability. Specialized data analysis and interpretation is required in support of SSME and advanced propulsion system diagnostic evaluations. Comprehensive evaluation of the dynamic measurements obtained from test and flight operations is necessary to provide timely assessment of the vibrational characteristics indicating the operational status of turbomachinery and other critical engine components. Efficient performance of this effort is critical due to the significant impact of dynamic evaluation results on ground test and launch schedules, and requires direct familiarity with SSME and derivative systems, test data acquisition, and diagnostic software. Detailed analysis and evaluation of dynamic measurements obtained during SSME and advanced system ground test and flight operations was performed including analytical/statistical assessment of component dynamic behavior, and the development and implementation of analytical/statistical models to efficiently define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational condition. In addition, the SSME and J-2 data will be applied to develop vibroacoustic environments for advanced propulsion system components, as required. This study will provide timely assessment of engine component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. This contract will be performed through accomplishment of negotiated task orders.
NASA Technical Reports Server (NTRS)
Davis, Randal; Thalman, Nancy
1993-01-01
The University of Colorado's Laboratory for Atmospheric and Space Physics (CU/LASP), along with the Goddard Space Flight Center (GSFC) and the Jet Propulsion Laboratory (JPL), designed, implemented, tested, and demonstrated a prototype of the distributed, hierarchical planning and scheduling system contemplated for the Earth Observing System (EOS) project. The planning and scheduling prototype made use of existing systems: CU/LASP's Operations and Science Instrument Support Planning and Scheduling (OASIS-PS) software package; GSFC's Request Oriented Scheduling Engine (ROSE); and JPL's Plan Integrated Timeliner 2 (Plan-It-2). Using these tools, four scheduling nodes were implemented and tied together using a new communications protocol for scheduling applications called the Scheduling Applications Interface Language (SAIL). An extensive and realistic scenario of EOS satellite operations was then developed, and the prototype scheduling system was tested and demonstrated using the scenario. Two demonstrations of the system were given to NASA personnel and EOS core system (ECS) contractor personnel. A comprehensive volume of lessons learned was generated, and a meeting was held with NASA and ECS representatives to review these lessons learned. A paper and presentation on the project's final results were given at the American Institute of Aeronautics and Astronautics Computing in Aerospace 9 conference.
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.
1986-01-01
A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
The R-Shell approach - Using scheduling agents in complex distributed real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre
1993-01-01
Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
Automated wind load characterization of wind turbine structures by embedded model updating
NASA Astrophysics Data System (ADS)
Swartz, R. Andrew; Zimmerman, Andrew T.; Lynch, Jerome P.
2010-04-01
The continued development of renewable energy resources is essential for the nation to limit its carbon footprint and to enjoy independence in energy production. Key to that effort are reliable generators of renewable energy sources that are economically competitive with legacy sources. In the area of wind energy, a major contributor to the cost of implementation is large uncertainty regarding the condition of wind turbines in the field due to a lack of information about loading, dynamic response, and the fraction of the structure's fatigue life already expended. Under favorable circumstances, this uncertainty leads to overly conservative designs and maintenance schedules. Under unfavorable circumstances, it leads to inadequate maintenance schedules, damage to electrical systems, or even structural failure. Low-cost wireless sensors can provide more certainty for stakeholders by measuring the dynamic response of the structure to loading, estimating the fatigue state of the structure, and extracting loading information from the structural response without the need for an upwind instrumentation tower. This study presents a method for using wireless sensor networks to estimate the spectral properties of wind turbine tower loading based on the measured response and some rudimentary knowledge of the structure. Structural parameters are estimated via model updating in the frequency domain to produce an identification of the system. The updated structural model and the measured output spectra are then used to estimate the input spectra. Laboratory results are presented indicating accurate load characterization.
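For a single response channel, the final step described above, estimating the input spectrum from the measured output spectrum and the updated model, reduces to dividing the response power spectrum by the squared magnitude of the model's frequency response. The sketch below assumes a hypothetical single-degree-of-freedom model; the paper's method applies model updating to the full tower structure.

```python
import numpy as np

def estimate_input_spectrum(freqs, output_psd, m, c, k):
    """Back out the load spectrum from a response spectrum and an updated
    single-degree-of-freedom model: S_ff = S_xx / |H(w)|^2, where
    H(w) = 1 / (k - m w^2 + i c w). Parameters m, c, k come from the
    (hypothetical) updated structural identification."""
    w = 2.0 * np.pi * np.asarray(freqs)
    H = 1.0 / (k - m * w ** 2 + 1j * c * w)
    return np.asarray(output_psd) / np.abs(H) ** 2

# Hypothetical updated parameters and a placeholder measured response PSD.
f = np.linspace(0.1, 5.0, 50)
s_xx = np.exp(-f)
s_ff = estimate_input_spectrum(f, s_xx, m=2.0e4, c=1.5e4, k=8.0e6)
```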
Scheduling Real-Time Mixed-Criticality Jobs
NASA Astrophysics Data System (ADS)
Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen
Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems, that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We demonstrate first the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter of the two is superior to the former. We also show that the speedup factors are tight for these two techniques.
Non preemptive soft real time scheduler: High deadline meeting rate on overload
NASA Astrophysics Data System (ADS)
Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed
2015-05-01
While preemptive scheduling has gained more attention among researchers, recent work on non-preemptive scheduling has shown promising results for soft real-time job scheduling. In this paper we present a non-preemptive scheduling algorithm meant for soft real-time applications, which is capable of producing better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results compared to other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs using purely dynamic parameters produces better performance.
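For context on the baseline that gutEDF and gEDF are compared against, the sketch below implements plain non-preemptive EDF: released jobs run to completion in order of earliest deadline. The job tuples are hypothetical, and the grouping mechanism of gutEDF is deliberately not reproduced here.

```python
import heapq

def non_preemptive_edf(jobs):
    """Run released jobs to completion in earliest-deadline-first order.
    Each job is (release, execution_time, deadline). Returns the number
    of deadlines met. Baseline only; not the gutEDF or gEDF variants."""
    jobs = sorted(jobs)                      # by release time
    ready, t, met, i = [], 0.0, 0, 0
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:
            r, c, d = jobs[i]
            heapq.heappush(ready, (d, c, r))
            i += 1
        if not ready:                        # idle until the next release
            t = jobs[i][0]
            continue
        d, c, r = heapq.heappop(ready)       # earliest deadline first
        t += c                               # runs without preemption
        if t <= d:
            met += 1
    return met

met = non_preemptive_edf([(0, 3, 5), (1, 2, 4), (2, 1, 9)])
```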
Research on the ITOC based scheduling system for ship piping production
NASA Astrophysics Data System (ADS)
Li, Rui; Liu, Yu-Jun; Hamada, Kunihiro
2010-12-01
Manufacturing of ship piping systems is one of the major production activities in shipbuilding. The schedule of pipe production has an important impact on the master schedule of shipbuilding. In this research, the ITOC concept was introduced to solve the scheduling problems of a piping factory, and an intelligent scheduling system was developed. The system, in which a product model, an operation model, a factory model, and a knowledge database of piping production were integrated, automated the planning process and production scheduling. Details of the above points were discussed. Moreover, an application of the system in a piping factory, which achieved a higher level of performance as measured by tardiness, lead time, and inventory, was demonstrated.
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, the application of multi-objective dynamic scheduling problems and simple priority rules is widely studied. Although simple rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules show very suitable performance because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is the minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. It is clear from the experimental results that the composite dispatching rules formed by genetic programming perform better in minimizing mean flow time and mean tardiness than the others.
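A composite dispatching rule of the kind genetic programming might evolve can be evaluated as an ordinary priority function over the waiting jobs at a machine. The expression and job fields below are illustrative only, not one of the four published rules.

```python
def composite_priority(job, now):
    """Illustrative composite dispatching rule (not from the paper):
    combine slack, processing time, and setup time into one index.
    Lower value = dispatch first."""
    slack = job["due"] - now - job["proc_time"]
    return slack + 0.5 * job["proc_time"] + 2.0 * job["setup_time"]

def pick_next(queue, now):
    """Choose the waiting job with the best (lowest) composite index."""
    return min(queue, key=lambda j: composite_priority(j, now))

# Hypothetical queue at a machine at time 5.
queue = [{"id": "A", "due": 20, "proc_time": 6, "setup_time": 1},
         {"id": "B", "due": 12, "proc_time": 4, "setup_time": 3}]
nxt = pick_next(queue, now=5)
```

In a genetic programming framework, the body of `composite_priority` is exactly what evolves; the simulation model then scores each candidate expression on mean flow time and mean tardiness.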
2007-12-01
except for the dive zero time, which needed to be programmed during the cruise when the deployment schedule dates were confirmed. ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November
A model-based gain scheduling approach for controlling the common-rail system for GDI engines
NASA Astrophysics Data System (ADS)
di Gaeta, Alessandro; Montanaro, Umberto; Fiengo, Giovanni; Palladino, Angelo; Giglio, Veniero
2012-04-01
The progressive reduction in permitted vehicle emissions has forced the automotive industry to invest in research for developing alternative and more efficient control strategies. All control features and resources are permanently active in an electronic control unit (ECU), ensuring the best performance with respect to emissions, fuel economy, driveability and diagnostics, independently of the engine working point. In this context, a considerable step forward has been achieved with common-rail technology, which has made it possible to vary the injection pressure over the entire engine speed range. As a consequence, the injection of a fixed amount of fuel is more precise and multiple injections in a combustion cycle can be made. In this article, a novel gain-scheduling pressure controller for gasoline direct injection (GDI) engines is designed to stabilise the mean fuel pressure in the rail and to track demanded pressure trajectories. By exploiting a simple control-oriented model describing the mean pressure dynamics in the rail, the control structure turns out to be simple enough to be effectively implemented in commercial ECUs. Experimental results in a wide range of operating points confirm the effectiveness of the proposed control method at taming the mean-value pressure dynamics of the plant, showing good accuracy and robustness with respect to unavoidable parameter uncertainties, unmodelled dynamics, and hidden coupling terms.
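Gain scheduling of the general kind described can be illustrated by interpolating controller gains against an operating-point variable such as engine speed. The breakpoints, gains, and PI structure below are invented placeholders for illustration, not the paper's model-based design or calibration.

```python
import bisect

class GainScheduledPI:
    """PI controller whose gains are interpolated from a lookup table
    keyed on an operating-point variable (e.g., engine speed in rpm).
    Breakpoints and gains are illustrative, not calibrated values."""

    def __init__(self, breakpoints, kp_table, ki_table):
        self.bp, self.kp_t, self.ki_t = breakpoints, kp_table, ki_table
        self.integral = 0.0

    def _interp(self, table, x):
        i = bisect.bisect_left(self.bp, x)
        if i == 0:
            return table[0]
        if i == len(self.bp):
            return table[-1]
        x0, x1 = self.bp[i - 1], self.bp[i]
        w = (x - x0) / (x1 - x0)
        return (1 - w) * table[i - 1] + w * table[i]

    def update(self, pressure_error, engine_speed, dt):
        kp = self._interp(self.kp_t, engine_speed)
        ki = self._interp(self.ki_t, engine_speed)
        self.integral += pressure_error * dt
        return kp * pressure_error + ki * self.integral

ctrl = GainScheduledPI(breakpoints=[800, 2000, 4000, 6000],
                       kp_table=[0.8, 0.6, 0.45, 0.3],
                       ki_table=[2.0, 1.5, 1.0, 0.7])
u = ctrl.update(pressure_error=5.0e5, engine_speed=2500, dt=0.01)
```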
Energy-Efficient Scheduling for Hybrid Tasks in Control Devices for the Internet of Things
Gao, Zhigang; Wu, Yifan; Dai, Guojun; Xia, Haixia
2012-01-01
In control devices for the Internet of Things (IoT), energy is one of the critical restriction factors. Dynamic voltage scaling (DVS) has been proved to be an effective method for reducing the energy consumption of processors. This paper proposes an energy-efficient scheduling algorithm for IoT control devices with hard real-time control tasks (HRCTs) and soft real-time tasks (SRTs). The main contribution of this paper includes two parts. First, it builds the Hybrid tasks with multi-subtasks of different function Weight (HoW) task model for IoT control devices. HoW describes the structure of HRCTs and SRTs, and their properties, e.g., deadlines, execution time, preemption properties, and energy-saving goals, etc. Second, it presents the Hybrid Tasks' Dynamic Voltage Scaling (HTDVS) algorithm. HTDVS first sets the slowdown factors of subtasks while meeting the different real-time requirements of HRCTs and SRTs, and then dynamically reclaims, reserves, and reuses the slack time of the subtasks to meet their ideal energy-saving goals. Experimental results show that HTDVS can reduce energy consumption by about 10%–80% while meeting the real-time requirements of HRCTs, that HRCTs help to reduce the deadline miss ratio (DMR) of systems, and that HTDVS has comparable performance with the greedy algorithm and is more effective at keeping the subtasks' ideal speeds. PMID:23112659
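The slack-reclamation idea behind DVS schemes like HTDVS can be shown with a toy loop that lowers the next subtask's frequency whenever an earlier subtask finishes ahead of its worst case. This is a simplified sketch with hypothetical subtask tuples, not the published HTDVS algorithm or its HoW model.

```python
def dvs_slack_reclaim(subtasks, f_min=0.3):
    """Toy slack reclamation under dynamic voltage scaling. Each subtask
    is (worst_case_time, actual_time) at full speed; when one finishes
    early, the freed slack slows the next subtask down (never below
    f_min). Returns the list of frequencies used (1.0 = full speed)."""
    freqs, slack = [], 0.0
    for wcet, actual in subtasks:
        budget = wcet + slack                 # time we may spend on this subtask
        f = max(f_min, min(1.0, wcet / budget))
        freqs.append(round(f, 3))
        spent = actual / f                    # execution time at the scaled speed
        slack = budget - spent                # unused time carries over
    return freqs

freqs = dvs_slack_reclaim([(10, 6), (8, 8), (5, 2)])
```

Lower frequencies translate into roughly cubic power savings on CMOS processors, which is why reclaiming even small amounts of slack matters.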
Exploring Machine Learning Techniques For Dynamic Modeling on Future Exascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shuaiwen; Tallent, Nathan R.; Vishnu, Abhinav
2013-09-23
Future exascale systems must be optimized for both power and performance at scale in order to achieve DOE's goal of a sustained petaflop within 20 Megawatts by 2022 [1]. Massive parallelism of the future systems combined with complex memory hierarchies will form a barrier to efficient application and architecture design. These challenges are exacerbated with emerging complex architectures such as GPGPUs and Intel Xeon Phi, as parallelism increases by orders of magnitude and system power consumption can easily triple or quadruple. Therefore, we need techniques that can reduce the search space for optimization, isolate power-performance bottlenecks, identify root causes for software/hardware inefficiency, and effectively direct runtime scheduling.
Experience and results of the 1991 MTLRS-1 USSR campaign
NASA Technical Reports Server (NTRS)
Sperber, Peter; Hauck, H.
1993-01-01
The year 1991 was a special year for the mobile laser ranging systems. Due to the scheduled upgrades of the Modular Transportable Laser Ranging Systems, MTLRS#1 and MTLRS#2, neither a WEGENER MEDLAS nor a Crustal Dynamics Project campaign was carried out in 1991. After the successful upgrade of MTLRS#1 in the first half of 1991, the system departed from Wettzell in August to make measurements at two sites in the USSR. In Riga/Latvia, we operated close to the fixed SLR system. In Simeiz/Ukraine, the MTLRS#1 pad site was chosen to allow collocation with the two fixed SLR stations in Simeiz (300 m distance to MTLRS#1) and Kazivelli (about 3 km distance).
QoS-Oriented High Dynamic Resource Allocation in Vehicular Communication Networks
2014-01-01
Vehicular ad hoc networks (VANETs) are emerging as a new research area and attracting increasing attention from both industry and research communities. In this context, a dynamic resource allocation policy that maximizes the use of available resources and meets the quality of service (QoS) requirements of constraining applications is proposed. It is a combination of a fair packet scheduling policy and a new adaptive QoS-oriented call admission control (CAC) scheme based on the vehicle density variation. This scheme decides whether a connection request is to be admitted into the system, while providing fair access and guaranteeing the desired throughput. The proposed algorithm showed good performance when tested in a real-world environment. PMID:24616639
Study on perception and control layer of mine CPS with mixed logic dynamic approach
NASA Astrophysics Data System (ADS)
Li, Jingzhao; Ren, Ping; Yang, Dayu
2017-01-01
The mine inclined-roadway transportation system of a mine cyber-physical system is a hybrid system consisting of a continuous-time system and a discrete-time system, which can be divided into an inclined-roadway signal subsystem, error-proofing channel subsystems, anti-car subsystems, and frequency control subsystems. First, to ensure stable operation and improve efficiency and production safety, a hybrid system model with n inputs and m outputs is constructed and analyzed in detail, and its steady scheduling state is then solved. Second, on the basis of formal modeling of real-time systems, we use a hybrid toolbox for system security verification. Third, the practical application to the mine cyber-physical system shows that the method for real-time simulation of the mine cyber-physical system is effective.
Dynamic Energy Management System for a Smart Microgrid.
Venayagamoorthy, Ganesh Kumar; Sharma, Ratnesh K; Gautam, Prajwal K; Ahmadi, Afshin
2016-08-01
This paper presents the development of an intelligent dynamic energy management system (I-DEMS) for a smart microgrid. An evolutionary adaptive dynamic programming and reinforcement learning framework is introduced for evolving the I-DEMS online. The I-DEMS is an optimal or near-optimal DEMS capable of performing grid-connected and islanded microgrid operations. The primary sources of energy are sustainable, green, and environmentally friendly renewable energy systems (RESs), e.g., wind and solar; however, these forms of energy are uncertain and nondispatchable. Backup battery energy storage and thermal generation were used to overcome these challenges. Using the I-DEMS to schedule dispatches allowed the RESs and energy storage devices to be utilized to their maximum in order to supply the critical load at all times. Based on the microgrid's system states, the I-DEMS generates energy dispatch control signals, while a forward-looking network evaluates the dispatched control signals over time. Typical results are presented for varying generation and load profiles, and the performance of I-DEMS is compared with that of a decision tree approach-based DEMS (D-DEMS). The robust performance of the I-DEMS was illustrated by examining microgrid operations under different battery energy storage conditions.
NASA Technical Reports Server (NTRS)
Johnson, Charles S.
1986-01-01
The embedded systems running real-time applications, for which Ada was designed, require their own mechanisms for the management of dynamically allocated storage. There is a need for packages which manage their own internal structures and control their deallocation as well, due to the performance implications of garbage collection by the KAPSE. This places a requirement upon the design of generic packages which manage generically structured private types built up from application-defined input types. These kinds of generic packages should figure greatly in the development of lower-level software such as operating systems, schedulers, controllers, and device drivers, and will manage structures such as queues, stacks, linked lists, files, and binary and multary (hierarchical) trees. A study was made of the use of limited private types, which are controlled to prevent the inadvertent de-designation of dynamic elements that is implicit in the assignment operation, in solving the problems of controlling the accumulation of anonymous, detached objects in running systems. The use of deallocator procedures for run-down of application-defined input types during deallocation operations is also considered.
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computing support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale, third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Satellite image collection optimization
NASA Astrophysics Data System (ADS)
Martin, William
2002-09-01
Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time-dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of "what if" adjustments to an image collection plan. Used for both long-range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.
USDA-ARS?s Scientific Manuscript database
Treatment schedules to maintain low levels of Varroa mites in honey bee colonies were tested in hives started from either package bees or splits of larger colonies. The schedules were developed based on predictions of Varroa population growth generated from a mathematical model of honey bee colony ...
Energy-saving framework for passive optical networks with ONU sleep/doze mode.
Van, Dung Pham; Valcarenghi, Luca; Dias, Maluge Pubuduni Imali; Kondepu, Koteswararao; Castoldi, Piero; Wong, Elaine
2015-02-09
This paper proposes an energy-saving passive optical network framework (ESPON) that aims to incorporate optical network unit (ONU) sleep/doze mode into dynamic bandwidth allocation (DBA) algorithms to reduce ONU energy consumption. In the ESPON, the optical line terminal (OLT) schedules both downstream (DS) and upstream (US) transmissions in the same slot in an online and dynamic fashion whereas the ONU enters sleep mode outside the slot. The ONU sleep time is maximized based on both DS and US traffic. Moreover, during the slot, the ONU might enter doze mode when only its transmitter is idle to further improve energy efficiency. The scheduling order of data transmission, control message exchange, sleep period, and doze period defines an energy-efficient scheme under the ESPON. Three schemes are designed and evaluated in an extensive FPGA-based evaluation. Results show that whilst all the schemes significantly save ONU energy for different evaluation scenarios, the scheduling order has great impact on their performance. In addition, the ESPON allows for a scheduling order that saves ONU energy independently of the network reach.
Achieving reutilization of scheduling software through abstraction and generalization
NASA Technical Reports Server (NTRS)
Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael
1995-01-01
Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.
Generically Used Expert Scheduling System (GUESS): User's Guide Version 1.0
NASA Technical Reports Server (NTRS)
Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira
1996-01-01
This user's guide contains instructions explaining how to best operate the program GUESS, a generic expert scheduling system. GUESS incorporates several important features for a generic scheduler, including automatic scheduling routines to generate a 'first' schedule for the user, a user interface that includes Gantt charts and enables the human scheduler to manipulate schedules manually, diagnostic report generators, and a variety of scheduling techniques. The current version of GUESS runs on an IBM PC or compatible in the Windows 3.1 or Windows '95 environment.
Distributed intelligent scheduling of FMS
NASA Astrophysics Data System (ADS)
Wu, Zuobao; Cheng, Yaodong; Pan, Xiaohong
1995-08-01
In this paper, a distributed scheduling approach for a flexible manufacturing system (FMS) is presented. A new class of Petri nets, called networked time Petri nets (NTPN), for system modeling in a networking environment is proposed. The distributed intelligent scheduling is implemented by three schedulers which combine NTPN models with expert system techniques. The simulation results are shown.
NASA Technical Reports Server (NTRS)
Krupp, Joseph C.
1991-01-01
The Electric Power Control System (EPCS) created by Decision-Science Applications, Inc. (DSA) for the Lewis Research Center is discussed. This system makes decisions on what to schedule and when to schedule it, including making choices among various options or ways of performing a task. The system is goal-directed and seeks to shape resource usage in an optimal manner using a value-driven approach. Discussed here are considerations governing what makes a good schedule, how to design a value function to find the best schedule, and how to design the algorithm that finds the schedule that maximizes this value function. Results are shown which demonstrate the usefulness of the techniques employed.
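Value-driven scheduling of the kind described amounts to scoring each candidate schedule with a value function and keeping the maximizer. The sketch below uses a hypothetical value function that rewards task value and penalizes power drawn above a cap; it is not the EPCS value function, only an illustration of the idea.

```python
def best_schedule(candidates, value_fn):
    """Value-driven selection: score every candidate schedule with a
    user-supplied value function and keep the maximiser."""
    return max(candidates, key=value_fn)

def example_value(schedule):
    # Hypothetical value function: reward completed task value,
    # penalise power drawn above a 100-unit cap.
    value = sum(t["value"] for t in schedule)
    power = sum(t["power"] for t in schedule)
    return value - 0.1 * max(0.0, power - 100.0)

# Two hypothetical candidate schedules (task options already chosen).
plans = [[{"value": 5, "power": 60}, {"value": 3, "power": 70}],
         [{"value": 5, "power": 60}, {"value": 2, "power": 20}]]
chosen = best_schedule(plans, example_value)
```

The design question the abstract raises, how to shape the value function so that maximizing it produces "good" resource usage, is exactly what the penalty term stands in for here.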
Automated Platform Management System Scheduling
NASA Technical Reports Server (NTRS)
Hull, Larry G.
1990-01-01
The Platform Management System was established to coordinate the operation of platform systems and instruments. The management functions are split between ground and space components. Since platforms are to be out of contact with the ground more than the manned base, the on-board functions are required to be more autonomous than those of the manned base. Under this concept, automated replanning and rescheduling, including on-board real-time schedule maintenance and schedule repair, are required to effectively and efficiently meet Space Station Freedom mission goals. In a FY88 study, we developed several promising alternatives for automated platform planning and scheduling. We recommended both a specific alternative and a phased approach to automated platform resource scheduling. Our recommended alternative was based upon use of exactly the same scheduling engine in both ground and space components of the platform management system. Our phased approach recommendation was based upon evolutionary development of the platform. In the past year, we developed platform scheduler requirements and implemented a rapid prototype of a baseline platform scheduler. Presently we are rehosting this platform scheduler rapid prototype and integrating the scheduler prototype into two Goddard Space Flight Center testbeds, as the ground scheduler in the Scheduling Concepts, Architectures, and Networks Testbed and as the on-board scheduler in the Platform Management System Testbed. Using these testbeds, we will investigate rescheduling issues, evaluate operational performance and enhance the platform scheduler prototype to demonstrate our evolutionary approach to automated platform scheduling. The work described in this paper was performed prior to Space Station Freedom rephasing, transfer of platform responsibility to Code E, and other recently discussed changes. We neither speculate on these changes nor attempt to predict the impact of the final decisions. As a consequence some of our work and results may be outdated when this paper is published.
A hybrid job-shop scheduling system
NASA Technical Reports Server (NTRS)
Hellingrath, Bernd; Robbach, Peter; Bayat-Sarmadi, Fahid; Marx, Andreas
1992-01-01
The intention of the scheduling system developed at the Fraunhofer-Institute for Material Flow and Logistics is the support of a scheduler working in a job-shop. Due to the existing requirements for a job-shop scheduling system the usage of flexible knowledge representation and processing techniques is necessary. Within this system the attempt was made to combine the advantages of symbolic AI-techniques with those of neural networks.
Job Scheduling Under the Portable Batch System
NASA Technical Reports Server (NTRS)
Henderson, Robert L.; Woodrow, Thomas S. (Technical Monitor)
1995-01-01
The typical batch queuing system schedules jobs for execution by a set of queue controls. The controls determine from which queues jobs may be selected. Within a queue, jobs are ordered first-in, first-run. This limits the set of scheduling policies available to a site. The Portable Batch System removes this limitation by providing an external scheduling module. This separate program has full knowledge of the available queued jobs, running jobs, and system resource usage. Sites are able to implement any policy expressible in one of several procedural languages. Policies may range from "best fit" to "fair share" to purely political. Scheduling decisions can be made over the full set of jobs regardless of queue or order. The scheduling policy can be changed to fit a wide variety of computing environments and scheduling goals. This is demonstrated by the use of PBS on an IBM SP-2 system at NASA Ames.
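An external scheduling module of the kind PBS provides can be approximated by a policy pass that looks at every queued job, regardless of queue or submission order, and picks which jobs to start. The job fields, the free-node argument, and the policy names below are hypothetical; the real PBS scheduler communicates with the server over its own interface rather than through this function.

```python
def pick_jobs(queued_jobs, free_nodes, policy="best_fit"):
    """Stand-in for one external-scheduler pass: order all queued jobs
    according to the chosen site policy, then start those that fit in
    the currently free nodes. Purely illustrative of the PBS concept."""
    if policy == "best_fit":
        order = sorted(queued_jobs, key=lambda j: -j["nodes"])   # biggest jobs first
    elif policy == "fair_share":
        order = sorted(queued_jobs, key=lambda j: j["user_usage"])
    else:
        order = list(queued_jobs)                                # FIFO fallback
    started = []
    for job in order:
        if job["nodes"] <= free_nodes:
            started.append(job["id"])
            free_nodes -= job["nodes"]
    return started

run_now = pick_jobs([{"id": 1, "nodes": 64, "user_usage": 0.7},
                     {"id": 2, "nodes": 16, "user_usage": 0.1},
                     {"id": 3, "nodes": 32, "user_usage": 0.4}],
                    free_nodes=80)
```

Because the policy is an ordinary program rather than a queue ordering, a site can swap it out, or make it as political as it likes, without touching the batch system itself.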
2007-06-01
introduces ASC-U's approach for solving the dynamic UAV allocation problem. (Figure 6: Assignments Dynamics Example (after); Figure 7: ASC-U Dynamic Cueing.) ...decisions in order to respond to the dynamic environment they face. Thus, to succeed, the Army's transformation cannot rely
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-07-08
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentation of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.
Laprise, Jean-François; Markowitz, Lauri E; Chesson, Harrell W; Drolet, Mélanie; Brisson, Marc
2016-09-01
A recent clinical trial using the 9-valent human papillomavirus virus (HPV) vaccine has shown that antibody responses after 2 doses are noninferior to those after 3 doses, suggesting that 2 and 3 doses may have comparable vaccine efficacy. We used an individual-based transmission-dynamic model to compare the population-level effectiveness and cost-effectiveness of 2- and 3-dose schedules of 9-valent HPV vaccine in the United States. Our model predicts that if 2 doses of 9-valent vaccine protect for ≥20 years, the additional benefits of a 3-dose schedule are small as compared to those of 2-dose schedules, and 2-dose schedules are likely much more cost-efficient than 3-dose schedules. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
48 CFR 245.606 - Inventory schedules.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 48, Federal Acquisition Regulations System, Section 245.606 (Defense Acquisition Regulations System, Contractor Inventory): Inventory schedules.
Interactive Dynamic Mission Scheduling for ASCA
NASA Astrophysics Data System (ADS)
Antunes, A.; Nagase, F.; Isobe, T.
The Japanese X-ray astronomy satellite ASCA (Advanced Satellite for Cosmology and Astrophysics) mission requires scheduling for each 6-month observation phase, further broken down into weekly schedules at a few minutes resolution. Two tools, SPIKE and NEEDLE, written in Lisp and C, use artificial intelligence (AI) techniques combined with a graphic user interface for fast creation and alteration of mission schedules. These programs consider viewing and satellite attitude constraints as well as observer-requested criteria and present an optimized set of solutions for review by the planner. Six-month schedules at 1 day resolution are created for an oversubscribed set of targets by the SPIKE software, originally written for HST and presently being adapted for EUVE, XTE and AXAF. The NEEDLE code creates weekly schedules at 1 min resolution using in-house orbital routines and creates output for processing by the command generation software. Schedule creation on both the long- and short-term scale is rapid, less than 1 day for long-term, and one hour for short-term.
Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks
Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan
2017-01-01
Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H2RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller. PMID:28672856
FASTER - A tool for DSN forecasting and scheduling
NASA Technical Reports Server (NTRS)
Werntz, David; Loyola, Steven; Zendejas, Silvino
1993-01-01
FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user friendly. FASTER makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not as yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended and increasing the quality of analysis.
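The two-pass scheme sketched above lends itself to a small illustration. The following is an assumed, simplified stand-in, not FASTER's code: the data shapes are invented, and the placement step here is a greedy rule standing in for the dynamic programming optimization the abstract describes. Pass one builds a predicted utilization profile over time slots; pass two uses that profile as a cost so the scheduler can "look ahead" at not-yet-scheduled activity.

```python
from collections import defaultdict

def utilization_profile(requests, horizon):
    """Pass 1: expected number of competing requests per time slot."""
    profile = defaultdict(float)
    for req in requests:
        weight = 1.0 / len(req["feasible_slots"])   # spread demand over options
        for slot in req["feasible_slots"]:
            if slot < horizon:
                profile[slot] += weight
    return profile

def place_requests(requests, horizon):
    """Pass 2: place each request in its cheapest (least congested) free slot."""
    profile = utilization_profile(requests, horizon)
    assigned, occupied = {}, set()
    for req in sorted(requests, key=lambda r: len(r["feasible_slots"])):
        free = [s for s in req["feasible_slots"] if s not in occupied]
        if free:
            slot = min(free, key=lambda s: profile[s])
            assigned[req["id"]] = slot
            occupied.add(slot)
    return assigned

# Example: three antenna requests competing over a 4-slot horizon.
reqs = [{"id": "A", "feasible_slots": [0, 1]},
        {"id": "B", "feasible_slots": [1, 2, 3]},
        {"id": "C", "feasible_slots": [1]}]
print(place_requests(reqs, horizon=4))   # {'C': 1, 'A': 0, 'B': 2}
```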
Application of decomposition techniques to the preliminary design of a transport aircraft
NASA Technical Reports Server (NTRS)
Rogan, J. E.; Kolb, M. A.
1987-01-01
A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.
2004-06-22
KENNEDY SPACE CENTER, FLA. - At Space Launch Complex 2 on North Vandenberg Air Force Base, Calif., the Aura spacecraft is lifted up the mobile service tower, or gantry. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard the Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
2004-07-13
VANDENBERG AFB, CALIF. - The Aura spacecraft atop its Boeing Delta II launch vehicle sits on NASA’s Space Complex 2 at Vandenberg Air Force Base in California waiting to launch. Liftoff is now scheduled for no earlier than July 14. The latest in the Earth Observing System (EOS) series, Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change. [Photo by Bill Ingalls/NASA]
2004-06-22
KENNEDY SPACE CENTER, FLA. - At Space Launch Complex 2 on North Vandenberg Air Force Base, Calif., the Aura spacecraft is prepared for its lift up the mobile service tower, or gantry. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard the Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
2004-06-22
KENNEDY SPACE CENTER, FLA. - At Space Launch Complex 2 on North Vandenberg Air Force Base, Calif., the Aura spacecraft arrives at the base of the mobile service tower, or gantry. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard the Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
Support of the Laboratory for Terrestrial Physics for Dynamics of the Solid Earth (DOSE)
NASA Technical Reports Server (NTRS)
Vandenberg, Nancy R.; Ma, C. (Technical Monitor)
2001-01-01
This final report summarizes the accomplishments during the contract period. Under the contract NVI, Inc. provided support to the VLBI group at NASA's Goddard Space Flight Center. The contract covered a period of approximately eight years during which geodetic and astrometric VLBI evolved through several major changes. This report is divided into four sections which correspond to major task areas in the contract: A) Coordination and Scheduling, B) Field System, C) Station Support, and D) Analysis and Research and Development.
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact in modern modeling and simulation systems especially in diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and their surrounding entities. However, none of the existing building damage simulation systems sufficiently faithfully realize the criteria of realism required for effective military applications. In this paper, we present a novel physics-based high-fidelity and runtime efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component based object representation to describe scene entities and highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
Dynamic Scheduling for Web Monitoring Crawler
2009-02-27
This project focuses mainly on event-driven scheduling for a Web monitoring crawler rather than on existing static scheduling methods, and on pages retrieved from public search engines. The research aims to propose various query generation methods using an MCRDR knowledge base and to evaluate them.
User manual for NASA Lewis 10 by 10 foot supersonic wind tunnel. Revised
NASA Technical Reports Server (NTRS)
Soeder, Ronald H.
1995-01-01
This manual describes the 10- by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center and provides information for users who wish to conduct experiments in this facility. Tunnel performance operating envelopes of altitude, dynamic pressure, Reynolds number, total pressure, and total temperature as a function of test section Mach number are presented. Operating envelopes are shown for both the aerodynamic (closed) cycle and the propulsion (open) cycle. The tunnel test section Mach number range is 2.0 to 3.5. General support systems, such as air systems, hydraulic system, hydrogen system, fuel system, and Schlieren system, are described. Instrumentation and data processing and acquisition systems are also described. Pretest meeting formats and schedules are outlined. Tunnel user responsibility and personnel safety are also discussed.
Intelligent scheduling of execution for customized physical fitness and healthcare system.
Huang, Chung-Chi; Liu, Hsiao-Man; Huang, Chung-Lin
2015-01-01
Physical fitness and health of white collar business persons has been getting worse in recent years. Therefore, it is necessary to develop a system which can enhance physical fitness and health. Although an exercise prescription can be generated after diagnosis in a customized physical fitness and healthcare system, the general scheduling of such a system makes it hard to meet individual execution needs. So the main purpose of this research is to develop intelligent scheduling of execution for a customized physical fitness and healthcare system. The results of diagnosis and prescription for the customized physical fitness and healthcare system are generated by fuzzy logic inference. Then the results of diagnosis and prescription are scheduled and executed by intelligent computing. The schedule of execution is generated using a genetic algorithm, which improves on traditional scheduling of exercise prescriptions for physical fitness and healthcare. Finally, we demonstrate the advantages of the intelligent scheduling of execution for the customized physical fitness and healthcare system.
48 CFR 208.405 - Ordering procedures for Federal Supply Schedules.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 48, Federal Acquisition Regulations System, Section 208.405 (Defense Acquisition Regulations System, Department of Defense; Acquisition Planning; Required Sources of Supplies and Services; Federal Supply Schedules): Ordering procedures for Federal Supply Schedules. In all orders...
Multivariable control of vapor compression systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, X.D.; Liu, S.; Asada, H.H.
1999-07-01
This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low order model to provide physical insight of system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.
Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.
2018-01-01
The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1992-01-01
In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.
Analysis and design of gain scheduled control systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Shamma, Jeff S.
1988-01-01
Gain scheduling, as an idea, is to construct a global feedback control system for a time varying and/or nonlinear plant from a collection of local time invariant designs. However, in the absence of a sound analysis, these designs come with no guarantees on the robustness, performance, or even nominal stability of the overall gain scheduled design. Such an analysis is presented for three types of gain scheduling situations: (1) a linear parameter varying plant scheduling on its exogenous parameters, (2) a nonlinear plant scheduling on a prescribed reference trajectory, and (3) a nonlinear plant scheduling on the current plant output. Conditions are given which guarantee that the stability, robustness, and performance properties of the fixed operating point designs carry over to the global gain scheduled designs, such as the requirement that the scheduling variable should vary slowly and capture the plant's nonlinearities. Finally, an alternate design framework is proposed which removes the slowly varying restriction on gain scheduled systems. This framework addresses some fundamental feedback issues previously ignored in standard gain scheduling.
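The basic mechanism being analyzed can be illustrated with a minimal sketch of generic gain scheduling; the operating points, gains, and PD structure below are invented for illustration and are not from the thesis.

```python
import numpy as np

operating_points = np.array([0.0, 0.5, 1.0])   # values of the scheduling variable
local_gains = np.array([[2.0, 0.3],            # [Kp, Kd] designed at each point
                        [3.5, 0.5],
                        [5.0, 0.8]])

def scheduled_gain(sigma):
    """Interpolate the local fixed-point designs at the current scheduling variable."""
    kp = np.interp(sigma, operating_points, local_gains[:, 0])
    kd = np.interp(sigma, operating_points, local_gains[:, 1])
    return kp, kd

def control(error, error_rate, sigma):
    kp, kd = scheduled_gain(sigma)
    return kp * error + kd * error_rate

print(control(error=0.2, error_rate=-0.1, sigma=0.7))
```

The analysis summarized above asks when such pointwise designs remain stable and performant globally, which is where conditions like slow variation of the scheduling variable enter.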
Schedule-Aware Workflow Management Systems
NASA Astrophysics Data System (ADS)
Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.
Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.
An ex ante control chart for project monitoring using earned duration management observations
NASA Astrophysics Data System (ADS)
Mortaji, Seyed Taha Hossein; Noori, Siamak; Noorossana, Rassoul; Bagherpour, Morteza
2017-12-01
In the past few years, there has been an increasing interest in developing project control systems. The primary purpose of such systems is to indicate whether the actual performance is consistent with the baseline and to produce a signal in the case of non-compliance. Recently, researchers have shown an increased interest in monitoring a project's performance indicators by plotting them on Shewhart-type control charts over time. However, these control charts are fundamentally designed for processes and ignore project-specific dynamics, which can lead to weak results and misleading interpretations. By paying close attention to the project baseline schedule and using statistical foundations, this paper proposes a new ex ante control chart which discriminates between acceptable (as-planned) and non-acceptable (not-as-planned) variations of the project's schedule performance. Such a control chart enables project managers to set more realistic thresholds, leading to better decision making for taking corrective and/or preventive actions. For the sake of clarity, an illustrative example is presented to show how the ex ante control chart is constructed in practice. Furthermore, an experimental investigation has been set up to analyze the performance of the proposed control chart. As expected, the results confirm that, when a project starts to deviate significantly from the project's baseline schedule, the ex ante control chart shows a respectable ability to detect and report the right signals while avoiding false alarms.
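A minimal sketch of the control-charting idea follows, under assumed simplifications: the paper derives its ex ante limits from the baseline schedule analytically, whereas here the limits are simply computed from samples of as-planned variation of a generic schedule performance index.

```python
import statistics

def control_limits(planned_spi_samples, k=3.0):
    """Limits derived from as-planned variation of a schedule performance index."""
    mu = statistics.mean(planned_spi_samples)
    sigma = statistics.stdev(planned_spi_samples)
    return mu - k * sigma, mu + k * sigma

def signal(observed_spi, limits):
    """True means the observed performance falls outside acceptable variation."""
    lcl, ucl = limits
    return observed_spi < lcl or observed_spi > ucl

planned = [1.00, 0.98, 1.02, 0.99, 1.01, 0.97, 1.03]   # illustrative values only
print(signal(0.88, control_limits(planned)))            # True -> investigate
```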
Integrated planning and scheduling for Earth science data processing
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1995-01-01
Several current NASA programs such as the EOSDIS Core System (ECS) have data processing and data management requirements that call for an integrated planning and scheduling capability. In this paper, we describe the experience of applying advanced scheduling technology operationally, in terms of what was accomplished, lessons learned, and what remains to be done in order to achieve similar successes in ECS and other programs. We discuss the importance and benefits of advanced scheduling tools, and our progress toward realizing them, through examples and illustrations based on ECS requirements. The first part of the paper focuses on the Data Archive and Distribution (DADS) V0 Scheduler. We then discuss system integration issues ranging from communication with the scheduler to the monitoring of system events and re-scheduling in response to them. The challenge of adapting the scheduler to domain-specific features and scheduling policies is also considered. Extrapolation to the ECS domain raises issues of integrating scheduling with a product-generation planner (such as PlaSTiC), and implementing conditional planning in an operational system. We conclude by briefly noting ongoing technology development and deployment projects being undertaken by HTC and the ISTB.
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of the soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.
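A toy admission test in the spirit of the coordinated allocation described above follows; the structure and numbers are assumptions, not the IRS heuristic itself. An application is admitted only if every resource it touches can absorb its utilization, and reservations are committed on all resources together.

```python
def admit(app_demands, resource_loads, capacity=1.0):
    """app_demands and resource_loads map resource name -> utilization fraction."""
    # First check every resource; reject without side effects if any would overload.
    for resource, demand in app_demands.items():
        if resource_loads.get(resource, 0.0) + demand > capacity:
            return False
    # Only after all checks pass, reserve on every resource the application uses.
    for resource, demand in app_demands.items():
        resource_loads[resource] = resource_loads.get(resource, 0.0) + demand
    return True

loads = {"cpu": 0.6, "disk": 0.3, "net": 0.5}
print(admit({"cpu": 0.2, "net": 0.3}, loads))    # True; loads are updated
print(admit({"cpu": 0.3, "disk": 0.9}, loads))   # False; nothing reserved
```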
Completable scheduling: An integrated approach to planning and scheduling
NASA Technical Reports Server (NTRS)
Gervasio, Melinda T.; Dejong, Gerald F.
1992-01-01
The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach which integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by allowing runtime information to be taken into consideration when making planning and scheduling decisions. Furthermore, through the criteria of achievability placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.
Zsigraiova, Zdena; Semiao, Viriato; Beijoco, Filipa
2013-04-01
This work proposes an innovative methodology for the reduction of the operation costs and pollutant emissions involved in the waste collection and transportation. Its innovative feature lies in combining vehicle route optimization with that of waste collection scheduling. The latter uses historical data of the filling rate of each container individually to establish the daily circuits of collection points to be visited, which is more realistic than the usual assumption of a single average fill-up rate common to all the system containers. Moreover, this allows for the ahead planning of the collection scheduling, which permits a better system management. The optimization process of the routes to be travelled makes recourse to Geographical Information Systems (GISs) and uses interchangeably two optimization criteria: total spent time and travelled distance. Furthermore, rather than using average values, the relevant parameters influencing fuel consumption and pollutant emissions, such as vehicle speed in different roads and loading weight, are taken into consideration. The established methodology is applied to the glass-waste collection and transportation system of Amarsul S.A., in Barreiro. Moreover, to isolate the influence of the dynamic load on fuel consumption and pollutant emissions a sensitivity analysis of the vehicle loading process is performed. For that, two hypothetical scenarios are tested: one with the collected volume increasing exponentially along the collection path; the other assuming that the collected volume decreases exponentially along the same path. The results evidence unquestionable beneficial impacts of the optimization on both the operation costs (labor and vehicles maintenance and fuel consumption) and pollutant emissions, regardless the optimization criterion used. Nonetheless, such impact is particularly relevant when optimizing for time yielding substantial improvements to the existing system: potential reductions of 62% for the total spent time, 43% for the fuel consumption and 40% for the emitted pollutants. This results in total cost savings of 57%, labor being the greatest contributor, representing over €11,000 per year for the two vehicles collecting glass-waste. Moreover, it is shown herein that the dynamic loading process of the collection vehicle impacts on both the fuel consumption and on pollutant emissions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Approximation algorithms for scheduling unrelated parallel machines with release dates
NASA Astrophysics Data System (ADS)
Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.
2017-01-01
In this paper we propose approaches to optimal scheduling of unrelated parallel machines with release dates. One approach is based on the scheme of dynamic programming modified with adaptive narrowing of the search domain, ensuring its computational effectiveness. We discuss the complexity of exact schedule synthesis and compare it with approximate, close-to-optimal solutions. We also explain how the algorithm works for the example of two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.
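For a problem of the size mentioned above (two unrelated machines, five jobs with release dates), brute-force enumeration of machine assignments is still tractable and makes a useful baseline against which a dynamic programming scheme can be compared. The sketch below is such a baseline under invented job data; it does not implement the paper's dynamic programming or its adaptive narrowing of the search domain.

```python
from itertools import product

jobs = [  # (release date, processing time on machine 0, processing time on machine 1)
    (0, 4, 6), (1, 3, 2), (2, 5, 4), (3, 2, 7), (4, 6, 3),
]

def makespan(assignment):
    """Finish time when each machine runs its assigned jobs in release-date order."""
    finish = [0, 0]
    for j, (release, p0, p1) in enumerate(jobs):   # jobs are listed in release order
        m = assignment[j]
        start = max(finish[m], release)
        finish[m] = start + (p0 if m == 0 else p1)
    return max(finish)

# Enumerate all 2^5 assignments of jobs to the two unrelated machines.
best = min(product([0, 1], repeat=len(jobs)), key=makespan)
print(best, makespan(best))
```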
SLS-PLAN-IT: A knowledge-based blackboard scheduling system for Spacelab life sciences missions
NASA Technical Reports Server (NTRS)
Kao, Cheng-Yan; Lee, Seok-Hua
1992-01-01
The primary scheduling tool in use during the Spacelab Life Sciences (SLS-1) planning phase was the operations research (OR) based, tabular form Experiment Scheduling System (ESS) developed by NASA Marshall. PLAN-IT is an artificial intelligence based interactive graphic timeline editor for ESS developed by JPL. The PLAN-IT software was enhanced for use in the scheduling of Spacelab experiments to support the SLS missions. The enhanced software, the SLS-PLAN-IT system, was used to support the real-time reactive scheduling task during the SLS-1 mission. SLS-PLAN-IT is a frame-based blackboard scheduling shell which, from scheduling input, creates resource-requiring event duration objects and resource-usage duration objects. The blackboard structure keeps track of the effects of event duration objects on the resource usage objects. Various scheduling heuristics are coded in procedural form and can be invoked at any time at the user's request. The system architecture is described along with what has been learned from the SLS-PLAN-IT project.
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Space station payload operations scheduling with ESP2
NASA Technical Reports Server (NTRS)
Stacy, Kenneth L.; Jaap, John P.
1988-01-01
The Mission Analysis Division of the Systems Analysis and Integration Laboratory at the Marshall Space Flight Center is developing a system of programs to handle all aspects of scheduling payload operations for Space Station. The Expert Scheduling Program (ESP2) is the heart of this system. The task of payload operations scheduling can be simply stated as positioning the payload activities in a mission so that they collect their desired data without interfering with other activities or violating mission constraints. ESP2 is an advanced version of the Experiment Scheduling Program (ESP) which was developed by the Mission Integration Branch beginning in 1979 to schedule Spacelab payload activities. The automatic scheduler in ESP2 is an expert system that embodies the rules that expert planners would use to schedule payload operations by hand. This scheduler uses depth-first searching, backtracking, and forward chaining techniques to place an activity so that constraints (such as crew, resources, and orbit opportunities) are not violated. It has an explanation facility to show why an activity was or was not scheduled at a certain time. The ESP2 user can also place the activities in the schedule manually. The program offers graphical assistance to the user and will advise when constraints are being violated. ESP2 also has an option to identify conflict introduced into an existing schedule by changes to payload requirements, mission constraints, and orbit opportunities.
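The depth-first placement with backtracking that the ESP2 automatic scheduler is described as using can be illustrated with a small sketch. The activity model and the single shared-resource constraint below are assumptions for illustration, not ESP2's actual rule base or constraint set.

```python
def conflict(placed, activities, idx, start):
    """True if activity idx, started at 'start', overlaps a placed activity sharing a resource."""
    a = activities[idx]
    for other_idx, other_start in placed.items():
        b = activities[other_idx]
        if set(a["resources"]) & set(b["resources"]):
            if start < other_start + b["dur"] and other_start < start + a["dur"]:
                return True
    return False

def place(activities, horizon, placed=None):
    """Depth-first placement with backtracking; returns {activity index: start} or None."""
    placed = placed if placed is not None else {}
    if len(placed) == len(activities):
        return placed
    idx = len(placed)
    for start in range(horizon - activities[idx]["dur"] + 1):
        if not conflict(placed, activities, idx, start):
            placed[idx] = start
            result = place(activities, horizon, placed)
            if result is not None:
                return result
            del placed[idx]          # backtrack and try the next opportunity
    return None                      # explains failure: no start time satisfies constraints

acts = [{"name": "obs-1", "dur": 3, "resources": ["crew", "camera"]},
        {"name": "obs-2", "dur": 2, "resources": ["camera"]},
        {"name": "maint", "dur": 2, "resources": ["crew"]}]
print(place(acts, horizon=6))        # {0: 0, 1: 3, 2: 3}
```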
Virtual Habitat -a dynamic simulation of closed life support systems -human model status
NASA Astrophysics Data System (ADS)
Markus Czupalla, M. Sc.; Zhukov, Anton; Hwang, Su-Au; Schnaitmann, Jonas
In order to optimize Life Support Systems on a system level, stability questions must be investigated. To do so the exploration group of the Technical University of Munich (TUM) is developing the "Virtual Habitat" (V-HAB) dynamic LSS simulation software. V-HAB shall provide the possibility to conduct dynamic simulations of entire mission scenarios for any given LSS configuration. The Virtual Habitat simulation tool consists of four main modules: • Closed Environment Module (CEM) - monitoring of compounds in a closed environment • Crew Module (CM) - dynamic human simulation • P/C Systems Module (PCSM) - dynamic P/C subsystems • Plant Module (PM) - dynamic plant simulation. The core module of the simulation is the dynamic and environment-sensitive human module. Introduced in its basic version in 2008, the human module has been significantly updated since, increasing its capabilities and maturity. In this paper three newly added human model subsystems (thermal regulation, digestion and schedule controller) are introduced, touching also on the human stress subsystem which is currently under development. Upon the introduction of these new subsystems, their integration into the overall V-HAB human model is discussed, highlighting the impact on the most important interfaces. The overall human model capabilities are further summarized and presented based on meaningful test cases. In addition to the presentation of the results, the correlation strategy for the Virtual Habitat human model is introduced, assessing the model's current confidence level and giving an outlook on the future correlation strategy. Last but not least, the remaining V-HAB modules are introduced briefly, showing how the human model is integrated into the overall simulation.
Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane
NASA Astrophysics Data System (ADS)
Smoczek, J.; Szpytko, J.; Hyla, P.
2014-07-01
The paper describes the software-hardware equipment and control-measurement solutions elaborated to prototype the laboratory scaled overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. The iterative procedure for designing a fuzzy scheduling control scheme is developed based on the interval analysis of discrete-time closed-loop system characteristic polynomial coefficients in the presence of rope length and payload mass variation, to select the minimum set of operating points, corresponding to the midpoints of membership functions, at which the linear controllers are determined through desired pole assignment. The experimental results obtained on the laboratory stand are presented.
CHIMERA II - A real-time multiprocessing environment for sensor-based robot control
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1989-01-01
A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.
NASA Technical Reports Server (NTRS)
Rash, James L. (Editor); Dent, Carolyn P. (Editor)
1989-01-01
Theoretical and implementation aspects of AI systems for space applications are discussed in reviews and reports. Sections are devoted to planning and scheduling, fault isolation and diagnosis, data management, modeling and simulation, and development tools and methods. Particular attention is given to a situated reasoning architecture for space repair and replace tasks, parallel plan execution with self-processing networks, the electrical diagnostics expert system for Spacelab life-sciences experiments, diagnostic tolerance for missing sensor data, the integration of perception and reasoning in fast neural modules, a connectionist model for dynamic control, and applications of fuzzy sets to the development of rule-based expert systems.
Design and architecture of the Mars relay network planning and analysis framework
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Lee, C. H.
2002-01-01
In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports generation and validation of efficient planning and scheduling strategy. The goals are to minimize the transmitting time, minimize the delaying time, and/or maximize the network throughputs. The proposed framework would require (1) a client-server architecture to support interactive, batch, WEB, and distributed analysis and planning applications for the relay network analysis scheme, (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft to spacecraft and spacecraft to Earth stations as time-varying resources, and spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints, (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
Construction schedule simulation of a diversion tunnel based on the optimized ventilation time.
Wang, Xiaoling; Liu, Xuepeng; Sun, Yuefeng; An, Juan; Zhang, Jing; Chen, Hongchao
2009-06-15
In former studies, the methods for estimating the ventilation time in construction schedule simulation were all empirical. However, in many real construction schedules, many factors affect the ventilation time. Therefore, in this paper 3D unsteady quasi-single-phase models are proposed to optimize the ventilation time with different tunneling lengths. The effect of buoyancy is considered in the momentum equation of the CO transport model, while the effects of inter-phase drag, lift force, and virtual mass force are taken into account in the momentum source of the dust transport model. The prediction by the present model for airflow in a diversion tunnel is confirmed by the experimental values reported by Nakayama [Nakayama, In-situ measurement and simulation by CFD of methane gas distribution at a heading faces, Shigen-to-Sozai 114 (11) (1998) 769-775]. The construction ventilation of the diversion tunnel of XinTangfang power station in China is used as a case study. The distributions of airflow, CO and dust in the diversion tunnel are analyzed. A theoretical method for GIS-based dynamic visual simulation of the construction processes of underground structure groups is presented that combines cyclic operation network simulation, system simulation, network plan optimization, and GIS-based 3D visualization of construction processes. Based on the optimized ventilation time, the construction schedule of the diversion tunnel is simulated by the above method.
NASA Astrophysics Data System (ADS)
Sakellariou, J. S.; Fassois, S. D.
2017-01-01
The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
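For orientation, a schematic of a functionally pooled ARMAX-type model structure is given below, written in notation commonly used for FP models; this is an assumed, generic form for illustration and not necessarily the exact formulation of the paper. For an operating condition characterized by the measurable scheduling variable $k$,

\[
y_t[k] + \sum_{i=1}^{n_a} a_i(k)\, y_{t-i}[k] \;=\; \sum_{i=0}^{n_b} b_i(k)\, x_{t-i}[k] \;+\; e_t[k] + \sum_{i=1}^{n_c} c_i(k)\, e_{t-i}[k],
\]

with each coefficient expanded on a functional basis $\{G_j(k)\}$ of the scheduling variable,

\[
a_i(k) = \sum_{j=1}^{p} a_{i,j}\, G_j(k), \qquad
b_i(k) = \sum_{j=1}^{p} b_{i,j}\, G_j(k), \qquad
c_i(k) = \sum_{j=1}^{p} c_{i,j}\, G_j(k),
\]

so that a single set of projection coefficients describes the dynamics under all operating conditions; dropping the moving-average terms $c_i(k)$ recovers the simpler FP-ARX case referred to above.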
12 CFR 229.12 - Availability schedule.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 12, Banks and Banking, Section 229.12 (Federal Reserve System (Continued), Board of Governors of the Federal Reserve System; Availability Policies): Availability schedule. (a) Effective date. The availability schedule contained...
5 CFR 532.513 - Flexible and compressed work schedules.
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 5, Administrative Personnel, Section 532.513 (Prevailing Rate Systems; Premium Pay and Differentials): Flexible and compressed work schedules. Federal Wage System employees who are authorized to work flexible and compressed work schedules...
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
NASA Technical Reports Server (NTRS)
Marr, Greg C.; Maher, Michael; Blizzard, Michael; Showell, Avanaugh; Asher, Mark; Devereux, Will
2004-01-01
Over an approximately 48-hour period from September 26 to 28, 2002, the Thermosphere, Ionosphere, Mesosphere, Energetics and Dynamics (TIMED) mission was intensively supported by the Tracking and Data Relay Satellite System (TDRSS). The TIMED satellite is in a nearly circular low-Earth orbit with a semimajor axis of approximately 7000 km and an inclination of approximately 74 degrees. The objective was to provide TDRSS tracking support for orbit determination (OD) to generate a definitive ephemeris of 24-hour duration or more with a 3-sigma position error no greater than 100 meters, and this tracking campaign was successful. An ephemeris was generated by Goddard Space Flight Center (GSFC) personnel using the TDRSS tracking data and was compared with an ephemeris generated by the Johns Hopkins University's Applied Physics Lab (APL) using TIMED Global Positioning System (GPS) data. Prior to the tracking campaign, OD error analysis was performed to justify scheduling the TDRSS support.
Generalized Support Software: Domain Analysis and Implementation
NASA Technical Reports Server (NTRS)
Stark, Mike; Seidewitz, Ed
1995-01-01
For the past five years, the Flight Dynamics Division (FDD) at NASA's Goddard Space Flight Center has been carrying out a detailed domain analysis effort and is now beginning to implement Generalized Support Software (GSS) based on this analysis. GSS is part of the larger Flight Dynamics Distributed System (FDDS), and is designed to run under the FDDS User Interface / Executive (UIX). The FDD is transitioning from a mainframe based environment to systems running on engineering workstations. The GSS will be a library of highly reusable components that may be configured within the standard FDDS architecture to quickly produce low-cost satellite ground support systems. The estimate for the first release is that this library will contain approximately 200,000 lines of code. The main driver for developing generalized software is development cost and schedule improvement. The goal is to ultimately have at least 80 percent of all software required for a spacecraft mission (within the domain supported by the GSS) to be configured from the generalized components.
Strategic Defense Initiative Organization adaptive structures program overview
NASA Astrophysics Data System (ADS)
Obal, Michael; Sater, Janet M.
In the currently envisioned architecture none of the Strategic Defense System (SDS) elements to be deployed will receive scheduled maintenance. Assessments of performance capability due to changes caused by the uncertain effects of environments will be difficult, at best. In addition, the system will have limited ability to adjust in order to maintain its required performance levels. The Materials and Structures Office of the Strategic Defense Initiative Organization (SDIO) has begun to address solutions to these potential difficulties via an adaptive structures technology program that combines health and environment monitoring with static and dynamic structural control. Conceivable system benefits include improved target tracking and hit-to-kill performance, on-orbit system health monitoring and reporting, and threat attack warning and assessment.
Runway Scheduling Using Generalized Dynamic Programming
NASA Technical Reports Server (NTRS)
Montoya, Justin; Wood, Zachary; Rathinam, Sivakumar
2011-01-01
A generalized dynamic programming method for finding a set of pareto optimal solutions for a runway scheduling problem is introduced. The algorithm generates a set of runway flight sequences that are optimal for both runway throughput and delay. Realistic time-based operational constraints are considered, including miles-in-trail separation, runway crossings, and wake vortex separation. The authors also model divergent runway takeoff operations to allow for reduced wake vortex separation. A modeled Dallas/Fort Worth International Airport and three baseline heuristics are used to illustrate preliminary benefits of using the generalized dynamic programming method. Simulated traffic levels ranged from 10 aircraft to 30 aircraft with each test case spanning 15 minutes. The optimal solution shows a 40-70 percent decrease in the expected delay per aircraft over the baseline schedulers. Computational results suggest that the algorithm is promising for real-time application with an average computation time of 4.5 seconds. For even faster computation times, two heuristics are developed. As compared to the optimal, the heuristics are within 5% of the expected delay per aircraft and 1% of the expected number of runway operations per hour and can be 100x faster.
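A toy version of the Pareto idea follows; it is not the paper's generalized dynamic programming. The flight data, the single-separation model, and the brute-force enumeration are assumptions chosen so the example stays tiny: each candidate runway sequence is scored on makespan (a proxy for throughput) and total delay, and only the non-dominated points are kept.

```python
from itertools import permutations

flights = [  # (name, earliest ready time, required separation behind this aircraft)
    ("A", 0, 2), ("B", 1, 1), ("C", 1, 3), ("D", 3, 1),
]

def score(seq):
    """Return (makespan, total delay) for a runway sequence; both are minimized."""
    t, total_delay = 0, 0
    for name, ready, sep in seq:
        start = max(t, ready)
        total_delay += start - ready
        t = start + sep
    return t, total_delay

def pareto(points):
    """Keep points not dominated in both objectives by any other point."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

points = {score(seq) for seq in permutations(flights)}
print(sorted(pareto(list(points))))
```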
Mozumdar, Biswita C; Hornsby, Douglas Neal; Gogate, Adheet S; Intriere, Lisa A; Hanson, Richard; McGreal, Karen; Kelly, Pauline; Ros, Pablo
2003-08-01
To study end-user attitudes and preferences with respect to radiology scheduling systems and to assess implications for retention and extension of the referral base. A study of the institution's historical data indicated reduced satisfaction with the process of patient scheduling in recent years. Sixty physicians who referred patients to a single, large academic radiology department received the survey. The survey was designed to identify (A) the preferred vehicle for patient scheduling (on-line versus telephone scheduling) and (B) whether ease of scheduling was a factor in physicians referring patients to other providers. Referring physicians were asked to forward the survey to any appropriate office staff member in case the latter scheduled appointments for patients. Users were asked to provide comments and suggestions for improvement. The statistical method used was the analysis of proportions. Thirty-three responses were received, corresponding to a return rate of 55%. Twenty-six of the 33 respondents (78.8%, P < .01) stated they were willing to try an online scheduling system; 16 of which tried the system. Twelve of the 16 (75%, P < .05) preferred the on-line application to the telephone system, stating logistical simplification as the primary reason for preference. Three (18.75%) did not consider online scheduling to be more convenient than traditional telephone scheduling. One respondent did not indicate any preference. Eleven of 33 users (33.33%, P < .001) stated that they would change radiology service providers if expectations of scheduling ease are not met. On-line scheduling applications are becoming the preferred scheduling vehicle. Augmenting their capabilities and availability can simplify the scheduling process, improve referring physician satisfaction, and provide a competitive advantage. Referrers are willing to change providers if scheduling expectations are not met.
NASA Technical Reports Server (NTRS)
2002-01-01
A software system that uses artificial intelligence techniques to help with complex Space Shuttle scheduling at Kennedy Space Center is commercially available. Stottler Henke Associates, Inc.(SHAI), is marketing its automatic scheduling system, the Automated Manifest Planner (AMP), to industries that must plan and project changes many different times before the tasks are executed. The system creates optimal schedules while reducing manpower costs. Using information entered into the system by expert planners, the system automatically makes scheduling decisions based upon resource limitations and other constraints. It provides a constraint authoring system for adding other constraints to the scheduling process as needed. AMP is adaptable to assist with a variety of complex scheduling problems in manufacturing, transportation, business, architecture, and construction. AMP can benefit vehicle assembly plants, batch processing plants, semiconductor manufacturing, printing and textiles, surface and underground mining operations, and maintenance shops. For most of SHAI's commercial sales, the company obtains a service contract to customize AMP to a specific domain and then issues the customer a user license.
[Toward a New Immunization Schedule in Spain, 2016 (Part 1)].
Limia-Sánchez, Aurora; Andreu, María Mar; Torres de Mier, María de Viarce; Navarro-Alonso, José Antonio
2016-03-08
The immunization schedule is a dynamic public health tool that has incorporated changes over the years, influenced by the epidemiological situation and the scientific evidence. The Immunization Advisory Committee [Ponencia de Programa y Registro de Vacunaciones], as the scientific and technical advisory body to the Interterritorial Council, assesses programmes and vaccines and proposes changes that, once approved, are introduced into the regional schedules. This article is divided into two parts presenting the rationale for proposing a new schedule for immunization against diphtheria, tetanus, pertussis, hepatitis B and invasive disease caused by Haemophilus influenzae type b. This first part focuses on the reasons for undertaking the assessment, the review of immunization policy and the impact of immunization in Spain, as well as a review of the immunization schedules of similar countries.
38 CFR 4.104 - Schedule of ratings-cardiovascular system.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-cardiovascular system. 4.104 Section 4.104 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Cardiovascular System § 4.104 Schedule of ratings—cardiovascular system. Diseases of the Heart Rating Note (1): Evaluate cor pulmonale, which is a form of...
Han, Sanguk; Saba, Farzaneh; Lee, Sanghyun; Mohamed, Yasser; Peña-Mora, Feniosky
2014-07-01
It is not unusual to observe that actual schedule and quality performances differ from planned performances (e.g., schedule delay and rework) during a construction project. Such differences often result in production pressure (e.g., being pressed to work faster). Previous studies demonstrated that such production pressure negatively affects safety performance. However, the process by which production pressure influences safety performance, and to what extent, has not been fully investigated. As a result, the impact of production pressure has not been widely incorporated into safety management in practice. In an effort to address this issue, this paper examines how production pressure relates to safety performance over time by identifying their feedback processes. A conceptual causal loop diagram is created to identify the relationship between schedule and quality performances (e.g., schedule delays and rework) and the components related to a safety program (e.g., workers' perceptions of safety, safety training, safety supervision, and crew size). A case study is then undertaken to investigate this relationship with accident occurrence, using data collected from a construction site; the case study is used to build a System Dynamics (SD) model. The SD model is then validated through inequality statistics analysis. Sensitivity analysis and statistical screening techniques further permit an evaluation of the impact of the managerial components on accident occurrence. The results of the case study indicate that schedule delays and rework are the critical factors affecting accident occurrence for the monitored project. Copyright © 2013 Elsevier Ltd. All rights reserved.
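As a rough illustration of the kind of feedback loop such a causal diagram captures, the following toy simulation links schedule delay, production pressure, safety attention, and accident rate. All equations and coefficients are assumptions for demonstration only, not the calibrated SD model from the paper.

```python
# Minimal system-dynamics style sketch (illustrative only): schedule delay
# raises production pressure, pressure erodes safety attention, and lower
# attention raises accident likelihood, which feeds back into more delay.
def simulate(weeks=20, dt=1.0):
    delay, attention = 2.0, 1.0          # weeks behind schedule; attention in [0, 1]
    history = []
    for _ in range(weeks):
        pressure = min(1.0, 0.2 * delay)                 # saturating pressure
        attention += dt * (0.1 * (1.0 - attention) - 0.3 * pressure)
        attention = max(0.0, min(1.0, attention))
        accident_rate = 0.05 + 0.2 * (1.0 - attention)   # accidents per week
        delay = max(0.0, delay + dt * (0.5 * accident_rate - 0.1))  # rework adds delay
        history.append((round(delay, 2), round(attention, 2), round(accident_rate, 3)))
    return history

for delay, attention, accident_rate in simulate():
    print(delay, attention, accident_rate)
```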
U.S. Geological Survey Library classification system
Sasscer, R. Scott
2000-01-01
The U.S. Geological Survey Library classification system has been designed for earth science libraries. It is a tool for assigning call numbers to earth science and allied pure science materials in order to collect these materials into related subject groups on the library shelves and arrange them alphabetically by author and title. The classification can be used as a retrieval system to access materials through the subject and geographic numbers. The classification scheme has been developed over the years since 1904 to meet the ever-changing needs of increased specialization and the development of new areas of research in the earth sciences. The system contains seven schedules: the subject schedule, the geological survey schedule, the earth science periodical schedule, the government document periodical schedule, the general science periodical schedule, the earth science map schedule, and the geographic schedule. The introduction provides detailed instructions on the construction of call numbers for works falling into the framework of the classification schedules. The tables following the introduction can be quickly accessed through the use of the newly expanded subject index. The purpose of this publication is to provide the earth science community with a classification and retrieval system for earth science materials, to offer sufficient explanation of its structure and use, and to enable library staff and clientele to classify or access research materials in a library collection.
40 CFR 141.702 - Sampling schedules.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Sampling schedules. 141.702 Section... Monitoring Requirements § 141.702 Sampling schedules. (a) Systems required to conduct source water monitoring under § 141.701 must submit a sampling schedule that specifies the calendar dates when the system will...
40 CFR 141.702 - Sampling schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Sampling schedules. 141.702 Section... Monitoring Requirements § 141.702 Sampling schedules. (a) Systems required to conduct source water monitoring under § 141.701 must submit a sampling schedule that specifies the calendar dates when the system will...
NASA Technical Reports Server (NTRS)
Malik, Waqar
2016-01-01
This presentation provides an overview of algorithms used in the SARDA (Spot and Runway Departure Advisor) HITL (Human-in-the-Loop) simulations for Dallas/Fort Worth International Airport and Charlotte Douglas International Airport. It outlines a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the single runway scheduling (SRS) problem, and discusses heuristics that restrict the search space of the DP-based algorithm and provide improvements.
Space network scheduling benchmark: A proof-of-concept process for technology transfer
NASA Technical Reports Server (NTRS)
Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy
1993-01-01
This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system are addressed, including a methodology to show that the proposed system performs at least as well as the current system in function and performance. The improvement offered by the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.
Spike: AI scheduling for Hubble Space Telescope after 18 months of orbital operations
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1992-01-01
This paper is a progress report on the Spike scheduling system, developed by the Space Telescope Science Institute for long-term scheduling of Hubble Space Telescope (HST) observations. Spike is an activity-based scheduler which exploits artificial intelligence (AI) techniques for constraint representation and for scheduling search. The system has been in operational use since shortly after HST launch in April 1990. Spike was adopted for several other satellite scheduling problems; of particular interest was the demonstration that the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. We describe the recent progress made in scheduling search techniques, the lessons learned from early HST operations, and the application of Spike to other problem domains. We also describe plans for the future evolution of the system.
DYNACLIPS (DYNAmic CLIPS): A dynamic knowledge exchange tool for intelligent agents
NASA Technical Reports Server (NTRS)
Cengeloglu, Yilmaz; Khajenoori, Soheil; Linton, Darrell
1994-01-01
In a dynamic environment, intelligent agents must be responsive to unanticipated conditions. When such conditions occur, an intelligent agent may have to stop a previously planned and scheduled course of action and replan, reschedule, start new activities, and initiate a new problem-solving process to respond successfully to the new conditions. Problems occur when an intelligent agent does not have enough knowledge to respond properly to the new situation. DYNACLIPS is an implementation of a framework for dynamic knowledge exchange among intelligent agents. Each intelligent agent is a CLIPS shell and runs as a separate process under the SunOS operating system. Intelligent agents can exchange facts, rules, and CLIPS commands at run time. Knowledge exchange among intelligent agents at run time does not affect execution of either the sender or the receiver agent. Intelligent agents can keep the exchanged knowledge temporarily or permanently. In other words, knowledge exchange among intelligent agents allows a form of learning to be accomplished.
NASA Handbook for Spacecraft Structural Dynamics Testing
NASA Technical Reports Server (NTRS)
Kern, Dennis L.; Scharton, Terry D.
2005-01-01
Recent advances in the area of structural dynamics and vibrations, in both methodology and capability, have the potential to make spacecraft system testing more effective from technical, cost, schedule, and hardware safety points of view. However, application of these advanced test methods varies widely among the NASA Centers and their contractors. Identification and refinement of the best of these test methodologies and implementation approaches has been an objective of efforts by the Jet Propulsion Laboratory on behalf of the NASA Office of the Chief Engineer. But to develop the most appropriate overall test program for a flight project from the selection of advanced methodologies, as well as conventional test methods, spacecraft project managers and their technical staffs will need overall guidance and technical rationale. Thus, the Chief Engineer's Office has recently tasked JPL to prepare a NASA Handbook for Spacecraft Structural Dynamics Testing. An outline of the proposed handbook, with a synopsis of each section, has been developed and is presented herein. Comments on the proposed handbook are solicited from the spacecraft structural dynamics testing community.
NASA Handbook for Spacecraft Structural Dynamics Testing
NASA Technical Reports Server (NTRS)
Kern, Dennis L.; Scharton, Terry D.
2004-01-01
Recent advances in the area of structural dynamics and vibrations, in both methodology and capability, have the potential to make spacecraft system testing more effective from technical, cost, schedule, and hardware safety points of view. However, application of these advanced test methods varies widely among the NASA Centers and their contractors. Identification and refinement of the best of these test methodologies and implementation approaches has been an objective of efforts by the Jet Propulsion Laboratory on behalf of the NASA Office of the Chief Engineer. But to develop the most appropriate overall test program for a flight project from the selection of advanced methodologies, as well as conventional test methods, spacecraft project managers and their technical staffs will need overall guidance and technical rationale. Thus, the Chief Engineer's Office has recently tasked JPL to prepare a NASA Handbook for Spacecraft Structural Dynamics Testing. An outline of the proposed handbook, with a synopsis of each section, has been developed and is presented herein. Comments on the proposed handbook are solicited from the spacecraft structural dynamics testing community.
Kauppi, Jukka-Pekka; Martikainen, Kalle; Ruotsalainen, Ulla
2010-12-01
The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to the emerging complexity of radar waveforms. In particular, multifunction radars (MFRs), capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms, are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically, which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. Copyright © 2010 Elsevier Ltd. All rights reserved.
A Convex Approach to Fault Tolerant Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Cox, David E.; Bauer, Frank (Technical Monitor)
2002-01-01
The design of control laws for dynamic systems with the potential for actuator failures is considered in this work. The use of Linear Matrix Inequalities allows more freedom in controller design criteria than typically available with robust control. This work proposes an extension of fault-scheduled control design techniques that can find a fixed controller with provable performance over a set of plants. Through convexity of the objective function, performance bounds on this set of plants implies performance bounds on a range of systems defined by a convex hull. This is used to incorporate performance bounds for a variety of soft and hard failures into the control design problem.
Time-critical multirate scheduling using contemporary real-time operating system services
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.
1983-01-01
Although real-time operating systems provide many of the task control services necessary to process time-critical applications (i.e., applications with fixed, invariant deadlines), it may still be necessary to provide a scheduling algorithm at a level above the operating system in order to coordinate a set of synchronized, time-critical tasks executing at different cyclic rates. This paper examines the scheduling requirements for such applications and develops scheduling algorithms using services provided by contemporary real-time operating systems.
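A minimal sketch of one common way to coordinate synchronized tasks at different cyclic rates above an RTOS is a cyclic-executive release table built from the task periods. The task names and periods below are illustrative assumptions, not taken from the report.

```python
# Hedged sketch: build a cyclic-executive table (minor/major frames) for
# tasks running at different cyclic rates, releasing higher-rate tasks first.
from math import gcd
from functools import reduce

tasks = {"nav_update": 40, "control_loop": 20, "telemetry": 80}  # periods in ms

minor = reduce(gcd, tasks.values())                               # minor frame length
major = reduce(lambda a, b: a * b // gcd(a, b), tasks.values())   # hyperperiod

table = []
for slot_start in range(0, major, minor):
    due = [name for name, period in tasks.items() if slot_start % period == 0]
    # shorter-period (higher-rate) tasks are released first within the slot
    table.append((slot_start, sorted(due, key=lambda n: tasks[n])))

for slot_start, released in table:
    print(f"t={slot_start:3d} ms  release: {released}")
```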
Assessing Weapon System Acquisition Cycle Times: Setting Program Schedules
2015-06-01
additional research, focused as follows: 1. Acquisition schedule development: How are schedules for acquisition programs actually set and how are they... the germinating requirements documents specific to systems reviewed. A clear statement was found for only one system (Air and Missile Defense Radar... AMDR) when specific threat capabilities were projected to be operational. Program schedule setting varies in rigor: Up to the interim version of
A task scheduler framework for self-powered wireless sensors.
Nordman, Mikael M
2003-10-01
The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered to last a lifetime, thus saving the user from concerns related to energy management. The problem, however, is the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is handled reliably without violating the assigned priority scheme.
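The following is a hedged sketch of the general idea described above: a priority queue of tasks gated by a simple energy state machine with hysteresis. The thresholds, task costs, and harvest profile are illustrative assumptions, not the paper's framework parameters.

```python
# Hedged sketch: energy-state machine plus a priority queue; in the LOW
# state only the most critical task class may run.
import heapq

HIGH, LOW = "HIGH", "LOW"            # energy states
ENTER_LOW, ENTER_HIGH = 2.0, 5.0     # joules, hysteresis thresholds

def run(tasks, harvest_per_step, steps=10, energy=6.0):
    # tasks: (priority, name, cost_joules); lower priority value = more urgent
    queue = list(tasks)
    heapq.heapify(queue)
    state = HIGH
    for t in range(steps):
        energy = min(10.0, energy + harvest_per_step[t % len(harvest_per_step)])
        if state == HIGH and energy < ENTER_LOW:
            state = LOW
        elif state == LOW and energy > ENTER_HIGH:
            state = HIGH
        if queue:
            prio, name, cost = queue[0]
            runnable = (state == HIGH) or (prio == 0)   # LOW state: critical only
            if runnable and energy >= cost:
                heapq.heappop(queue)
                energy -= cost
                print(f"step {t}: ran {name} (prio {prio}), energy left {energy:.1f} J")
            else:
                print(f"step {t}: deferred {name}, state={state}, energy={energy:.1f} J")
    return energy

run([(0, "alarm_check", 0.5), (1, "measurement", 1.5), (2, "radio_sync", 3.0)],
    harvest_per_step=[0.3, 0.1, 0.0, 0.8])
```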
Reinventing The Design Process: Teams and Models
NASA Technical Reports Server (NTRS)
Wall, Stephen D.
1999-01-01
The future of space mission designing will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.
Robust Gain-Scheduled Fault Tolerant Control for a Transport Aircraft
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Gregory, Irene
2007-01-01
This paper presents an application of robust gain-scheduled control concepts using a linear parameter-varying (LPV) control synthesis method to design fault tolerant controllers for a civil transport aircraft. To apply the robust LPV control synthesis method, the nonlinear dynamics must be represented by an LPV model, which is developed using the function substitution method over the entire flight envelope. The developed LPV model associated with the aerodynamic coefficient uncertainties represents nonlinear dynamics including those outside the equilibrium manifold. Passive and active fault tolerant controllers (FTC) are designed for the longitudinal dynamics of the Boeing 747-100/200 aircraft in the presence of elevator failure. Both FTC laws are evaluated in the full nonlinear aircraft simulation in the presence of the elevator fault and the results are compared to show pros and cons of each control law.
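For readers unfamiliar with the notation, a generic linear parameter-varying (LPV) plant of the kind referenced above takes the standard form below. The symbols are generic and do not reproduce the paper's specific aircraft model or scheduling parameters.

```latex
% Generic LPV state-space form (illustrative notation only)
\dot{x}(t) = A\bigl(\rho(t)\bigr)\,x(t) + B\bigl(\rho(t)\bigr)\,u(t), \qquad
y(t) = C\bigl(\rho(t)\bigr)\,x(t) + D\bigl(\rho(t)\bigr)\,u(t), \qquad
\rho(t) \in \mathcal{P}
```

Here rho(t) is a measurable scheduling parameter (for example, flight condition) confined to a known set, and the controller gains are scheduled on the same parameter.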
Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.
2009-01-01
Within this paper, control-relevant vehicle design concepts are examined using a widely used 3-DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g., level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g., instability and right half plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.
The MICRO-BOSS scheduling system: Current status and future efforts
NASA Technical Reports Server (NTRS)
Sadeh, Norman M.
1993-01-01
In this paper, a micro-opportunistic approach to factory scheduling is described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory.
Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S(sup 3). This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S(sup 3) is complete and the system has been operational since July 2011. S(sup 3) has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S(sup 3) supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S(sup 3) system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S(sup 3) to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
The cinema LED lighting system design based on SCM
NASA Astrophysics Data System (ADS)
En, De; Wang, Xiaobin
2010-11-01
An LED lighting system for modern cinemas and the corresponding control program are introduced. Studies show that moderate, gradually varying brightness in the auditorium draws the audience's attention to the screen. A single-chip microcomputer (SCM) controls the LEDs dynamically by outputting PWM pulses with different duty cycles, so that the intensity of the cinema dome lights can vary with the plot and give viewers a better experience. The article describes the hardware architecture of the system, the scheduling and control flow of the host, and the software design. Finally, a small-scale model reproducing the whole system demonstrates that the design is energy-saving, reliable, achieves a good visual effect, and has practical value.
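The dimming idea can be sketched as a mapping from a scene-brightness cue to a PWM duty cycle for the dome lights. Register-level SCM details are omitted; the 8-bit duty scale, the floor/ceiling levels, and the cue values below are illustrative assumptions, not the paper's design values.

```python
# Hedged sketch: dim dome lights as on-screen brightness rises, expressed
# as an 8-bit PWM duty value that a microcontroller timer could load.
def duty_cycle(scene_brightness, floor=0.05, ceiling=0.60):
    """Map scene brightness in [0, 1] to a PWM duty value in [0, 255]."""
    scene_brightness = max(0.0, min(1.0, scene_brightness))
    duty = ceiling - (ceiling - floor) * scene_brightness
    return int(round(duty * 255))

for cue in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"scene brightness {cue:.2f} -> PWM duty {duty_cycle(cue)} / 255")
```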
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under worst-case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We first present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then, with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the proposed approach in terms of time consumption and acceptance ratio. PMID:27942013
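A much simplified, utilization-based surrogate for the schedulability question above is sketched below: partitions (period, duration) are packed onto processors worst-fit by utilization, and the largest uniform scaling of durations that keeps every processor at or below full utilization is reported. This is only an illustrative bound, not the paper's exact formulation or its game-theoretic algorithm; the partition data are invented.

```python
# Hedged sketch: worst-fit packing of partitions plus a utilization-based
# estimate of the maximum uniform scaling factor for partition durations.
def pack_and_scale(partitions, n_processors):
    loads = [0.0] * n_processors
    # place the heaviest partitions (by utilization) first
    for name, (period, duration) in sorted(partitions.items(),
                                           key=lambda kv: -kv[1][1] / kv[1][0]):
        util = duration / period
        target = min(range(n_processors), key=lambda p: loads[p])
        if loads[target] + util > 1.0:
            return name, None              # not schedulable even unscaled
        loads[target] += util
    max_load = max(loads)
    return None, (1.0 / max_load if max_load > 0 else float("inf"))

partitions = {"P1": (25, 5), "P2": (50, 10), "P3": (100, 40), "P4": (20, 4)}
failed, scale = pack_and_scale(partitions, n_processors=2)
if scale is None:
    print("unschedulable at partition:", failed)
else:
    print(f"estimated max uniform scaling factor: {scale:.2f}")
```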
NASA Technical Reports Server (NTRS)
Thalman, Nancy E.; Sparn, Thomas P.
1990-01-01
SURE (Science User Resource Expert) is one of three components that compose SURPASS (the Science User Resource Planning and Scheduling System). This system is a planning and scheduling tool which supports distributed planning and scheduling based on resource allocation and optimization. Currently SURE is being used within SURPASS by the UARS (Upper Atmosphere Research Satellite) SOLSTICE instrument to build a daily science plan and activity schedule, and in a prototyping effort with NASA GSFC to demonstrate distributed planning and scheduling for the SOLSTICE II instrument on the EOS platform. For the SOLSTICE application, SURE utilizes a rule-based system. Developing a rule-based program using Ada CLIPS, as opposed to conventional programming, allows the science planning and scheduling heuristics to be captured in rules and provides flexibility in inserting or removing rules as the scientific objectives and mission constraints change. The SURE system's role as a component in SURPASS, the purpose of the SURE planning and scheduling tool, the SURE knowledge base, and the software architecture of the SURE component are described.
Scheduling Dependent Real-Time Activities
1990-08-01
dependency relationships in a way that is suitable for all real-time systems. This thesis provides an algorithm, called DASA, that is effective for... scheduling the class of real-time systems known as supervisory control systems. Simulation experiments that account for the time required to make scheduling
Bounding the errors for convex dynamics on one or more polytopes.
Tresser, Charles
2007-09-01
We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.
Bounding the errors for convex dynamics on one or more polytopes
NASA Astrophysics Data System (ADS)
Tresser, Charles
2007-09-01
We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.
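The greedy rule discussed in the two entries above can be sketched on a single simplex, the classic carpool setting: each day the fractional shares are the input, the chosen driver is a vertex, and the vertex is picked to minimize the Euclidean norm of the cumulative error. The shares below are illustrative, and this sketch does not address the multi-polytope constraints analyzed in the paper.

```python
# Hedged sketch: greedy vertex selection minimizing the Euclidean norm of
# the cumulative (input - output) error on a probability simplex.
import math

def greedy_carpool(shares_per_day):
    n = len(shares_per_day[0])
    error = [0.0] * n                       # cumulative (input - output)
    choices = []
    for shares in shares_per_day:
        best, best_norm = None, math.inf
        for k in range(n):                  # candidate vertex e_k of the simplex
            trial = [error[i] + shares[i] - (1.0 if i == k else 0.0) for i in range(n)]
            norm = math.sqrt(sum(v * v for v in trial))
            if norm < best_norm:
                best, best_norm = k, norm
        error = [error[i] + shares[i] - (1.0 if i == best else 0.0) for i in range(n)]
        choices.append(best)
    return choices, error

drivers, final_error = greedy_carpool([[0.5, 0.3, 0.2]] * 10)
print(drivers, [round(e, 2) for e in final_error])
```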
Cost and schedule estimation study report
NASA Technical Reports Server (NTRS)
Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon
1993-01-01
This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
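The model family described above (effort driven by size adjusted for reuse, schedule predicted from effort) can be sketched as follows. The reuse weight, productivity figure, and power-law coefficients are illustrative placeholders, not the calibrated FDD/SEL values from the report.

```python
# Hedged sketch of a size-adjusted effort model and a power-law schedule
# model; all numeric parameters are assumptions for demonstration only.
def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.2):
    """Developed size: new code plus a fraction of reused code."""
    return new_sloc + reuse_weight * reused_sloc

def effort_staff_months(size_sloc, productivity_sloc_per_month=700):
    return size_sloc / productivity_sloc_per_month

def schedule_months(effort_sm, a=4.0, b=0.35):
    """Power-law schedule model: duration = a * effort^b (coefficients assumed)."""
    return a * effort_sm ** b

size = adjusted_size(new_sloc=60_000, reused_sloc=40_000)
effort = effort_staff_months(size)
print(f"adjusted size {size:.0f} SLOC, effort {effort:.1f} staff-months, "
      f"schedule {schedule_months(effort):.1f} months")
```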
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-01-01
Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722
Nonstandard Work Schedules, Family Dynamics, and Mother-Child Interactions During Early Childhood.
Prickett, Kate C
2018-03-01
The rising number of parents who work nonstandard schedules has led to a growing body of research concerned with what this trend means for children. The negative outcomes for children of parents who work nonstandard schedules are thought to arise from the disruptions these schedules place on family life, and thus, the types of parenting that support their children's development, particularly when children are young. Using a nationally representative sample of two-parent families (Early Childhood Longitudinal Study-Birth cohort, n = 3,650), this study examined whether mothers' and their partners' nonstandard work schedules were associated with mothers' parenting when children were 2 and 4 years old. Structural equation models revealed that mothers' and their partners' nonstandard work schedules were associated with mothers' lower scores on measures of positive and involved parenting. These associations were mediated by fathers' lower levels of participation in cognitively supportive parenting and greater imbalance in cognitively supportive tasks conducted by mothers versus fathers.
A planning language for activity scheduling
NASA Technical Reports Server (NTRS)
Zoch, David R.; Lavallee, David; Weinstein, Stuart; Tong, G. Michael
1991-01-01
Mission planning and scheduling of spacecraft operations are becoming more complex at NASA. Described here are a mission planning process; a robust, flexible planning language for spacecraft and payload operations; and a software scheduling system that generates schedules based on planning language inputs. The mission planning process often involves many people and organizations. Consequently, a planning language is needed to facilitate communication, to provide a standard interface, and to represent flexible requirements. The software scheduling system interprets the planning language and uses the resource, time duration, constraint, and alternative plan flexibilities to resolve scheduling conflicts.
Long range science scheduling for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Miller, Glenn; Johnston, Mark
1991-01-01
Observations with NASA's Hubble Space Telescope (HST) are scheduled with the assistance of a long-range scheduling system (SPIKE) that was developed using artificial intelligence techniques. In earlier papers, the system architecture and the constraint representation and propagation mechanisms were described. Here, the development of high-level automated scheduling tools, including tools based on constraint satisfaction techniques and neural networks, is described. The performance of these tools in scheduling HST observations is discussed.
Management of Temporal Constraints for Factory Scheduling.
1987-06-01
consistency of scheduling decisions were implemented in both the ISIS [Fox 84] and SOJA [LePape 85a] scheduling systems. More recent work with the...kinds of time propagation systems: the symbolic and the numeric ones. Symbolic systems combine relationships with a temporal logic a la Allen [Allen 81...maintains consistency by narrowing time windows associated with activities as decisions are made, and SOJA [LePape 85b] guarantees a schedule’s
Range and mission scheduling automation using combined AI and operations research techniques
NASA Technical Reports Server (NTRS)
Arbabi, Mansur; Pfeifer, Michael
1987-01-01
Ground-based systems for Satellite Command, Control, and Communications (C3) operations require a method for planning, scheduling and assigning range resources such as antenna systems scattered around the world, communications systems, and personnel. The method must accommodate user priorities, last minute changes, maintenance requirements, and exceptions from nominal requirements. Described are computer programs which solve 24-hour scheduling problems using heuristic algorithms and a real-time interactive scheduling process.
NASA Glenn 1-by 1-Foot Supersonic Wind Tunnel User Manual
NASA Technical Reports Server (NTRS)
Seablom, Kirk D.; Soeder, Ronald H.; Stark, David E.; Leone, John F. X.; Henry, Michael W.
1999-01-01
This manual describes the NASA Glenn Research Center's 1- by 1-Foot Supersonic Wind Tunnel and provides information for customers who wish to conduct experiments in this facility. Tunnel performance envelopes of total pressure, total temperature, and dynamic pressure as a function of test section Mach number are presented. For each Mach number, maps are presented of Reynolds number per foot as a function of the total air temperature at the test section inlet for constant total air pressure at the inlet. General support systems, such as the service air, combustion air, altitude exhaust system, auxiliary bleed system, model hydraulic system, schlieren system, model pressure-sensitive paint, and laser sheet system, are discussed. In addition, instrumentation and data processing and acquisition systems are described, pretest meeting formats and schedules are outlined, and customer responsibilities and personnel safety are addressed.
APGEN Scheduling: 15 Years of Experience in Planning Automation
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; Wissler, Steve; Lenda, Matthew; Finnerty, Daniel
2014-01-01
In this paper, we discuss the scheduling capability of APGEN (Activity Plan Generator), a multi-mission planning application that is part of the NASA AMMOS (Advanced Multi- Mission Operations System), and how APGEN scheduling evolved over its applications to specific Space Missions. Our analysis identifies two major reasons for the successful application of APGEN scheduling to real problems: an expressive DSL (Domain-Specific Language) for formulating scheduling algorithms, and a well-defined process for enlisting the help of auxiliary modeling tools in providing high-fidelity, system-level simulations of the combined spacecraft and ground support system.
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Jang, Jiann-Woei; McCants, Edward; Omohundro, Zachary; Ring, Tom; Templeton, Jeremy; Zoss, Jeremy; Wallace, Jonathan; Ziegler, Philip
2011-01-01
Draper Station Analysis Tool (DSAT) is a computer program, built on commercially available software, for simulating and analyzing complex dynamic systems. Heretofore used in designing and verifying guidance, navigation, and control systems of the International Space Station, DSAT has a modular architecture that lends itself to modification for application to spacecraft or terrestrial systems. DSAT consists of user-interface, data-structures, simulation-generation, analysis, plotting, documentation, and help components. DSAT automates the construction of simulations and the process of analysis. DSAT provides a graphical user interface (GUI), plus a Web-enabled interface, similar to the GUI, that enables a remotely located user to gain access to the full capabilities of DSAT via the Internet and Web-browser software. Data structures are used to define the GUI, the Web-enabled interface, simulations, and analyses. Three data structures define the type of analysis to be performed: closed-loop simulation, frequency response, and/or stability margins. DSAT can be executed on almost any workstation, desktop, or laptop computer. DSAT provides better than an order of magnitude improvement in cost, schedule, and risk assessment for simulation-based design and verification of complex dynamic systems.
Experimental comparison of conventional and nonlinear model-based control of a mixing tank
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haeggblom, K.E.
1993-11-01
In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.
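The "transform to a globally linear and decoupled system" step can be illustrated for a generic mixing tank with level h and temperature T fed by cold and hot streams: solve the mass and energy balances for the two feed flows so the closed-loop level and temperature behave as two independent linear channels. The model structure, parameters, and gains below are illustrative assumptions, not the paper's rig or its controller design.

```python
# Hedged sketch of feedback linearization for a two-input mixing tank:
#   A*dh/dt = q_cold + q_hot - k*sqrt(h)
#   A*h*dT/dt = q_cold*(T_cold - T) + q_hot*(T_hot - T)
# Solve the 2x2 linear system for (q_cold, q_hot) given desired dh/dt, dT/dt.
import numpy as np

A_TANK, K_OUT = 0.5, 0.1          # tank area and outflow coefficient (illustrative)
T_COLD, T_HOT = 15.0, 70.0        # feed temperatures [C]

def linearizing_inputs(h, T, v_h, v_T):
    """Flows (q_cold, q_hot) that make dh/dt = v_h and dT/dt = v_T."""
    M = np.array([[1.0, 1.0],
                  [T_COLD - T, T_HOT - T]])
    rhs = np.array([A_TANK * v_h + K_OUT * np.sqrt(h),
                    A_TANK * h * v_T])
    return np.linalg.solve(M, rhs)

# outer loop: simple proportional control on the linearized channels
h, T = 0.8, 30.0                  # current level [m] and temperature [C]
h_sp, T_sp = 1.0, 45.0            # setpoints
v_h, v_T = 0.2 * (h_sp - h), 0.05 * (T_sp - T)
q_cold, q_hot = linearizing_inputs(h, T, v_h, v_T)
print(f"q_cold={q_cold:.3f}, q_hot={q_hot:.3f}  (illustrative units)")
```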
Ground Vibration Testing Options for Space Launch Vehicles
NASA Technical Reports Server (NTRS)
Patterson, Alan; Smith, Robert K.; Goggin, David; Newsom, Jerry
2011-01-01
New NASA launch vehicles will require development of robust systems in a fiscally-constrained environment. NASA, Department of Defense (DoD), and commercial space companies routinely conduct ground vibration tests as an essential part of math model validation and launch vehicle certification. Although ground vibration testing must be a part of the integrated test planning process, more affordable approaches must also be considered. A study that evaluated several ground vibration test options for the NASA Constellation Program flight test vehicles, Orion-1 and Orion-2, concluded that more affordable ground vibration test options are available. The motivation for ground vibration testing is supported by historical examples from NASA and DoD. The approach used in the present study employed surveys of ground vibration test subject-matter experts that provided data to qualitatively rank six test options. Twenty-five experts from NASA, DoD, and industry provided scoring and comments for this study. The current study determined that both element-level modal tests and integrated vehicle modal tests have technical merit. Both have been successful in validating structural dynamic math models of launch vehicles. However, element-level testing has less overall cost and schedule risk than integrated vehicle testing. Future NASA launch vehicle development programs should anticipate that some structural dynamics testing will be necessary. Analysis alone will be inadequate to certify a crew-capable launch vehicle. At a minimum, component and element structural dynamic tests are recommended for new vehicle elements. Three viable structural dynamic test options were identified. Modal testing of the new vehicle elements and an integrated vehicle test on the mobile launcher provided the optimal trade between technical, cost, and schedule considerations.
Multiobjective Resource-Constrained Project Scheduling with a Time-Varying Number of Tasks
Abello, Manuel Blanco
2014-01-01
In resource-constrained project scheduling (RCPS) problems, ongoing tasks are restricted to utilizing a fixed number of resources. This paper investigates a dynamic version of the RCPS problem where the number of tasks varies in time. Our previous work investigated a technique called mapping of task IDs for centroid-based approach with random immigrants (McBAR) that was used to solve the dynamic problem. However, the solution-searching ability of McBAR was investigated over only a few instances of the dynamic problem. As a consequence, only a small number of characteristics of McBAR, under the dynamics of the RCPS problem, were found. Further, only a few techniques were compared to McBAR with respect to its solution-searching ability for solving the dynamic problem. In this paper, (a) the significance of the subalgorithms of McBAR is investigated by comparing McBAR to several other techniques; and (b) the scope of investigation in the previous work is extended. In particular, McBAR is compared to a technique called Estimation of Distribution Algorithm (EDA). As with McBAR, EDA is applied to solve the dynamic problem, an application that is unique in the literature. PMID:24883398
Uncertainty management by relaxation of conflicting constraints in production process scheduling
NASA Technical Reports Server (NTRS)
Dorn, Juergen; Slany, Wolfgang; Stary, Christian
1992-01-01
Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
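The fuzzy classification of jobs by importance can be sketched as grading each job's membership in a few fuzzy sets and combining the memberships before ordering schedule construction. The membership shapes, the fuzzy AND choice, and the jobs below are illustrative assumptions, not the paper's steel-plant knowledge base.

```python
# Hedged sketch: fuzzy membership functions for "urgent" and "high value",
# combined with a minimum operator and used to rank jobs for scheduling.
def mu_urgent(slack_hours, tight=4.0, loose=24.0):
    """1 when slack is tight, 0 when loose, linear in between."""
    if slack_hours <= tight:
        return 1.0
    if slack_hours >= loose:
        return 0.0
    return (loose - slack_hours) / (loose - tight)

def mu_high_value(order_value, low=10_000, high=100_000):
    return max(0.0, min(1.0, (order_value - low) / (high - low)))

def importance(slack_hours, order_value):
    # conservative fuzzy AND (minimum) of the two memberships
    return min(mu_urgent(slack_hours), mu_high_value(order_value))

jobs = {"J1": (3, 120_000), "J2": (30, 80_000), "J3": (10, 40_000)}
ranked = sorted(jobs, key=lambda j: importance(*jobs[j]), reverse=True)
print([(j, round(importance(*jobs[j]), 2)) for j in ranked])
```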
Innovative routing and scheduling concepts for transit systems.
DOT National Transportation Integrated Search
1984-01-01
The objective of this research was to investigate innovative routing and scheduling concepts to determine how transit systems in Virginia may improve ridership and reduce operating costs. Information on innovative routing and scheduling concepts was ...
77 FR 23277 - Wekiva River System Advisory Management Committee Meetings (FY2012)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-18
...: Notice of upcoming scheduled meetings. SUMMARY: This notice announces a schedule of upcoming meetings for... 5, 2012 (Recreation Hall). Time: All scheduled meetings will begin at 3 p.m. and will end by 5 p.m... public. Each scheduled meeting will result in decisions and steps that advance the Wekiva River System...
NASA Astrophysics Data System (ADS)
Anseán, D.; Dubarry, M.; Devie, A.; Liaw, B. Y.; García, V. M.; Viera, J. C.; González, M.
2017-07-01
Lithium plating is considered one of the most detrimental phenomena in lithium ion batteries (LIBs), as it increases cell degradation and might lead to safety issues. Plating-induced LIB failure presents a major concern for emerging applications in transportation and electrical energy storage. Hence, the ability to monitor, detect and analyze lithium plating operando becomes critical for safe and reliable usage of LIB systems. Here, we report in situ lithium plating analyses for a commercial graphite||LiFePO4 cell cycled under a dynamic stress test (DST) driving schedule. We designed a framework based on incremental capacity (IC) analysis and mechanistic model simulations to quantify degradation modes, relate their effects to lithium plating occurrence and assess cell degradation. The results show that lithium plating was induced by a large loss of active material on the negative electrode that eventually led the electrode to over-lithiate. Moreover, when lithium plating emerged, we quantified that the pace of loss of lithium inventory increased by a factor of four. This study illustrates the benefits of the proposed framework to improve lithium plating analysis. It also discloses the symptoms of lithium plating formation, which prove valuable for novel, online strategies for early lithium plating detection.
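Incremental capacity analysis, the core signal-processing step mentioned above, differentiates capacity with respect to voltage (dQ/dV) along a charge curve. The synthetic voltage/capacity data and the binning choice below are illustrative only; the paper's framework additionally relies on mechanistic model simulations not sketched here.

```python
# Hedged sketch: compute an incremental capacity (dQ/dV) curve from a
# monotonic synthetic charge curve on a uniform voltage grid.
import numpy as np

capacity = np.linspace(0.0, 2.0, 400)                      # Ah (synthetic)
voltage = 3.2 + 0.25 * capacity + 0.01 * np.sin(4 * np.pi * capacity)  # V, monotonic

def incremental_capacity(q, v, bin_mv=5.0):
    """Return (V grid, dQ/dV) on a uniform voltage grid to suppress noise."""
    grid = np.arange(v.min(), v.max(), bin_mv / 1000.0)
    q_on_grid = np.interp(grid, v, q)      # assumes v is strictly increasing
    return grid, np.gradient(q_on_grid, grid)

v_grid, ic = incremental_capacity(capacity, voltage)
print(f"IC peak of {ic.max():.1f} Ah/V near {v_grid[ic.argmax()]:.3f} V")
```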
Use of Dynamic Models and Operational Architecture to Solve Complex Navy Challenges
NASA Technical Reports Server (NTRS)
Grande, Darby; Black, J. Todd; Freeman, Jared; Sorber, TIm; Serfaty, Daniel
2010-01-01
The United States Navy established 8 Maritime Operations Centers (MOC) to enhance the command and control of forces at the operational level of warfare. Each MOC is a headquarters manned by qualified joint operational-level staffs and enabled by globally interoperable C4I systems. To assess and refine MOC staffing, equipment, and schedules, a dynamic software model was developed. The model leverages pre-existing operational process architecture, joint military task lists that define activities and their precedence relations, and Navy documents that specify manning and roles per activity. The software model serves as a "computational wind-tunnel" in which to test a MOC on a mission, and to refine its structure, staffing, processes, and schedules. More generally, the model supports resource allocation decisions concerning Doctrine, Organization, Training, Materiel, Leadership, Personnel and Facilities (DOTMLPF) at MOCs around the world. A rapid prototype effort efficiently produced this software in less than five months, using an integrated process team consisting of MOC military and civilian staff, modeling experts, and software developers. The work reported here was conducted for Commander, United States Fleet Forces Command in Norfolk, Virginia, code N5-0LW (Operational Level of War), which facilitates the identification, consolidation, and prioritization of MOC capabilities requirements, and implementation and delivery of MOC solutions.
Dynamic force profile in hydraulic hybrid vehicles: a numerical investigation
NASA Astrophysics Data System (ADS)
Mohaghegh-Motlagh, Amin; Elahinia, Mohammad H.
2010-04-01
A hybrid hydraulic vehicle (HHV) combines a hydraulic sub-system with the conventional drivetrain in order to improve fuel economy for heavy vehicles. The added hydraulic module manages the storage and release of fluid power necessary to assist the motion of the vehicle. The power collected by a pump/motor (P/M) from the regenerative braking phase is stored in a high-pressure accumulator and then released by the P/M to the driveshaft during the acceleration phase. This technology is effective in significantly improving fuel-economy for heavy-class vehicles with frequent stop-and-go drive schedules. Despite improved fuel economy and higher vehicle acceleration, noise and vibrations are one of the main problems of these vehicles. The dual function P/Ms are the main source of noise and vibration in a HHV. This study investigates the dynamics of a P/M and particularly the profile and frequency-dependence of the dynamic forces generated by a bent-axis P/M unit. To this end, the fluid dynamics side of the problem has been simplified for investigating the system from a dynamics perspective. A mathematical model of a bent axis P/M has been developed to investigate the cause of vibration and noise in HHVs. The forces are calculated in time and frequency domains. The results of this work can be used to study the vibration response of the chassis and to design effective vibration isolation systems for HHVs.
Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure
NASA Astrophysics Data System (ADS)
Matyska, Ludek; Ruda, Miroslav; Toth, Simon
For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in improving stability and resilience to intermittent, very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the modifications to Torque needed to support the MetaCentrum scheduling architecture are presented as well.
Expert systems tools for Hubble Space Telescope observation scheduling
NASA Technical Reports Server (NTRS)
Miller, Glenn; Rosenthal, Don; Cohen, William; Johnston, Mark
1987-01-01
The utility of expert systems techniques for the Hubble Space Telescope (HST) planning and scheduling is discussed and a plan for development of expert system tools which will augment the existing ground system is described. Additional capabilities provided by these tools will include graphics-oriented plan evaluation, long-range analysis of the observation pool, analysis of optimal scheduling time intervals, constructing sequences of spacecraft activities which minimize operational overhead, and optimization of linkages between observations. Initial prototyping of a scheduler used the Automated Reasoning Tool running on a LISP workstation.
Toward an Autonomous Telescope Network: the TBT Scheduler
NASA Astrophysics Data System (ADS)
Racero, E.; Ibarra, A.; Ocaña, F.; de Lis, S. B.; Ponz, J. D.; Castillo, M.; Sánchez-Portal, M.
2015-09-01
Within the ESA SSA program, it is foreseen to deploy several robotic telescopes to provide surveillance and tracking services for hazardous objects. The TBT project will procure a validation platform for an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor SSA services. In this context, the planning and scheduling of the night consists of two software modules: the TBT Scheduler, which allows the manual and autonomous planning of the night, and the control of the real-time response of the system, handled by the RTS2 internal scheduler. The TBT Scheduler allocates tasks for both telescopes without human intervention. Every night it takes all the inputs needed and prepares the schedule following predefined rules. The main purpose of the scheduler is the distribution of time between follow-up of recently discovered targets and surveys. The TBT Scheduler considers the overall performance of the system and combines follow-up with a priori survey strategies for both kinds of objects. The strategy is defined according to the expected combined performance of both systems during the upcoming night (weather, sky brightness, object accessibility and priority). Therefore, the TBT Scheduler defines the global approach for the network and relies on the RTS2 internal scheduler for the final detailed distribution of tasks at each sensor.
Characterization of Tactical Departure Scheduling in the National Airspace System
NASA Technical Reports Server (NTRS)
Capps, Alan; Engelland, Shawn A.
2011-01-01
This paper discusses and analyzes current-day utilization and performance of the tactical departure scheduling process in the National Airspace System (NAS) to understand the benefits of improving this process. The analysis used operational air traffic data from over 1,082,000 flights during January 2011. Specific metrics included the frequency of tactical departure scheduling, site-specific variance in the technology's utilization, departure time prediction compliance used in the tactical scheduling process, and the performance with which the current system can predict the airborne slot that aircraft are being scheduled into from the airport surface. The operational data analysis described in this paper indicates that significant room for improvement exists in the current system, primarily in the area of reduced departure time prediction uncertainty. Results indicate that a significant number of tactically scheduled aircraft did not meet their scheduled departure slot due to departure time uncertainty. In addition to missed slots, the operational data analysis identified increased controller workload associated with tactical departures that were subject to traffic management manual re-scheduling or controller swaps. An analysis of achievable levels of departure time prediction accuracy obtained by a new integrated surface and tactical scheduling tool is provided to assess the benefit it may provide as a solution to the identified shortfalls. A list of NAS facilities that are likely to receive the greatest benefit from the integrated surface and tactical scheduling technology is also provided.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real-Time Systems so that we determine whether the system, as designed, meets the required
Mission Operations Planning and Scheduling System (MOPSS)
NASA Technical Reports Server (NTRS)
Wood, Terri; Hempel, Paul
2011-01-01
MOPSS is a generic framework that can be configured on the fly to support a wide range of planning and scheduling applications. It is currently used to support seven missions at Goddard Space Flight Center (GSFC) in roles that include science planning, mission planning, and real-time control. Prior to MOPSS, each spacecraft project built its own planning and scheduling capability to plan satellite activities and communications and to create the commands to be uplinked to the spacecraft. This approach required creating a data repository for storing planning and scheduling information, building user interfaces to display data, generating needed scheduling algorithms, and implementing customized external interfaces. Complex scheduling problems that involved reacting to multiple variable situations were analyzed manually. Operators then used the results to add commands to the schedule. Each architecture was unique to specific satellite requirements. MOPSS is an expert system that automates mission operations and frees the flight operations team to concentrate on critical activities. It is easily reconfigured by the flight operations team as the mission evolves. The heart of the system is a custom object-oriented data layer mapped onto an Oracle relational database. The combination of these two technologies allows a user or system engineer to capture any type of scheduling or planning data in the system's generic data storage via a GUI.
Support of the Laboratory for Terrestrial Physics for Dynamics of the Solid Earth (DOSE)
NASA Technical Reports Server (NTRS)
Vandenberg, N. R.; Ma, C. (Technical Monitor)
2002-01-01
This final report summarizes the accomplishments during the contract period. Under the contract, Nepal, Inc. provided support to the VLBI group at NASA's Goddard Space Flight Center. The contract covered a period of approximately eight years during which geodetic and astrometric VLBI evolved through several major changes. This report is divided into five sections that correspond to major task areas in the contract: A) Coordination and Scheduling, B) Field System, C) Station Support, D) Analysis and Research and Development, and E) Computer Support.
2004-06-22
KENNEDY SPACE CENTER, FLA. - The Aura spacecraft on a transporter heads a convoy of vehicles in the predawn hours as it moves to Space Launch Complex 2 on North Vandenberg Air Force Base, Calif. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard a Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
2004-06-22
KENNEDY SPACE CENTER, FLA. - In the predawn hours, the Aura spacecraft is transported the short distance from the Astrotech payload processing facility to Space Launch Complex 2 on North Vandenberg Air Force Base, Calif. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard a Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
2004-06-22
KENNEDY SPACE CENTER, FLA. - In the predawn hours, the Aura spacecraft is being transported from the Astrotech payload processing facility located a few miles south of Space Launch Complex 2 on North Vandenberg Air Force Base, Calif. The latest in the Earth Observing System (EOS) series, Aura is scheduled to launch July 10 aboard a Boeing Delta II rocket. Aura’s four state-of-the-art instruments will study the dynamics of chemistry occurring in the atmosphere. The spacecraft will provide data to help scientists better understand the Earth’s ozone, air quality and climate change.
Dynamically allocating sets of fine-grained processors to running computations
NASA Technical Reports Server (NTRS)
Middleton, David
1988-01-01
Researchers explore an approach to using general purpose parallel computers which involves mapping hardware resources onto computations instead of mapping computations onto hardware. Problems such as processor allocation, task scheduling and load balancing, which have traditionally proven to be challenging, change significantly under this approach and may become amenable to new attacks. Researchers describe the implementation of this approach used by the FFP Machine whose computation and communication resources are repeatedly partitioned into disjoint groups that match the needs of available tasks from moment to moment. Several consequences of this system are examined.
2009-10-01
CAPE CANAVERAL, Fla. – At the Astrotech Space Operations facility in Titusville, Fla., workers secure the Solar Dynamics Observatory, or SDO, onto a work stand during preparations for propulsion system testing and leak checks on the spacecraft. SDO is the first space weather research network mission in NASA's Living With a Star Program. The spacecraft's long-term measurements will give solar scientists in-depth information about changes in the sun's magnetic field and insight into how they affect Earth. Liftoff on an Atlas V rocket is scheduled for Feb. 3, 2010. Photo credit: NASA/Amanda Diller
Self-balancing dynamic scheduling of electrical energy for energy-intensive enterprises
NASA Astrophysics Data System (ADS)
Gao, Yunlong; Gao, Feng; Zhai, Qiaozhu; Guan, Xiaohong
2013-06-01
Balancing production and consumption with self-generation capacity in energy-intensive enterprises has huge economic and environmental benefits. However, it is a challenging task since energy production and consumption must be balanced in real time according to the criteria specified by the power grid. In this article, a mathematical model for minimising the production cost with an exactly realisable energy delivery schedule is formulated, and a dynamic programming (DP)-based self-balancing dynamic scheduling algorithm is developed to obtain the complete solution set for this problem with multiple optimal solutions. For each stage, a set of conditions is established to determine whether a feasible control trajectory exists. The state space under these conditions is partitioned into subsets and each subset is viewed as an aggregate state; the cost-to-go function is then expressed as a function of the initial and terminal generation levels of each stage and is proved to be a staircase function with finite steps. This avoids calculating the cost-to-go of every state and resolves the issue of dimensionality in the DP algorithm. In the backward sweep of the algorithm, an optimal policy is determined to maximise the realisability of the energy delivery schedule across the entire time horizon. Then, in the forward sweep, the feasible region of the optimal policy with the initial and terminal state at each stage is identified. Different feasible control trajectories can be identified based on this region; therefore, optimising over the feasible control trajectories is performed with economic and reliability objectives taken into account.
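As a rough illustration of the backward-sweep/forward-sweep structure the article builds on, the sketch below solves a toy single-unit scheduling problem by plain dynamic programming; the aggregate-state partitioning and realisability conditions of the actual algorithm are not reproduced, and all names and numbers are assumptions.

```python
# A minimal, generic dynamic-programming sketch for scheduling a single
# self-generation unit over a few stages. It illustrates the backward sweep
# (cost-to-go) and forward sweep (trajectory recovery) structure; it is not
# the aggregate-state algorithm of the article. All values are illustrative.
LEVELS = [0, 10, 20, 30, 40]      # discretised generation levels [MW]
DEMAND = [20, 30, 40, 20]         # energy to deliver per stage [MW]
RAMP = 20                         # assumed max change between stages [MW]

def stage_cost(gen, demand):
    # assumed quadratic production cost plus a penalty for imbalance
    return 0.05 * gen ** 2 + 100 * abs(gen - demand)

def solve(levels, demand, ramp):
    T = len(demand)
    INF = float("inf")
    cost_to_go = [{g: INF for g in levels} for _ in range(T + 1)]
    best_next = [{g: None for g in levels} for _ in range(T)]
    cost_to_go[T] = {g: 0.0 for g in levels}

    # Backward sweep: cost-to-go as a function of the stage's starting level.
    for t in range(T - 1, -1, -1):
        for g in levels:
            for g_next in levels:
                if abs(g_next - g) > ramp:
                    continue
                c = stage_cost(g_next, demand[t]) + cost_to_go[t + 1][g_next]
                if c < cost_to_go[t][g]:
                    cost_to_go[t][g] = c
                    best_next[t][g] = g_next

    # Forward sweep: recover one optimal generation trajectory.
    g = min(levels, key=lambda x: cost_to_go[0][x])
    schedule = []
    for t in range(T):
        g = best_next[t][g]
        schedule.append(g)
    return schedule, min(cost_to_go[0].values())

print(solve(LEVELS, DEMAND, RAMP))   # e.g. ([20, 30, 40, 20], total_cost)
```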
NASA Astrophysics Data System (ADS)
Nagata, Takeshi; Tao, Yasuhiro; Utatani, Masahiro; Sasaki, Hiroshi; Fujita, Hideki
This paper proposes a multi-agent approach to maintenance scheduling in restructured power systems. The restructuring of the electric power industry has resulted in market-based approaches for unbundling a multitude of services provided by self-interested entities such as power generating companies (GENCOs), transmission providers (TRANSCOs) and distribution companies (DISCOs). The Independent System Operator (ISO) is responsible for the security of the system operation. The schedules submitted to the ISO by GENCOs and TRANSCOs should satisfy security and reliability constraints. The proposed method consists of several GENCO Agents (GAGs), TRANSCO Agents (TAGs) and an ISO Agent (IAG). The IAG's role in maintenance scheduling is limited to ensuring that the submitted schedules do not cause transmission congestion or endanger the system reliability. From the simulation results, it can be seen that the proposed multi-agent approach can coordinate generation and transmission maintenance schedules.
Separation Assurance and Scheduling Coordination in the Arrival Environment
NASA Technical Reports Server (NTRS)
Aweiss, Arwa S.; Cone, Andrew C.; Holladay, Joshua J.; Munoz, Epifanio; Lewis, Timothy A.
2016-01-01
Separation assurance (SA) automation has been proposed as either a ground-based or an airborne paradigm. The arrival environment is complex because aircraft are being sequenced and spaced to the arrival fix. This paper examines the effect of the allocation of the SA and scheduling functions on the performance of the system. Two coordination configurations between an SA system and an arrival management system are tested using both ground and airborne implementations. All configurations have a conflict detection and resolution (CD&R) system and either an integrated or a separate scheduler. Performance metrics are presented for the ground and airborne systems based on arrival traffic headed to Dallas/Fort Worth International Airport. The total delay, time-spacing conformance, and schedule conformance are used to measure efficiency. The goal of the analysis is to use the metrics to identify performance differences between the configurations that are based on different function allocations. A surveillance range limitation of 100 nmi and a time delay of 30 seconds for sharing updated trajectory intent were implemented for the airborne system. Overall, these results indicate that the surveillance range and the sharing of trajectories and aircraft schedules are important factors in determining the efficiency of an airborne arrival management system. These parameters are not relevant to the ground-based system as modeled for this study because it has instantaneous access to all aircraft trajectories and intent. Creating a schedule external to the CD&R and scheduling conformance system was seen to reduce total delays for the airborne system, and had a minor effect on the ground-based system. The effect of an external scheduler on other metrics was mixed.
NASA Astrophysics Data System (ADS)
Delgado, Francisco; Schumacher, German
2014-08-01
The Large Synoptic Survey Telescope (LSST) is a complex system of systems with demanding performance and operational requirements. The nature of its scientific goals requires a special Observatory Control System (OCS) and particularly a very specialized automatic Scheduler. The OCS Scheduler is an autonomous software component that drives the survey, selecting the detailed sequence of visits in real time, taking into account multiple science programs, the current external and internal conditions, and the history of observations. We have developed a SysML model for the OCS Scheduler that fits coherently in the OCS and LSST integrated model. We have also developed a prototype of the Scheduler that implements the scheduling algorithms in the simulation environment provided by the Operations Simulator, where the environment and the observatory are modeled with real weather data and detailed kinematics parameters. This paper expands on the Scheduler architecture and the proposed algorithms to achieve the survey goals.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocations for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its applications to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1993-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
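A minimal sketch of the constraint-based iterative repair loop described in the GERRY abstracts above, under simplified assumptions: a schedule is a map from task to start time, constraints report violations together with a repair hint, and the loop repeatedly repairs one violated constraint. The constraint types and the repair move are illustrative, not the GERRY constraint language.

```python
# A minimal sketch of constraint-based iterative repair: a schedule is a map
# of task -> start time, constraints report violations, and the loop repeatedly
# picks a violated constraint and shifts the offending task. The constraints
# and repair move are simplified assumptions.
import random

DURATION = {"inspect": 3, "repair": 2, "fuel": 4}

# Hard rule: "repair" must start after "inspect" finishes.
# Preference: "fuel" should not start before time 5.
def violations(schedule):
    broken = []
    if schedule["repair"] < schedule["inspect"] + DURATION["inspect"]:
        broken.append(("repair", schedule["inspect"] + DURATION["inspect"]))
    if schedule["fuel"] < 5:
        broken.append(("fuel", 5))
    return broken

def iterative_repair(schedule, max_iters=100):
    for _ in range(max_iters):
        broken = violations(schedule)
        if not broken:
            return schedule
        task, earliest = random.choice(broken)   # pick one violation to repair
        schedule[task] = earliest                # shift the task just far enough
    return schedule

print(iterative_repair({"inspect": 0, "repair": 1, "fuel": 2}))
# -> e.g. {'inspect': 0, 'repair': 3, 'fuel': 5}
```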
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software will use DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
Testing Task Schedulers on Linux System
NASA Astrophysics Data System (ADS)
Jelenković, Leonardo; Groš, Stjepan; Jakobović, Domagoj
Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first is to identify which properties of the scheduler to test. The second is how to perform the testing, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a test framework for testing task schedulers and presents one potential solution. The behavior observed is that of the scheduler used for “normal” task scheduling (SCHED_OTHER), as opposed to the one used for real-time tasks (SCHED_FIFO, SCHED_RR).
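For readers who want to poke at the policies named above, the snippet below uses Python's os.sched_* wrappers (Linux-only) to query the current policy and attempt a switch to SCHED_FIFO. Switching to a real-time policy normally requires root or CAP_SYS_NICE; this is only an API illustration, not the test framework presented in the paper.

```python
# A small probe of the Linux scheduling policies mentioned above, using
# Python's os.sched_* wrappers (Linux-only). The real-time part is wrapped in
# a try/except because it normally needs elevated privileges.
import os

pid = 0  # 0 means "the calling process"
policy_names = {
    os.SCHED_OTHER: "SCHED_OTHER",
    os.SCHED_FIFO: "SCHED_FIFO",
    os.SCHED_RR: "SCHED_RR",
}

current = os.sched_getscheduler(pid)
print("current policy:", policy_names.get(current, current))

try:
    # Request the real-time FIFO policy at its maximum priority.
    prio = os.sched_get_priority_max(os.SCHED_FIFO)
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(prio))
    print("now running under SCHED_FIFO, priority", prio)
    # ... run the workload to be measured here ...
    os.sched_setscheduler(pid, os.SCHED_OTHER, os.sched_param(0))
except PermissionError:
    print("insufficient privileges for SCHED_FIFO; staying on",
          policy_names.get(current, current))
```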
Contingency rescheduling of spacecraft operations
NASA Technical Reports Server (NTRS)
Britt, Daniel L.; Geoffroy, Amy L.; Gohring, John R.
1988-01-01
Spacecraft activity scheduling has recently been a focus of attention in artificial intelligence. Several scheduling systems have been devised which more or less successfully address various aspects of the activity scheduling problem, though most of these are not yet mature, with the notable exception of NASA's ESP. Few current scheduling systems, however, make any attempt to deal fully with the problem of modifying a schedule in near-real-time in the event of contingencies which may arise during schedule execution. These contingencies can include resources becoming unavailable unpredictably, a change in spacecraft conditions or environment, or the need to perform an activity that was not scheduled. In these cases it becomes necessary to repair an existing schedule, disrupting ongoing operations as little as possible. Normal scheduling is just a part of what must be accomplished during contingency rescheduling. A prototype system named MAESTRO was developed for spacecraft activity scheduling. MAESTRO is briefly described with a focus on recent work in the area of real-time contingency handling. Included is a discussion of some of the complexities of the scheduling problem and how they affect contingency rescheduling, such as temporal constraints between activities, activities which may be interrupted and continued in any of several ways, and different ways to choose a resource complement which will allow continuation of an activity. The various heuristics used in MAESTRO for contingency rescheduling are discussed, as are operational concerns such as interaction of the scheduler with spacecraft subsystem controllers.
Automation of the space station core module power management and distribution system
NASA Technical Reports Server (NTRS)
Weeks, David J.
1988-01-01
Under the Advanced Development Program for Space Station, Marshall Space Flight Center has been developing advanced automation applications for the Power Management and Distribution (PMAD) system inside the Space Station modules for the past three years. The Space Station Module Power Management and Distribution System (SSM/PMAD) test bed features three artificial intelligence (AI) systems coupled with conventional automation software functioning in an autonomous or closed-loop fashion. The AI systems in the test bed include a baseline scheduler/dynamic rescheduler (LES), a load shedding management system (LPLMS), and a fault recovery and management expert system (FRAMES). This test bed will be part of the NASA Systems Autonomy Demonstration for 1990 featuring cooperating expert systems in various Space Station subsystem test beds. It is concluded that advanced automation technology involving AI approaches is sufficiently mature to begin applying the technology to current and planned spacecraft applications including the Space Station.
Fair Energy Scheduling for Vehicle-to-Grid Networks Using Adaptive Dynamic Programming.
Xie, Shengli; Zhong, Weifeng; Xie, Kan; Yu, Rong; Zhang, Yan
2016-08-01
Research on the smart grid is being given enormous support worldwide due to its great significance in solving environmental and energy crises. Electric vehicles (EVs), which are powered by clean energy, are being adopted increasingly year by year. It is predictable that the huge charge load caused by high EV penetration will have a considerable impact on the reliability of the smart grid. Therefore, fair energy scheduling for EV charge and discharge is proposed in this paper. By using the vehicle-to-grid technology, the scheduler controls the electricity loads of EVs considering fairness in the residential distribution network. We propose contribution-based fairness, in which EVs with high contributions have high priorities to obtain charge energy. The contribution value is defined by both the charge/discharge energy and the timing of the action. EVs can achieve higher contribution values when discharging during the load peak hours. However, charging during this time will decrease the contribution values seriously. We formulate the fair energy scheduling problem as an infinite-horizon Markov decision process. The methodology of adaptive dynamic programming is employed to maximize the long-term fairness by performing online network training. The numerical results illustrate that the proposed EV energy scheduling is able to mitigate and flatten the peak load in the distribution network. Furthermore, contribution-based fairness achieves a fast recovery of EV batteries that have been deeply discharged and guarantees fairness in the full charge time of all EVs.
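A toy sketch of the contribution-based fairness idea summarised above: contribution rises for discharging during assumed peak hours, falls for charging then, and charge energy is granted in contribution order. The peak window, weighting, and allocation cap are assumptions, not the paper's formulation.

```python
# Minimal sketch of contribution-based fairness: an EV's contribution grows
# when it discharges during peak hours and shrinks when it charges then, and
# charge energy is granted in order of contribution. Numbers are assumptions.
from dataclasses import dataclass, field

PEAK_HOURS = set(range(18, 22))   # assumed evening load peak

@dataclass
class EV:
    name: str
    contribution: float = 0.0
    history: list = field(default_factory=list)

    def record(self, hour: int, energy_kwh: float):
        """energy_kwh > 0 means discharge to the grid, < 0 means charge."""
        weight = 2.0 if hour in PEAK_HOURS else 1.0   # assumed peak weighting
        self.contribution += weight * energy_kwh
        self.history.append((hour, energy_kwh))

def allocate_charge(evs, available_kwh, per_ev_cap=10.0):
    """Grant charge energy to the highest-contribution EVs first."""
    grants = {}
    for ev in sorted(evs, key=lambda e: e.contribution, reverse=True):
        grant = min(per_ev_cap, available_kwh)
        grants[ev.name] = grant
        available_kwh -= grant
    return grants

a, b = EV("ev-a"), EV("ev-b")
a.record(19, +4.0)    # discharged during the peak: contribution rises fast
b.record(19, -4.0)    # charged during the peak: contribution drops fast
print(allocate_charge([a, b], available_kwh=15.0))  # ev-a is served first
```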
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
CARMENES instrument control system and operational scheduler
NASA Astrophysics Data System (ADS)
Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar
2014-07-01
The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1 m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope of the Calar Alto Observatory (Spain) and will be equipped with two spectrographs covering the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems, providing a tool to operate the instrument in an integrated manner from low to high user interaction level. The ICS interacts with the following subsystems: the near-IR and visible channels, composed of the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target visibility, sky background, required time sampling coverage) and the dynamic change of conditions (i.e., weather, system conditions). Off-line and on-line strategies are integrated into a single tool for a suitable transfer of the target prioritization made by the science team to the real-time schedule that will be used by the instrument operators. A suitable solution is expected to increase the efficiency of telescope operations, which will represent an important benefit in terms of scientific return and operational costs. We present the operational scheduling tool designed for CARMENES, which is based on two algorithms combining a global and a local search: Genetic Algorithms and Hill Climbing astronomy-based heuristics, respectively. The algorithm explores a large number of potential solutions from the vast search space and is able to identify the most efficient ones. A planning solution is considered efficient when it optimizes the objectives defined, which, in our case, are related to the reduction of the time that the telescope is not in use and the maximization of the scientific return, measured in terms of the time coverage of each target in the survey. We present the results obtained using different test cases.
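The global-plus-local search strategy mentioned above can be illustrated with a toy example: a genetic algorithm proposes observation orders, and a hill-climbing pass refines the best one. The fitness function and all parameters below are assumptions for illustration and do not represent the CARMENES scheduler.

```python
# Toy sketch of a two-stage search: a genetic algorithm proposes observation
# orders globally, then a hill-climbing pass refines the best individual
# locally. The fitness (minimise "slew" between consecutive targets) and all
# parameters are illustrative assumptions.
import random

TARGETS = {f"star-{i}": random.uniform(0, 24) for i in range(12)}  # fake hour angles

def fitness(order):
    # Less total "slew" between consecutive targets == better schedule.
    slew = sum(abs(TARGETS[a] - TARGETS[b]) for a, b in zip(order, order[1:]))
    return -slew

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))
    head = p1[:cut]
    return head + [t for t in p2 if t not in head]

def mutate(order, rate=0.2):
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def genetic_search(pop_size=30, generations=40):
    pop = [random.sample(list(TARGETS), len(TARGETS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

def hill_climb(order):
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            swapped = order[:]
            swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
            if fitness(swapped) > fitness(order):
                order, improved = swapped, True
    return order

best = hill_climb(genetic_search())
print("schedule:", best, "score:", round(fitness(best), 2))
```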
COMPASS: A general purpose computer aided scheduling tool
NASA Technical Reports Server (NTRS)
Mcmahon, Mary Beth; Fox, Barry; Culbert, Chris
1991-01-01
COMPASS is a generic scheduling system developed by McDonnell Douglas under the direction of the Software Technology Branch at JSC. COMPASS is intended to illustrate the latest advances in scheduling technology and to provide a basis from which custom scheduling systems can be built. COMPASS was written in Ada to promote readability and to conform to potential NASA Space Station Freedom standards. COMPASS has some unique characteristics that distinguish it from commercial products. These characteristics are discussed and used to illustrate some differences between scheduling tools.
COMPASS: An Ada based scheduler
NASA Technical Reports Server (NTRS)
Mcmahon, Mary Beth; Culbert, Chris
1992-01-01
COMPASS is a generic scheduling system developed by McDonnell Douglas and funded by the Software Technology Branch of NASA Johnson Space Center. The motivation behind COMPASS is to illustrate scheduling technology and provide a basis from which custom scheduling systems can be built. COMPASS was written in Ada to promote readability and to conform to DOD standards. COMPASS has some unique characteristics that distinguish it from commercial products. This paper discusses these characteristics and uses them to illustrate some differences between scheduling tools.
Expert mission planning and replanning scheduling system for NASA KSC payload operations
NASA Technical Reports Server (NTRS)
Pierce, Roger
1987-01-01
EMPRESS (Expert Mission Planning and REplanning Scheduling System) is an expert system created to assist payload mission planners at Kennedy in the long-range planning and scheduling of horizontal payloads for space shuttle flights. Using the current flight manifest, these planners develop mission and payload schedules detailing all processing to be performed in the Operations and Checkout building at Kennedy. With the EMPRESS system, schedules are generated quickly using standard flows that represent the tasks and resources required to process a specific horizontal carrier. Resources can be tracked, and resource conflicts can be determined and resolved interactively. Constraint relationships between tasks are maintained and can be enforced when a task is moved or rescheduled. The domain, structure, and functionality of the EMPRESS system are briefly described. The limitations of the EMPRESS system are described, as are improvements expected with the EMPRESS-2 development.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.
2014-01-01
Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation and is therefore called Adaptive Augmentation Control (AAC). The loop gain is increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large, whereas it is decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop theoretically based heuristic tuning methods for the adaptive law gain parameters. Classical launch vehicle flight controller design techniques are based on gain scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g., winds) and structural perturbations (e.g., vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it applies only to constant dispersions of the loop gain, because the GM is based on frequency-domain analysis, which is applicable only to LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well known that the LTI stability criterion is neither necessary nor sufficient when applied to a Linear Time-Varying (LTV) system in a frozen-time fashion. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.
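A minimal, hypothetical sketch of a bounded, rate-limited loop-gain adaptation of the kind summarised above: the gain is raised on large tracking error, lowered when a flex/slosh-like excitation metric is high, and clamped to a range. Thresholds, signals and constants are assumptions, not the flight AAC law.

```python
# Illustrative loop-gain adaptation: raise the gain on large tracking error,
# lower it when flex/slosh-like excitation is detected, and apply rate and
# range limits. All constants and signals are assumptions.
import math

K_MIN, K_MAX = 0.5, 2.0      # assumed allowable loop-gain range
RATE_LIMIT = 0.5             # assumed max |dk/dt| per second
DT = 0.02                    # control period [s]

def adapt_gain(k, tracking_err, flex_metric,
               err_gain=0.8, flex_gain=1.5, err_dead=0.1, flex_thresh=0.2):
    """One adaptation step: returns the new loop gain."""
    dk = 0.0
    if abs(tracking_err) > err_dead:          # large error -> raise gain
        dk += err_gain * (abs(tracking_err) - err_dead)
    if flex_metric > flex_thresh:             # flex/slosh excitation -> lower gain
        dk -= flex_gain * (flex_metric - flex_thresh)
    dk = max(-RATE_LIMIT, min(RATE_LIMIT, dk))        # rate limit
    return max(K_MIN, min(K_MAX, k + dk * DT))        # range limit

# Toy run: a slowly varying tracking error followed by detected flex excitation.
k = 1.0
for t in range(200):
    err = 0.3 * math.sin(0.05 * t)            # pretend tracking error
    flex = 0.4 if 120 < t < 160 else 0.0      # pretend flex-mode indicator
    k = adapt_gain(k, err, flex)
print("final loop gain:", round(k, 3))
```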
Automated Planning and Scheduling for Planetary Rover Distributed Operations
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve
1999-01-01
Automated planning and scheduling, including automated path planning, has been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provides path planning. The system was demonstrated on the Rocky 7 research rover at JPL.
76 FR 77895 - Schedules of Controlled Substances: Placement of Ezogabine Into Schedule V
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-15
... ester, is a new chemical substance with central nervous system depressant properties and is classified... nervous system as an anticonvulsant and the potential side effects of the drug therein, warrant closer... the central nervous system is alone not enough to merit its inclusion into Schedule IV of the CSA, nor...
Experience with dynamic reinforcement rates decreases resistance to extinction.
Craig, Andrew R; Shahan, Timothy A
2016-03-01
The ability of organisms to detect reinforcer-rate changes in choice preparations is positively related to two factors: the magnitude of the change in rate and the frequency with which rates change. Gallistel (2012) suggested similar rate-detection processes are responsible for decreases in responding during operant extinction. Although effects of magnitude of change in reinforcer rate on resistance to extinction are well known (e.g., the partial-reinforcement-extinction effect), effects of frequency of changes in rate prior to extinction are unknown. Thus, the present experiments examined whether frequency of changes in baseline reinforcer rates impacts resistance to extinction. Pigeons pecked keys for variable-interval food under conditions where reinforcer rates were stable and where they changed within and between sessions. Overall reinforcer rates between conditions were controlled. In Experiment 1, resistance to extinction was lower following exposure to dynamic reinforcement schedules than to static schedules. Experiment 2 showed that resistance to presession feeding, a disruptor that should not involve change-detection processes, was unaffected by baseline-schedule dynamics. These findings are consistent with the suggestion that change detection contributes to extinction. We discuss implications of change-detection processes for extinction of simple and discriminated operant behavior and relate these processes to the behavioral-momentum based approach to understanding extinction. © 2016 Society for the Experimental Analysis of Behavior.
Two is better than one; toward a rational design of combinatorial therapy.
Chen, Sheng-Hong; Lahav, Galit
2016-12-01
Drug combination is an appealing strategy for combating the heterogeneity of tumors and the evolution of drug resistance. However, the rationale underlying combinatorial therapy is often not well established due to a lack of understanding of the specific pathways responding to the drugs and of their temporal dynamics following each treatment. Here we present several emerging trends in harnessing properties of biological systems for the optimal design of drug combinations, including the type of drugs, specific concentration, sequence of addition and the temporal schedule of treatments. We highlight recent studies showing different approaches for efficient design of drug combinations, including single-cell signaling dynamics, adaptation and pathway crosstalk. Finally, we discuss novel and feasible approaches that can facilitate the optimal design of combinatorial therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.
System control of an autonomous planetary mobile spacecraft
NASA Technical Reports Server (NTRS)
Dias, William C.; Zimmerman, Barbara A.
1990-01-01
The goal is to suggest the scheduling and control functions necessary for accomplishing mission objectives of a fairly autonomous interplanetary mobile spacecraft, while maximizing reliability. Goals are to provide an extensible, reliable system conservative in its use of on-board resources, while getting full value from subsystem autonomy and avoiding the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested which is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions, is introduced.
Space communications scheduler: A rule-based approach to adaptive deadline scheduling
NASA Technical Reports Server (NTRS)
Straguzzi, Nicholas
1990-01-01
Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communication Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high-density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
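The adaptive deadline idea can be illustrated with a generic earliest-deadline-first sketch: requests carry deadlines and durations, a single resource is assigned greedily, and a last-minute request is handled by re-running the queue. This is not the rule-based SCS itself; all names and numbers are assumptions.

```python
# Generic earliest-deadline-first sketch: requests carry deadlines and
# durations, one communications resource is assigned greedily, and requests
# that cannot meet their deadline are rejected. Names and numbers are assumed.
import heapq

def schedule(requests):
    """requests: list of (deadline, duration, name). Returns placed/rejected."""
    heap = list(requests)
    heapq.heapify(heap)                 # order by earliest deadline
    clock, placed, rejected = 0, [], []
    while heap:
        deadline, duration, name = heapq.heappop(heap)
        if clock + duration <= deadline:
            placed.append((name, clock, clock + duration))
            clock += duration
        else:
            rejected.append(name)       # cannot meet its deadline
    return placed, rejected

jobs = [(10, 4, "downlink-A"), (6, 3, "telemetry-B"), (14, 5, "uplink-C")]
print(schedule(jobs))
# A last-minute request is handled by re-running with the new job added.
print(schedule(jobs + [(8, 2, "urgent-D")]))
```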
FALCON: A distributed scheduler for MIMD architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimshaw, A.S.; Vivas, V.E. Jr.
1991-01-01
This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.
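A small sketch of sender-initiated load sharing in the spirit of the FALCON description above: a node whose queue exceeds a threshold probes a few random peers and transfers the new task to the least-loaded one, after which the assignment is static. The threshold and probe count are assumptions.

```python
# Sender-initiated load sharing sketch: an overloaded node probes a few random
# peers and transfers work to the least-loaded one; otherwise the task stays
# local. Thresholds and probe count are illustrative assumptions.
import random
from collections import deque

class Node:
    def __init__(self, name, threshold=3, probes=2):
        self.name, self.threshold, self.probes = name, threshold, probes
        self.queue = deque()

    def submit(self, task, peers):
        if len(self.queue) < self.threshold or not peers:
            self.queue.append(task)            # keep it local
            return self.name
        # Sender-initiated: the overloaded node probes a few random peers.
        candidates = random.sample(peers, min(self.probes, len(peers)))
        target = min(candidates, key=lambda n: len(n.queue))
        if len(target.queue) < len(self.queue):
            target.queue.append(task)          # transfer once; assignment is then static
            return target.name
        self.queue.append(task)
        return self.name

nodes = [Node(f"node-{i}") for i in range(4)]
sender = nodes[0]
for i in range(10):
    where = sender.submit(f"task-{i}", peers=nodes[1:])
    print(f"task-{i} -> {where}")
```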
Analysis of information systems for hydropower operations
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W. W. G.
1976-01-01
The operations of hydropower systems were analyzed with emphasis on water resource management, to determine how aerospace derived information system technologies can increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept outlined.
Analysis of information systems for hydropower operations: Executive summary
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Becker, L.; Estes, J.; Simonett, D.; Yeh, W.
1976-01-01
An analysis was performed of the operations of hydropower systems, with emphasis on water resource management, to determine how aerospace derived information system technologies can effectively increase energy output. Better utilization of water resources was sought through improved reservoir inflow forecasting based on use of hydrometeorologic information systems with new or improved sensors, satellite data relay systems, and use of advanced scheduling techniques for water release. Specific mechanisms for increased energy output were determined, principally the use of more timely and accurate short term (0-7 days) inflow information to reduce spillage caused by unanticipated dynamic high inflow events. The hydrometeorologic models used in predicting inflows were examined in detail to determine the sensitivity of inflow prediction accuracy to the many variables employed in the models, and the results were used to establish information system requirements. Sensor and data handling system capabilities were reviewed and compared to the requirements, and an improved information system concept was outlined.
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks have been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to the limitations of WSNs such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA), with a well-designed particle position code and fitness function, is proposed. A mutation operator which can effectively improve the algorithm's ability of global search and population diversity is also introduced in this algorithm. Finally, the simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971
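As a rough illustration of a discrete PSO for task-to-node assignment (with a mutation step for diversity), the sketch below uses a simplified position coding and a load-balancing fitness; it does not reproduce the paper's dynamic-alliance model or its DPSO-DA coding.

```python
# Compact, simplified discrete-PSO sketch for assigning tasks to sensor nodes.
# The position coding (task -> node index), fitness, and update rule are
# simplified assumptions, not the paper's DPSO-DA formulation.
import random

N_TASKS, N_NODES = 12, 4
LOAD = [random.randint(1, 5) for _ in range(N_TASKS)]   # fake task costs

def fitness(assign):
    node_load = [0] * N_NODES
    for task, node in enumerate(assign):
        node_load[node] += LOAD[task]
    return -max(node_load)            # balance load: a smaller peak is better

def update(particle, pbest, gbest, w_p=0.4, w_g=0.4, w_mut=0.1):
    new = []
    for i in range(N_TASKS):
        r = random.random()
        if r < w_p:
            new.append(pbest[i])                     # move toward personal best
        elif r < w_p + w_g:
            new.append(gbest[i])                     # move toward global best
        elif r < w_p + w_g + w_mut:
            new.append(random.randrange(N_NODES))    # mutation for diversity
        else:
            new.append(particle[i])                  # keep current component
    return new

swarm = [[random.randrange(N_NODES) for _ in range(N_TASKS)] for _ in range(20)]
pbests = list(swarm)
gbest = max(swarm, key=fitness)
for _ in range(100):
    for k, particle in enumerate(swarm):
        swarm[k] = update(particle, pbests[k], gbest)
        if fitness(swarm[k]) > fitness(pbests[k]):
            pbests[k] = swarm[k]
    gbest = max(pbests, key=fitness)

print("best assignment:", gbest, "peak node load:", -fitness(gbest))
```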
Automating Mid- and Long-Range Scheduling for the NASA Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system was designed and deployed as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users of the DSN. These users represent not only NASA's deep space missions, but also international partners and ground-based science and calibration users. The initial implementation of S³ is complete and the system has been operational since July 2011. This paper describes some key aspects of the S³ system, the challenges of modeling complex scheduling requirements, and the ongoing extension of S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-11-16
In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags via operating at different time slots or frequency channels to decrease the signal interferences. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range.
Duan, Litian; Wang, Zizhong John; Duan, Fu
2016-01-01
In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags via operating at different time slots or frequency channels to decrease the signal interferences. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly optimizes these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range. PMID:27854342
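A toy sketch of the first stage described in the two entries above: candidate reader-to-slot schemes are sampled from a truncated geometric distribution and scored with a fairness measure, here Jain's index, so the fairest scheme can be kept. The parameters and the use of Jain's index are assumptions; the immune-system optimisation stage is not reproduced.

```python
# Candidate reader schedules are generated by sampling time slots from a
# truncated geometric distribution and scored with Jain's fairness index so
# the fairest, least-colliding scheme is kept. Parameters are assumptions.
import random

N_READERS, N_SLOTS, P_GEOM = 8, 4, 0.5

def geometric_slot(p=P_GEOM, n_slots=N_SLOTS):
    """Sample a slot index from a truncated geometric distribution."""
    k = 0
    while random.random() > p and k < n_slots - 1:
        k += 1
    return k

def jain_index(counts):
    total = sum(counts)
    return total ** 2 / (len(counts) * sum(c * c for c in counts)) if total else 1.0

def candidate_scheme():
    scheme = [geometric_slot() for _ in range(N_READERS)]
    per_slot = [scheme.count(s) for s in range(N_SLOTS)]
    # Fairer slot usage means fewer readers sharing (and interfering in) a slot.
    return scheme, jain_index(per_slot)

best_scheme, best_fairness = max((candidate_scheme() for _ in range(200)),
                                 key=lambda sf: sf[1])
print("reader -> slot:", best_scheme, "fairness:", round(best_fairness, 3))
```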
Simplifying Facility and Event Scheduling: Saving Time and Money.
ERIC Educational Resources Information Center
Raasch, Kevin
2003-01-01
Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides example of the school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)