Sample records for model-based schedule design

  1. Future aircraft networks and schedules

    NASA Astrophysics Data System (ADS)

    Shu, Yan

    2011-07-01

    Because of the importance of air transportation scheduling, the emergence of small aircraft, and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results of these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas---model, computational method, and simulation for validation---is proposed.

  2. Taking the Lag out of Jet Lag through Model-Based Schedule Design

    PubMed Central

    Dean, Dennis A.; Forger, Daniel B.; Klerman, Elizabeth B.

    2009-01-01

    Travel across multiple time zones results in desynchronization of environmental time cues and the sleep–wake schedule from their normal phase relationships with the endogenous circadian system. Circadian misalignment can result in poor neurobehavioral performance, decreased sleep efficiency, and inappropriately timed physiological signals including gastrointestinal activity and hormone release. Frequent and repeated transmeridian travel is associated with long-term cognitive deficits, and rodents experimentally exposed to repeated schedule shifts have increased death rates. One approach to reduce the short-term circadian, sleep–wake, and performance problems is to use mathematical models of the circadian pacemaker to design countermeasures that rapidly shift the circadian pacemaker to align with the new schedule. In this paper, the use of mathematical models to design sleep–wake and countermeasure schedules for improved performance is demonstrated. We present an approach to designing interventions that combines an algorithm for optimal placement of countermeasures with a novel mode of schedule representation. With these methods, rapid circadian resynchrony and the resulting improvement in neurobehavioral performance can be quickly achieved even after moderate to large shifts in the sleep–wake schedule. The key schedule design inputs are endogenous circadian period length, desired sleep–wake schedule, length of intervention, background light level, and countermeasure strength. The new schedule representation facilitates schedule design, simulation studies, and experiment design and significantly decreases the amount of time to design an appropriate intervention. The method presented in this paper has direct implications for designing jet lag, shift-work, and non-24-hour schedules, including scheduling for extreme environments, such as in space, undersea, or in polar regions. PMID:19543382
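
    A minimal sketch of the kind of schedule simulation this enables, assuming a toy phase-only circadian oscillator with a crude light-response term in place of the authors' validated pacemaker model; the period, gain, light schedule, and phase-response form are all illustrative assumptions.

      import numpy as np

      # Illustrative phase-only circadian model: the pacemaker phase advances at
      # 2*pi/TAU per hour and is nudged by scheduled bright-light countermeasures.
      # This is NOT the published pacemaker model, just a toy stand-in.
      TAU = 24.2     # assumed intrinsic circadian period (hours)
      GAIN = 0.12    # assumed light-induced phase-shift gain

      def simulate_phase(light_schedule, hours=120, dt=0.1):
          """Integrate pacemaker phase under a light schedule.

          light_schedule(t) returns a normalized light level in [0, 1].
          Returns arrays of time and unwrapped phase (radians)."""
          n = int(hours / dt)
          t = np.arange(n) * dt
          phase = np.zeros(n)
          for k in range(1, n):
              drift = 2.0 * np.pi / TAU
              # crude, phase-dependent light drive (illustrative, not a published PRC)
              push = GAIN * light_schedule(t[k]) * np.sin(phase[k - 1])
              phase[k] = phase[k - 1] + (drift + push) * dt
          return t, phase

      # Example countermeasure: 3 h of bright light starting at hour 6 of each new
      # local day, intended to shift the clock after an eastward schedule change.
      def light(t):
          hour_of_day = t % 24.0
          return 1.0 if 6.0 <= hour_of_day < 9.0 else 0.1

      t, phase = simulate_phase(light)
      net_shift_hours = (phase[-1] - 2 * np.pi * t[-1] / TAU) * TAU / (2 * np.pi)
      print("net phase shift over 5 days (hours):", round(float(net_shift_hours), 2))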

  3. Modeling Off-Nominal Recovery in NextGen Terminal-Area Operations

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.

    2011-01-01

    Robust schedule-based arrival management requires efficient recovery from off-nominal situations. This paper presents research on modeling off-nominal situations and plans for recovering from them using TRAC, a route/airspace design, fast-time simulation, and analysis tool for studying NextGen trajectory-based operations. The paper provides an overview of a schedule-based arrival-management concept and supporting controller tools, then describes TRAC implementations of methods for constructing off-nominal scenarios, generating trajectory options to meet scheduling constraints, and automatically producing recovery plans.

  4. Application of precomputed control laws in a reconfigurable aircraft flight control system

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Halyo, Nesim; Broussard, John R.; Caglayan, Alper K.

    1989-01-01

    A self-repairing flight control system concept in which the control law is reconfigured after actuator and/or control surface damage to preserve stability and pilot command tracking is described. A key feature of the controller is reconfigurable multivariable feedback. The feedback gains are designed off-line and scheduled as a function of the aircraft control impairment status so that reconfiguration is performed simply by updating the gain schedule after detection of an impairment. A novel aspect of the gain schedule design procedure is that the schedule is calculated using a linear quadratic optimization-based simultaneous stabilization algorithm in which the scheduled gain is constrained to stabilize a collection of plant models representing the aircraft in various control failure modes. A description and numerical evaluation of a controller design for a model of a statically unstable high-performance aircraft are given.
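
    The paper's gain schedule comes from a constrained simultaneous-stabilization algorithm; as a rough sketch of the off-line, impairment-indexed gain table idea only, the code below computes an ordinary LQR gain separately for each failure-mode model and switches between them. The plant matrices and weights are hypothetical.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          """Standard continuous-time LQR gain K = R^{-1} B^T P."""
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)

      # Hypothetical statically unstable short-period-like model and an impairment
      # variant with reduced control-surface effectiveness (purely illustrative).
      A = np.array([[-0.5, 1.0],
                    [ 2.0, -1.2]])
      B_nominal  = np.array([[0.0], [4.0]])
      B_impaired = np.array([[0.0], [1.5]])   # surface effectiveness reduced

      Q = np.diag([10.0, 1.0])
      R = np.array([[1.0]])

      gain_schedule = {
          "nominal":  lqr_gain(A, B_nominal,  Q, R),
          "impaired": lqr_gain(A, B_impaired, Q, R),
      }

      # Reconfiguration amounts to switching the feedback gain once an impairment
      # is detected; no gains are redesigned on-line.
      status = "impaired"
      K = gain_schedule[status]
      x = np.array([0.1, -0.05])     # current state estimate
      u = -K @ x                     # reconfigured control command
      print(status, "gain:", np.round(K, 3), "command:", np.round(u, 4))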

  5. Multiobjective optimisation design for enterprise system operation in the case of scheduling problem with deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Hongfeng; Fu, Yaping; Huang, Min; Wang, Junwei

    2016-03-01

    The operation process design is one of the key issues in the manufacturing and service sectors. As a typical operation process, scheduling with consideration of the deteriorating effect has been widely studied; however, the current literature has only studied single function requirements and rarely considered the multiple function requirements that are critical for a real-world scheduling process. In this article, two function requirements are incorporated in the design of a scheduling process with consideration of the deteriorating effect and then formulated into two objectives of a mathematical programming model. A novel multiobjective evolutionary algorithm is proposed to solve this model, combining three strategies: a multiple-population scheme, a rule-based local search method, and an elitist preservation strategy. To validate the proposed model and algorithm, a series of randomly generated instances is tested, and the experimental results indicate that the model is effective and that the proposed algorithm achieves satisfactory performance, outperforming other state-of-the-art multiobjective evolutionary algorithms, such as nondominated sorting genetic algorithm II and the multiobjective evolutionary algorithm based on decomposition, on all the test instances.
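
    A small sketch of the scheduling setting itself (not the evolutionary algorithm): jobs whose actual processing time deteriorates linearly with their start time, evaluated against two illustrative objectives and filtered for Pareto-optimal sequences. The deterioration form and job data are assumptions.

      from itertools import permutations

      # Single-machine schedule with linearly deteriorating jobs: the actual
      # processing time of job j started at time s is p_j = a_j + b_j * s.
      jobs = {              # job: (base time a_j, deterioration rate b_j, due date)
          "J1": (4.0, 0.10, 10.0),
          "J2": (3.0, 0.25, 12.0),
          "J3": (6.0, 0.05, 15.0),
      }

      def evaluate(sequence):
          t = 0.0
          tardiness = 0.0
          for j in sequence:
              a, b, due = jobs[j]
              t += a + b * t                 # deteriorating processing time
              tardiness += max(0.0, t - due)
          return t, tardiness                # (makespan, total tardiness)

      def dominates(obj_x, obj_y):
          """Pareto dominance for minimization of both objectives."""
          return all(x <= y for x, y in zip(obj_x, obj_y)) and obj_x != obj_y

      results = {seq: evaluate(seq) for seq in permutations(jobs)}
      pareto = [s for s, o in results.items()
                if not any(dominates(o2, o) for o2 in results.values())]
      for s in pareto:
          print(s, "makespan=%.2f tardiness=%.2f" % results[s])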

  6. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.

    PubMed

    Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel

    2018-02-20

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he tends to neglect, or give less weight to, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.

  7. A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints

    PubMed Central

    Navet, Nicolas; Havet, Lionel

    2018-01-01

    Model-Driven Engineering (MDE) is widely applied in industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he tends to neglect, or give less weight to, the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on a timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation; software design, verified by a novel schedulability analysis; and run-time verification, by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system. PMID:29461489

  8. Compositional schedulability analysis of real-time actor-based systems.

    PubMed

    Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan

    2017-01-01

    We present an extension of the actor model with real-time, including deadlines associated with messages, and explicit application-level scheduling policies, e.g., "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach, based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.

  9. Rapid Prototyping of High Performance Signal Processing Applications

    NASA Astrophysics Data System (ADS)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. 
Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.

  10. Design and implementation of priority and time-window based traffic scheduling and routing-spectrum allocation mechanism in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan

    2016-02-01

    With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are of great significance for increasing network efficiency. In this paper, an adaptive traffic scheduling policy based on priority and time window is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm proposed by the authors. A fairness index is introduced to improve the capability of spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and enables them to respond efficiently in real time, and that the node spectrum configuration scheme improves frequency resource utilization and overall network efficiency.
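
    The routing step names the Floyd shortest-path algorithm; the sketch below pairs a compact Floyd-Warshall implementation with a simple first-fit spectrum-slot assignment along the chosen path as one plausible reading of the greedy allocation idea. The slot model and topology are illustrative, not the authors' node-based model.

      import math

      def floyd_warshall(n, edges):
          """All-pairs shortest paths; returns distance and next-hop matrices."""
          dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
          nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
          for u, v, w in edges:
              dist[u][v] = dist[v][u] = w
              nxt[u][v], nxt[v][u] = v, u
          for k in range(n):
              for i in range(n):
                  for j in range(n):
                      if dist[i][k] + dist[k][j] < dist[i][j]:
                          dist[i][j] = dist[i][k] + dist[k][j]
                          nxt[i][j] = nxt[i][k]
          return dist, nxt

      def path(nxt, s, d):
          p = [s]
          while p[-1] != d:
              p.append(nxt[p[-1]][d])
          return p

      def first_fit_slots(link_slots, route, demand):
          """Greedy first-fit: lowest contiguous slot block free on every route link."""
          n_slots = len(next(iter(link_slots.values())))
          links = list(zip(route, route[1:]))
          for start in range(n_slots - demand + 1):
              block = range(start, start + demand)
              if all(not link_slots[frozenset(l)][s] for l in links for s in block):
                  for l in links:
                      for s in block:
                          link_slots[frozenset(l)][s] = True
                  return list(block)
          return None   # request blocked

      # Illustrative 4-node elastic optical network with 8 slots per link
      edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 3, 2.5), (3, 2, 1.0)]
      dist, nxt = floyd_warshall(4, edges)
      slots = {frozenset((u, v)): [False] * 8 for u, v, _ in edges}
      route = path(nxt, 0, 2)
      print("route:", route, "assigned slots:", first_fit_slots(slots, route, demand=3))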

  11. Optimal designs for population pharmacokinetic studies of the partner drugs co-administered with artemisinin derivatives in patients with uncomplicated falciparum malaria.

    PubMed

    Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A

    2012-07-11

    Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.

  12. Optimizing Chemotherapy Dose and Schedule by Norton-Simon Mathematical Modeling

    PubMed Central

    Traina, Tiffany A.; Dugan, Ute; Higgins, Brian; Kolinsky, Kenneth; Theodoulou, Maria; Hudis, Clifford A.; Norton, Larry

    2011-01-01

    Background: To hasten and improve anticancer drug development, we created a novel approach to generating and analyzing preclinical dose-scheduling data so as to optimize benefit-to-toxicity ratios. Methods: We applied mathematical methods based upon Norton-Simon growth kinetic modeling to tumor-volume data from breast cancer xenografts treated with capecitabine (Xeloda®, Roche) at the conventional schedule of 14 days of treatment followed by a 7-day rest (14-7). Results: The model predicted that 7 days of treatment followed by a 7-day rest (7-7) would be superior. Subsequent preclinical studies demonstrated that this biweekly capecitabine schedule allowed for safe delivery of higher daily doses, improved tumor response, and prolonged animal survival. Conclusions: We demonstrated that the application of Norton-Simon modeling to the design and analysis of preclinical data predicts an improved capecitabine dosing schedule in xenograft models. This method warrants further investigation and application in clinical drug development. PMID:20519801
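
    A toy sketch of the modeling idea, assuming Gompertzian tumor growth with a Norton-Simon-style kill term proportional to the unperturbed growth rate during treatment days; the parameters, effect sizes, and the way the higher tolerated dose of the 7-7 schedule is represented are illustrative stand-ins, not the study's fitted model.

      import math

      # Toy Gompertzian growth with a Norton-Simon-style treatment effect:
      #   dV/dt = k * V * ln(Vmax / V) * (1 - E * drug_on(t))
      # i.e., the kill term is proportional to the unperturbed growth rate.
      K, VMAX, V0 = 0.05, 5000.0, 100.0             # illustrative parameters

      def simulate(days_on, days_off, effect, total_days=84, dt=0.01):
          v, t = V0, 0.0
          cycle = days_on + days_off
          while t < total_days:
              on = (t % cycle) < days_on
              growth = K * v * math.log(VMAX / v)
              v += dt * growth * (1.0 - (effect if on else 0.0))
              v = max(v, 1.0)                       # keep the toy model well behaved
              t += dt
          return v

      # The biweekly (7-7) schedule tolerated higher daily doses in the study; here
      # that is mimicked simply by assigning it a larger illustrative effect size.
      schedules = {"14-7": (14, 7, 1.3), "7-7": (7, 7, 1.8)}
      for name, (on, off, eff) in schedules.items():
          print("schedule %s -> final tumor volume %.1f" % (name, simulate(on, off, eff)))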

  13. Model-based Acceleration Control of Turbofan Engines with a Hammerstein-Wiener Representation

    NASA Astrophysics Data System (ADS)

    Wang, Jiqiang; Ye, Zhifeng; Hu, Zhongzhi; Wu, Xin; Dimirovsky, Georgi; Yue, Hong

    2017-05-01

    Acceleration control of turbofan engines is conventionally designed through either a schedule-based or an acceleration-based approach. With the widespread acceptance of model-based design in the aviation industry, it becomes necessary to investigate the issues associated with model-based design for acceleration control. In this paper, the challenges for implementing model-based acceleration control are explained; a novel Hammerstein-Wiener representation of engine models is introduced; based on the Hammerstein-Wiener model, a nonlinear generalized minimum variance type of optimal control law is derived; a key feature of the proposed approach is that it does not require the inversion operation that often hampers such nonlinear control techniques. The effectiveness of the proposed control design method is validated through a detailed numerical study.
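
    A minimal sketch of the Hammerstein-Wiener structure itself (static input nonlinearity, linear dynamics, static output nonlinearity), since that representation is the core of the approach; the particular nonlinearities, first-order linear block, and input signal are assumptions for illustration, not the engine model or the minimum-variance control law.

      import numpy as np

      # Hammerstein-Wiener structure: u -> f(.) -> linear dynamics -> g(.) -> y
      def f_in(u):                 # assumed static input nonlinearity (e.g., saturation)
          return np.tanh(u)

      def g_out(x):                # assumed static output nonlinearity (e.g., sensor map)
          return x + 0.1 * x**2

      def simulate_hw(u_seq, a=0.85, b=0.3):
          """First-order linear block x[k+1] = a*x[k] + b*f(u[k]); output y[k] = g(x[k])."""
          x, y = 0.0, []
          for u in u_seq:
              y.append(g_out(x))
              x = a * x + b * f_in(u)
          return np.array(y)

      # Step in a throttle-like demanded input (illustrative)
      u = np.concatenate([np.zeros(20), np.ones(60) * 1.5])
      y = simulate_hw(u)
      print("near-steady-state output:", round(float(y[-1]), 4))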

  14. An Evaluation of the ROSE System

    NASA Technical Reports Server (NTRS)

    Usher, John M.

    2002-01-01

    A request-oriented scheduling engine, better known as ROSE, is under development within the Flight Projects Directorate for the purpose of planning and scheduling of the activities and resources associated with the science experiments to be performed aboard the International Space Station (ISS). ROSE is being designed to incrementally process requests from payload developers (PDs) to model and schedule the execution of their science experiments on the ISS. The novelty of the approach comes from its web-based interface permitting the PDs to define their request via the construction of a graphical model to represent their requirements. Based on an examination of the current ROSE implementation, this paper proposes several recommendations for changes to the modeling component and makes mention of other potential applications of the ROSE system.

  15. A COTS-Based Attitude Dependent Contact Scheduling System

    NASA Technical Reports Server (NTRS)

    DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark

    2006-01-01

    The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the GLAST satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead-time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.

  16. A new task scheduling algorithm based on value and time for cloud platform

    NASA Astrophysics Data System (ADS)

    Kuang, Ling; Zhang, Lichen

    2017-08-01

    Task scheduling, a key part of increasing resource utilization and enhancing system performance, is an enduring problem, especially on cloud platforms. Based on the value density algorithm of real-time task scheduling systems and the characteristics of distributed systems, the paper presents a new task scheduling algorithm, Least Level Value Density First (LLVDF), developed by further studying cloud technology and real-time systems. The algorithm not only introduces time and value attributes for tasks, it also describes the weighting relationships between these properties mathematically. This feature allows the algorithm to distinguish between different tasks more dynamically and more reasonably. When the scheme is used for priority calculation in dynamic task scheduling on a cloud platform, it can schedule and distinguish large numbers of tasks of many kinds more efficiently. The paper designs experiments, using distributed server simulation models based on the M/M/C queuing model with negative arrivals, to compare the algorithm against traditional algorithms and to demonstrate its characteristics and advantages.
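
    The exact LLVDF weighting is defined in the paper; the sketch below only illustrates the general value-density idea (task value per unit execution time combined with a slack term) as a dispatch priority. The weighting and the task data are assumptions.

      import heapq
      from dataclasses import dataclass, field

      @dataclass(order=True)
      class Task:
          priority: float
          name: str = field(compare=False)
          value: float = field(compare=False)
          exec_time: float = field(compare=False)
          deadline: float = field(compare=False)

      def priority_key(value, exec_time, deadline, now, w_value=1.0, w_urgency=0.5):
          """Illustrative value-density priority: higher value density and tighter
          slack give a smaller key (popped earlier). Weights are assumptions."""
          density = value / exec_time
          slack = max(deadline - now - exec_time, 1e-6)
          return -(w_value * density + w_urgency / slack)

      now = 0.0
      specs = [("t1", 10.0, 2.0, 8.0), ("t2", 4.0, 1.0, 3.0), ("t3", 9.0, 3.0, 20.0)]
      queue = [Task(priority_key(v, e, d, now), n, v, e, d) for n, v, e, d in specs]
      heapq.heapify(queue)
      while queue:
          t = heapq.heappop(queue)
          print("dispatch", t.name, "key=%.3f" % t.priority)
          now += t.exec_time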

  17. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to solving computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n²) time complexity. Copyright © 2017. Published by Elsevier B.V.
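
    For reference, the underlying combinatorial problem (assign n jobs to m individuals and minimize the completion time of the last finished one) is the classical multiprocessor scheduling problem; the sketch below is a conventional longest-processing-time heuristic baseline, not the DNA-computing procedure.

      import heapq

      def lpt_schedule(job_times, m):
          """Longest-Processing-Time-first heuristic for minimizing makespan on m individuals."""
          loads = [(0.0, i, []) for i in range(m)]     # (current load, index, assigned jobs)
          heapq.heapify(loads)
          for t in sorted(job_times, reverse=True):
              load, i, assigned = heapq.heappop(loads)   # least-loaded individual
              heapq.heappush(loads, (load + t, i, assigned + [t]))
          makespan = max(load for load, _, _ in loads)
          return makespan, loads

      jobs = [7, 5, 4, 4, 3, 3, 2]
      makespan, assignment = lpt_schedule(jobs, m=3)
      print("makespan:", makespan)
      for load, i, assigned in sorted(assignment, key=lambda x: x[1]):
          print("individual", i, "->", assigned, "load", load)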

  18. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  19. An AI approach for scheduling space-station payloads at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Castillo, D.; Ihrie, D.; Mcdaniel, M.; Tilley, R.

    1987-01-01

    The Payload Processing for Space-Station Operations (PHITS) is a prototype modeling tool capable of addressing many Space Station-related concerns. The system's object-oriented design approach, coupled with a powerful user interface, provides the user with capabilities to easily define and model many applications. PHITS differs from many artificial intelligence-based systems in that it couples scheduling and goal-directed simulation to ensure that on-orbit requirement dates are satisfied.

  20. Systems cost/performance analysis (study 2.3). Volume 2: Systems cost/performance model. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A methodology developed for the balanced design of spacecraft subsystems, interrelating cost, performance, safety, and schedule considerations, was refined. The methodology consists of a two-step process: the first step is one of selecting all hardware designs which satisfy the given performance and safety requirements; the second step is one of estimating the cost and schedule required to design, build, and operate each spacecraft design. Using this methodology to develop a systems cost/performance model allows the user of such a model to establish specific designs and the related costs and schedule. The user is able to determine the sensitivity of design, costs, and schedules to changes in requirements. The resulting systems cost/performance model is described and implemented as a digital computer program.

  1. Real-time design with peer tasks

    NASA Technical Reports Server (NTRS)

    Goforth, Andre; Howes, Norman R.; Wood, Jonathan D.; Barnes, Michael J.

    1995-01-01

    We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS), as an alternative to methods based on rate or deadline monotonic analysis. In our method the fundamental units of prioritization, work items, are domain-specific objects with timing requirements (deadlines) found in the user's specification. A work item consists of a collection of tasks of equal priority. Current scheduling theories are applied with artifact deadlines introduced by the designer, whereas our method schedules work items to meet the user's specification deadlines (sometimes called end-to-end deadlines). Our method supports the following scheduling properties. First, work item scheduling is based on domain-specific importance instead of task-level urgency and still meets as many user-specification deadlines as can be met by scheduling tasks with respect to urgency. Second, the minimum (closest) on-line deadline that can be guaranteed for a work item of highest importance, scheduled at run time, is approximately the inverse of the throughput, measured in work items per second. Third, throughput is not degraded during overload, and instead of resorting to task shedding during overload, the designer can specify which work items to shed. We prove these properties in a mathematical model.

  2. Realization of planning design of mechanical manufacturing system by Petri net simulation model

    NASA Astrophysics Data System (ADS)

    Wu, Yanfang; Wan, Xin; Shi, Weixiang

    1991-09-01

    Planning design works out a comprehensive long-term plan. In order to guarantee that a mechanical manufacturing system (MMS) is designed to obtain maximum economic benefit, it is necessary to carry out a reasonable planning design for the system. First, some principles of planning design for an MMS are introduced, problems of production scheduling and their decision rules for computer simulation are presented, and the method of realizing each production scheduling decision rule in the Petri net model is discussed. Second, conflict-resolution rules for conflicts that arise while running the Petri net are given. Third, based on the Petri net model of the MMS, which includes part flow and tool flow, and according to the principle of minimum event time advance, a dynamic computer simulation of the Petri net model, that is, of the MMS, is realized. Finally, the simulation program is applied to an example, so that a planning design scheme for the MMS can be evaluated effectively.

  3. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.

  4. Design Change Model for Effective Scheduling Change Propagation Paths

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin

    2017-09-01

    Changes in requirements may increase product development project cost and lead time; therefore, it is important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Most current approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe requirement changes causing design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed design change model and its associated tools have been demonstrated on the scheduling of change propagation paths for a high-speed train's bogie to show their feasibility and effectiveness. The model not only supports quick response to diversified market requirements, but also helps satisfy customer requirements and reduce product development lead time. The proposed new design change model can be applied in a wide range of engineering systems design with improved efficiency.

  5. Modelling Temporal Schedule of Urban Trains Using Agent-Based Simulation and NSGA2-BASED Multiobjective Optimization Approaches

    NASA Astrophysics Data System (ADS)

    Sahelgozin, M.; Alimohammadi, A.

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been considered by transportation managers as a response to this huge travel demand. In the development of subway infrastructure, specifying a temporal schedule for the trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost, and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a challenging issue. On the other hand, research on a phenomenon requires a summarized representation of the real world that is known as a model. In this study, it is attempted to model the temporal schedule of urban trains so that the model can be applied in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is represented for MCSSO. Then, an agent-based simulation environment is implemented to perform sensitivity analysis (SA), which is used to extract the interrelations between the framework components. These interrelations are then taken into account in order to construct the proposed model. In order to evaluate the performance of the model in MCSSO problems, Tehran subway line no. 1 is considered as the case study. Results of the study show that the model was able to generate an acceptable distribution of Pareto-optimal solutions which are applicable in real situations where solving an MCSSO problem is the goal. Also, the accuracy of the model in representing the operation of subway systems was significant.

  6. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.

    PubMed

    Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao

    2018-05-23

    The diversity of IoT services and applications brings enormous challenges to improving the performance of multiple computer tasks' scheduling in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns on the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are also given to validate its effectiveness, as well as its superiority.

  7. Designing an optimal software intensive system acquisition: A game theoretic approach

    NASA Astrophysics Data System (ADS)

    Buettner, Douglas John

    The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality, schedule and cost-driven strategies demonstrate that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game theory reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and Austin's agency model to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack of quality game between the government (the acquirer) and "large-corporation" software developers. A note is provided that argues this multi-player dynamic Nash bargaining game also provides the solution to Freeman Dyson's problem, for a way to place a label of good or bad on systems.

  8. Hybrid optimal scheduling for intermittent androgen suppression of prostate cancer

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; di Bernardo, Mario; Bruchovsky, Nicholas; Aihara, Kazuyuki

    2010-12-01

    We propose a method for achieving an optimal protocol of intermittent androgen suppression for the treatment of prostate cancer. Since the model that reproduces the dynamical behavior of the surrogate tumor marker, prostate specific antigen, is piecewise linear, we can obtain an analytical solution for the model. Based on this, we derive conditions for either stopping or delaying recurrent disease. The solution also provides a design principle for the most favorable schedule of treatment that minimizes the rate of expansion of the malignant cell population.
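
    A toy sketch of intermittent suppression as threshold-based switching between two linear regimes for the PSA marker; the rates and switching thresholds are illustrative stand-ins rather than the paper's fitted piecewise-linear model or its optimal protocol.

      # Intermittent androgen suppression as on/off switching of a piecewise-linear
      # marker model: PSA declines at a constant rate during therapy and rises at a
      # constant rate off therapy. Therapy toggles at illustrative PSA thresholds.
      R_ON_THERAPY, R_OFF_THERAPY = -0.8, 0.5      # PSA change per week (assumed)
      START_THRESHOLD, STOP_THRESHOLD = 10.0, 4.0  # ng/mL switching levels (assumed)

      def simulate(psa0=15.0, weeks=120, dt=0.1):
          psa, on = psa0, True
          history = []
          t = 0.0
          while t < weeks:
              psa = max(psa + dt * (R_ON_THERAPY if on else R_OFF_THERAPY), 0.1)
              if on and psa <= STOP_THRESHOLD:
                  on = False                       # pause androgen suppression
              elif not on and psa >= START_THRESHOLD:
                  on = True                        # resume suppression
              history.append((t, psa, on))
              t += dt
          return history

      history = simulate()
      pauses = sum(1 for a, b in zip(history, history[1:]) if a[2] and not b[2])
      print("number of suppression pauses in 120 weeks:", pauses)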

  9. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  10. Bidding-based autonomous process planning and scheduling

    NASA Astrophysics Data System (ADS)

    Gu, Peihua; Balasubramanian, Sivaram; Norrie, Douglas H.

    1995-08-01

    Improving productivity through computer integrated manufacturing systems (CIMS) and concurrent engineering requires that the islands of automation in an enterprise be completely integrated. The first step in this direction is to integrate design, process planning, and scheduling. This can be achieved through a bidding-based process planning approach. The product is represented in a STEP model with detailed design and administrative information including design specifications, batch size, and due dates. Upon arrival at the manufacturing facility, the product is registered with the shop floor manager, which is essentially a coordinating agent. The shop floor manager broadcasts the product's requirements to the machines. The shop contains autonomous machines that have knowledge about their functionality, capabilities, tooling, and schedule. Each machine has its own process planner and responds to the product's request in a different way that is consistent with its capabilities and capacities. When more than one machine offers certain process(es) for the same requirements, they enter into negotiation. Based on processing time, due date, and cost, one of the machines wins the contract. The successful machine updates its schedule and advises the product to request raw material for processing. The concept was implemented using a multi-agent system with the task decomposition and planning achieved through contract nets. Examples are included to illustrate the approach.

  11. Hypertext-based design of a user interface for scheduling

    NASA Technical Reports Server (NTRS)

    Woerner, Irene W.; Biefeld, Eric

    1993-01-01

    Operations Mission Planner (OMP) is an ongoing research project at JPL that utilizes AI techniques to create an intelligent, automated planning and scheduling system. The information space reflects the complexity and diversity of tasks necessary in most real-world scheduling problems. Thus the problem of the user interface is to present as much information as possible at a given moment and allow the user to quickly navigate through the various types of displays. This paper describes a design which applies the hypertext model to solve these user interface problems. The general paradigm is to provide maps and search queries to allow the user to quickly find an interesting conflict or problem, and then allow the user to navigate through the displays in a hypertext fashion.

  12. Scheduling elective surgeries: the tradeoff among bed capacity, waiting patients and operating room utilization using goal programming.

    PubMed

    Li, Xiangyong; Rafaliya, N; Baki, M Fazle; Chaouch, Ben A

    2017-03-01

    Scheduling of surgeries in the operating rooms under limited competing resources such as surgical and nursing staff, anesthesiologists, medical equipment, and recovery beds in surgical wards is a complicated process. A well-designed schedule should be concerned with the welfare of the entire system by allocating the available resources in an efficient and effective manner. In this paper, we develop an integer linear programming model, cast in a multiple-goal framework, for optimally scheduling elective surgeries based on the availability of surgeons and operating rooms over a time horizon. In particular, the model is concerned with the minimization of the following important goals: (1) the anticipated number of patients waiting for service; (2) the underutilization of operating room time; (3) the maximum expected number of patients in the recovery unit; and (4) the expected range (the difference between maximum and minimum expected number) of patients in the recovery unit. We develop two goal programming (GP) models: a lexicographic GP model and a weighted GP model. The lexicographic GP model schedules operating rooms when various preemptive priority levels are given to these four goals. A numerical study is conducted to illustrate the optimal master-surgery schedule obtained from the models. The numerical results demonstrate that, when the available number of surgeons and operating rooms is known without error over the planning horizon, the proposed models can produce good schedules, and that the priority levels and preference weights of the four goals affect the resulting schedules. The results quantify the tradeoffs that must take place as the preemptive weights of the four goals are changed.
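
    As a rough illustration of the weighted goal-programming idea only (goal targets relaxed through penalized deviation variables), the sketch below builds a made-up two-goal toy with the PuLP package (and its bundled CBC solver); the data, targets, and weights are assumptions and the model is far simpler than the paper's.

      # Toy weighted goal program: assign surgeries to operating-room days so that
      # (goal 1) few patients remain waiting and (goal 2) OR time is not underutilized.
      # Requires the PuLP package (pip install pulp). Data and weights are made up.
      import pulp

      days = ["Mon", "Tue", "Wed"]
      capacity = {d: 2 for d in days}          # surgeries per room-day (toy)
      waiting_patients = 7
      target_utilization = 2                   # desired surgeries per day

      prob = pulp.LpProblem("toy_surgery_goal_program", pulp.LpMinimize)
      x = {d: pulp.LpVariable(f"n_{d}", lowBound=0, upBound=capacity[d], cat="Integer")
           for d in days}
      unserved = pulp.LpVariable("unserved", lowBound=0)                    # goal-1 deviation
      under = {d: pulp.LpVariable(f"under_{d}", lowBound=0) for d in days}  # goal-2 deviations

      # Goal constraints with deviation variables
      prob += pulp.lpSum(x.values()) + unserved == waiting_patients
      for d in days:
          prob += x[d] + under[d] >= target_utilization

      # Weighted objective: penalize waiting patients more than underutilization
      prob += 5 * unserved + 1 * pulp.lpSum(under.values())

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print("status:", pulp.LpStatus[prob.status])
      for d in days:
          print(d, "scheduled:", int(x[d].value()))
      print("patients left waiting:", unserved.value())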

  13. Aspects of job scheduling

    NASA Technical Reports Server (NTRS)

    Phillips, K.

    1976-01-01

    A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.

  14. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems are concerned with dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques are ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST), based on promising artificial intelligence techniques, that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare performance among the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.

  15. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.

  16. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.
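
    A small sketch of the review-then-punish logic described above, with the statistical test reduced to a simple threshold on the mean daily price over a billing cycle; the threshold, cycle length, and punishment length are placeholders rather than the paper's optimal design.

      from statistics import mean

      def review_strategy(daily_prices, cycle_length=30, price_threshold=1.10,
                          punishment_length=15):
          """Walk through daily prices cycle by cycle. If the review test fails
          (mean price of the previous cycle exceeds the threshold, suggesting
          over-consumption), enter a punishment phase for a fixed number of days."""
          phase = []                      # "cooperate" or "punish" label per day
          punish_days_left = 0
          for start in range(0, len(daily_prices), cycle_length):
              cycle = daily_prices[start:start + cycle_length]
              for _ in cycle:
                  if punish_days_left > 0:
                      phase.append("punish")
                      punish_days_left -= 1
                  else:
                      phase.append("cooperate")
              if mean(cycle) > price_threshold:   # review test at end of billing cycle
                  punish_days_left = punishment_length
          return phase

      # Illustrative prices: cooperative first month, over-consumption second month
      prices = [1.0] * 30 + [1.3] * 30 + [1.0] * 30
      labels = review_strategy(prices)
      print("punishment days triggered:", labels.count("punish"))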

  17. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  18. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  19. Stochastic Modeling of Airlines' Scheduled Services Revenue

    NASA Technical Reports Server (NTRS)

    Hamed, M. M.

    1999-01-01

    Airlines' revenue generated from scheduled services accounts for the major share of total revenue. As such, predicting airlines' total scheduled services revenue is of great importance both to governments (in the case of national airlines) and to private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of the airlines' total scheduled services revenue is dealt with in this paper. Four key components of an airline's scheduled services are considered. These include revenues generated from passenger, cargo, mail, and excess baggage services. By addressing the revenue generated from each scheduled service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.
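
    A minimal sketch of the component-wise Box-Jenkins approach follows, fitting a separate ARIMA model to each revenue stream with the statsmodels library. The data are synthetic and the (p, d, q) orders are assumed for illustration, not the orders identified in the paper.

```python
# Sketch only: fit a separate Box-Jenkins ARIMA model to each scheduled-service
# revenue component. Data are synthetic and the (p, d, q) orders are assumed.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
months = 120
trend = np.linspace(100.0, 180.0, months)

components = {
    "passenger": trend * 10 + rng.normal(0, 20, months),
    "cargo": trend * 2 + rng.normal(0, 8, months),
    "mail": trend * 0.5 + rng.normal(0, 2, months),
    "excess_baggage": trend * 0.2 + rng.normal(0, 1, months),
}

for name, series in components.items():
    model = ARIMA(series, order=(1, 1, 1))   # placeholder order per component
    fit = model.fit()
    forecast = fit.forecast(steps=12)        # one-year-ahead revenue forecast
    print(f"{name:15s} next-12-month mean forecast: {forecast.mean():.1f}")
```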

  20. Stochastic Modeling of Airlines' Scheduled Services Revenue

    NASA Technical Reports Server (NTRS)

    Hamed, M. M.

    1999-01-01

    Airlines' revenue generated from scheduled services accounts for the major share of total revenue. As such, predicting airlines' total scheduled services revenue is of great importance both to governments (in the case of national airlines) and to private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of the airlines' total scheduled services revenue is dealt with in this paper. Four key components of an airline's scheduled services are considered. These include revenues generated from passenger, cargo, mail, and excess baggage services. By addressing the revenue generated from each scheduled service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.

  1. SMI Compatible Simulation Scheduler Design for Reuse of Model Complying with Smp Standard

    NASA Astrophysics Data System (ADS)

    Koo, Cheol-Hea; Lee, Hoon-Hee; Cheon, Yee-Jin

    2010-12-01

    Software reusability is one of the key factors that impact cost and schedule in a software development project. It is also crucial in satellite simulator development, since many commercial simulator models related to satellites and dynamics are available. If these models can be used on another simulator platform, a great deal of confidence and a considerable cost/schedule reduction can be achieved. Simulation model portability (SMP) is maintained by the European Space Agency, and many models compatible with the SMP/simulation model interface (SMI) are available. The Korea Aerospace Research Institute (KARI) is developing a hardware abstraction layer (HAL) supported satellite simulator to verify the on-board software of a satellite. For these reasons, KARI wants to port these SMI-compatible models to the HAL-supported satellite simulator. To do so, a simulation scheduler has been preliminarily designed according to the SMI standard.

  2. Constraint-Based Scheduling System

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Eskey, Megan; Stock, Todd; Taylor, Will; Kanefsky, Bob; Drascher, Ellen; Deale, Michael; Daun, Brian; Davis, Gene

    1995-01-01

    Report describes continuing development of software for constraint-based scheduling system implemented eventually on massively parallel computer. Based on machine learning as means of improving scheduling. Designed to learn when to change search strategy by analyzing search progress and learning general conditions under which resource bottleneck occurs.

  3. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. The nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a measurable, time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller, by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach, such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor (CSTR) model to demonstrate the effectiveness and applicability of the proposed design technique.

  4. A software tool for dataflow graph scheduling

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1994-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
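
    As a small illustration of the data-precedence idea, the sketch below topologically orders an invented dataflow graph and then list-schedules the tasks onto two processors. The graph, task times, and greedy heuristic are assumptions for illustration, not the tool's actual algorithm.

```python
# Sketch: a dataflow graph as a DAG and a simple list-scheduling pass that
# assigns ready tasks to processors while honoring data precedence.
# The example graph, task times, and processor count are invented.
from collections import deque

edges = {"A": ["C"], "B": ["C", "D"], "C": ["E"], "D": ["E"], "E": []}
time = {"A": 2, "B": 3, "C": 2, "D": 1, "E": 2}
n_proc = 2

# Topological order via Kahn's algorithm.
indeg = {v: 0 for v in edges}
for u in edges:
    for v in edges[u]:
        indeg[v] += 1
ready = deque(v for v in edges if indeg[v] == 0)
topo = []
while ready:
    u = ready.popleft()
    topo.append(u)
    for v in edges[u]:
        indeg[v] -= 1
        if indeg[v] == 0:
            ready.append(v)

# Greedy list scheduling: place each task on the processor that frees first,
# never earlier than the finish time of its data predecessors.
finish = {}
proc_free = [0] * n_proc
preds = {v: [u for u in edges if v in edges[u]] for v in edges}
for task in topo:
    earliest = max((finish[p] for p in preds[task]), default=0)
    p = min(range(n_proc), key=lambda i: proc_free[i])
    start = max(earliest, proc_free[p])
    finish[task] = start + time[task]
    proc_free[p] = finish[task]
    print(f"{task}: proc {p}, start {start}, finish {finish[task]}")
```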

  5. Linear parameter varying representations for nonlinear control design

    NASA Astrophysics Data System (ADS)

    Carter, Lance Huntington

    Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed is investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.

  6. Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot

    NASA Astrophysics Data System (ADS)

    Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim

    2018-04-01

    A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, determine a linear controller for each linearized model, and then implement a gain-scheduling technique. The validation of such a controller is often based on linear system analysis of the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system, especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear one. The method is based on sum-of-squares (SOS) optimization, which can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.

  7. Thermal-Aware Test Access Mechanism and Wrapper Design Optimization for System-on-Chips

    NASA Astrophysics Data System (ADS)

    Yu, Thomas Edison; Yoneda, Tomokazu; Chakrabarty, Krishnendu; Fujiwara, Hideo

    Rapid advances in semiconductor manufacturing technology have led to higher chip power densities, which place greater emphasis on packaging and temperature control during testing. For system-on-chips, peak-power-based scheduling algorithms have been used to optimize tests under specified power constraints. However, imposing power constraints does not always solve the problem of overheating, because power is distributed non-uniformly across the chip. This paper presents a TAM/wrapper co-design methodology for system-on-chips that ensures thermal safety while still optimizing the test schedule. The method combines a simplified thermal-cost model with a traditional bin-packing algorithm to minimize test time while satisfying temperature constraints. Furthermore, for temperature checking, thermal simulation is done using cycle-accurate power profiles for more realistic results. Experiments show that even a minimal sacrifice in test time can yield a considerable decrease in test temperature, as well as the possibility of further lowering temperatures beyond those achieved using traditional power-based test scheduling.
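
    A rough sketch of the co-design flavor described above: first-fit-decreasing packing of core tests into concurrent sessions, with a toy thermal-cost check standing in for the simplified thermal-cost model and cycle-accurate simulation used in the paper. All core data, the cost expression, and the thresholds are invented.

```python
# Sketch: pack core tests into TAM sessions (bins) by first-fit decreasing,
# rejecting a placement when a toy thermal-cost estimate exceeds a cap.
# The cores, test times, power values, and thermal model are all invented.

TAM_WIDTH_SESSIONS = 3          # concurrent test sessions (bins)
THERMAL_CAP = 10.0              # arbitrary thermal-cost units per session

cores = [                       # (name, test_time, avg_power)
    ("cpu", 40, 3.0), ("dsp", 35, 4.5), ("mem", 25, 2.0),
    ("io", 20, 1.5), ("gpu", 50, 5.0), ("sram", 15, 1.0),
]

def thermal_cost(session):
    """Toy proxy: cost grows with total power and with co-located hot cores."""
    total_power = sum(p for _, _, p in session)
    return total_power + 0.2 * total_power * (len(session) - 1)

sessions = [[] for _ in range(TAM_WIDTH_SESSIONS)]
for core in sorted(cores, key=lambda c: c[1], reverse=True):   # longest test first
    for s in sessions:
        if thermal_cost(s + [core]) <= THERMAL_CAP:
            s.append(core)
            break
    else:
        print(f"{core[0]}: no thermally safe session, must extend the schedule")

for i, s in enumerate(sessions):
    makespan = max((t for _, t, _ in s), default=0)
    print(f"session {i}: {[c[0] for c in s]}, time {makespan}, "
          f"thermal cost {thermal_cost(s):.1f}")
```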

  8. Design and evaluation of a theory-based, culturally relevant outreach model for breast and cervical cancer screening for Latina immigrants.

    PubMed

    White, Kari; Garces, Isabel C; Bandura, Lisa; McGuire, Allison A; Scarinci, Isabel C

    2012-01-01

    Breast and cervical cancer are common among Latinas, but screening rates among foreign-born Latinas are relatively low. In this article we describe the design and implementation of a theory-based (PEN-3) outreach program to promote breast and cervical cancer screening to Latina immigrants, and evaluate the program's effectiveness. We used data from self-administered questionnaires completed at six annual outreach events to examine the sociodemographic characteristics of attendees and evaluate whether the program reached the priority population - foreign-born Latina immigrants with limited access to health care and screening services. To evaluate the program's effectiveness in connecting women to screening, we examined the proportion and characteristics of women who scheduled and attended Pap smear and mammography appointments. Among the 782 Latinas who attended the outreach program, 60% and 83% had not had a Pap smear or mammogram, respectively, in at least a year. Overall, 80% scheduled a Pap smear and 78% scheduled a mammogram. Women without insurance, who did not know where to get screening and had not been screened in the last year were more likely to schedule appointments (P < .05). Among women who scheduled appointments, 65% attended their Pap smear and 79% attended the mammogram. We did not identify significant differences in sociodemographic characteristics associated with appointment attendance. Using a theoretical approach to outreach design and implementation, it is possible to reach a substantial number of Latina immigrants and connect them to cancer screening services.

  9. Assessing Potential Energy Savings in Household Travel: Methodological and Empirical Considerations of Vehicle Capability Constraints and Multi-day Activity Patterns

    NASA Astrophysics Data System (ADS)

    Bolon, Kevin M.

    The lack of multi-day data for household travel and vehicle capability requirements is an impediment to evaluations of energy savings strategies, since (1) travel requirements vary from day-to-day, and (2) energy-saving transportation options often have reduced capability. This work demonstrates a survey methodology and modeling system for evaluating the energy-savings potential of household travel, considering multi-day travel requirements and capability constraints imposed by the available transportation resources. A stochastic scheduling model is introduced---the multi-day Household Activity Schedule Estimator (mPHASE)---which generates synthetic daily schedules based on "fuzzy" descriptions of activity characteristics using a finite-element representation of activity flexibility, coordination among household members, and scheduling conflict resolution. Results of a thirty-household pilot study are presented in which responses to an interactive computer assisted personal interview were used as inputs to the mPHASE model in order to illustrate the feasibility of generating complex, realistic multi-day household schedules. Study vehicles were equipped with digital cameras and GPS data acquisition equipment to validate the model results. The synthetically generated schedules captured an average of 60 percent of household travel distance, and exhibited many of the characteristics of complex household travel, including day-to-day travel variation, and schedule coordination among household members. Future advances in the methodology may improve the model results, such as encouraging more detailed and accurate responses by providing a selection of generated schedules during the interview. Finally, the Constraints-based Transportation Resource Assignment Model (CTRAM) is introduced. Using an enumerative optimization approach, CTRAM determines the energy-minimizing vehicle-to-trip assignment decisions, considering trip schedules, occupancy, and vehicle capability. Designed to accept either actual or synthetic schedules, results of an application of the optimization model to the 2001 and 2009 National Household Travel Survey data show that U.S. households can reduce energy use by 10 percent, on average, by modifying the assignment of existing vehicles to trips. Households in 2009 show a higher tendency to assign vehicles optimally than in 2001, and multi-vehicle households with diverse fleets have greater savings potential, indicating that fleet modification strategies may be effective, particularly under higher energy price conditions.

  10. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  11. New WiMAX simulation model to investigate QoS with OPNET Modeler in a scheduling environment

    NASA Astrophysics Data System (ADS)

    Saini, Sanju; Saini, K. K.

    2012-11-01

    WiMAX stands for Worldwide Interoperability for Microwave Access. It is considered a major part of broadband wireless networking and is based on the IEEE 802.16 standard. WiMAX provides innovative fixed as well as mobile platforms for broadband internet access anywhere, anytime, with different transmission modes. This paper presents a WiMAX simulation model designed with OPNET Modeler 14 to measure the delay, load, and throughput performance factors. Several scheduling algorithms, such as FIFO, PQ, and WFQ, are introduced to compare four types of scheduling service, each with its own QoS needs, and OPNET Modeler support for WiMAX networks is also described. The results show approximately equal load and throughput, while the delay values vary among the different base stations. The simulation results indicate the correctness and effectiveness of this approach.

  12. An extended continuous estimation of distribution algorithm for solving the permutation flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2017-11-01

    This article proposes an extended continuous estimation of distribution algorithm (ECEDA) to solve the permutation flow-shop scheduling problem (PFSP). In ECEDA, to make a continuous estimation of distribution algorithm (EDA) suitable for the PFSP, the largest order value rule is applied to convert continuous vectors to discrete job permutations. A probabilistic model based on a mixed Gaussian and Cauchy distribution is built to maintain the exploration ability of the EDA. Two effective local search methods, i.e. revolver-based variable neighbourhood search and Hénon chaotic-based local search, are designed and incorporated into the EDA to enhance the local exploitation. The parameters of the proposed ECEDA are calibrated by means of a design of experiments approach. Simulation results and comparisons based on some benchmark instances show the efficiency of the proposed algorithm for solving the PFSP.
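
    The largest order value rule itself is simple enough to show directly; a minimal sketch (with an invented example vector) follows.

```python
# Sketch of the largest order value (LOV) rule: decode a continuous vector
# sampled from the EDA's probabilistic model into a discrete job permutation
# by ranking components from largest to smallest. The example vector is invented.

def lov_decode(x):
    """The job with the largest component is sequenced first, and so on."""
    return sorted(range(len(x)), key=lambda i: -x[i])

x = [0.83, 0.12, 0.47, 0.95, 0.30]   # continuous individual for five jobs
print(lov_decode(x))                  # -> [3, 0, 2, 4, 1]
```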

  13. 2B-Alert Web: An Open-Access Tool for Predicting the Effects of Sleep/Wake Schedules and Caffeine Consumption on Neurobehavioral Performance.

    PubMed

    Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar

    2016-12-01

    Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a group of individuals. This evidence-based tool can be used as a decision aid to design effective work schedules, guide the design of future sleep restriction and caffeine studies, and increase public awareness of the effects of sleep amounts, time of day, and caffeine on alertness. © 2016 Associated Professional Sleep Societies, LLC.

  14. Systemic Sustainability in RtI Using Intervention-Based Scheduling Methodologies

    ERIC Educational Resources Information Center

    Dallas, William P.

    2017-01-01

    This study evaluated a scheduling methodology referred to as intervention-based scheduling to address the problem of practice regarding the fidelity of implementing Response to Intervention (RtI) in an existing school schedule design. Employing panel data, this study used fixed-effects regressions and first differences ordinary least squares (OLS)…

  15. Development of an irrigation scheduling software based on model predicted crop water stress

    USDA-ARS?s Scientific Manuscript database

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...

  16. Scheduler Design Criteria: Requirements and Considerations

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    This presentation covers fundamental requirements and considerations for developing schedulers in airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among various optimization problems in airport operations, we focus on airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling such as node-link network model and scheduling algorithms previously developed. Next, we explain how to design a mathematical formulation in more details, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.
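
    To make the objective/decision-variable/constraint breakdown concrete, here is a deliberately tiny, brute-force sketch of a runway sequencing decision. The aircraft data, separation value, and enumeration approach are illustrative assumptions, not the node-link formulation or algorithms described in the presentation.

```python
# Toy sketch of a surface-scheduling formulation: the decision variable is the
# runway sequence, the objective is total delay, and the constraints are
# earliest ready times plus a fixed separation interval. Data are invented.
from itertools import permutations

ready = {"AC1": 0, "AC2": 2, "AC3": 3}   # earliest runway-ready times (min)
SEPARATION = 2                            # required spacing between departures

def evaluate(sequence):
    t, total_delay, times = 0, 0, {}
    for ac in sequence:
        t = max(t, ready[ac])             # cannot depart before ready
        times[ac] = t
        total_delay += t - ready[ac]
        t += SEPARATION                   # separation constraint
    return total_delay, times

best = min(permutations(ready), key=lambda seq: evaluate(seq)[0])
delay, times = evaluate(best)
print("best sequence:", best, "total delay:", delay, "departure times:", times)
```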

  17. Process-based Cost Estimation for Ramjet/Scramjet Engines

    NASA Technical Reports Server (NTRS)

    Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John

    2003-01-01

    Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology, and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation that bridges the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.

  18. Expert Design Advisor

    DTIC Science & Technology

    1990-10-01

    to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both

  19. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  20. Modeling and Control of a Fixed Wing Tilt-Rotor Tri-Copter

    NASA Astrophysics Data System (ADS)

    Summers, Alexander

    The following thesis considers modeling and control of a fixed-wing tilt-rotor tri-copter. The conceptual design emphasizes payload transport. Aerodynamic panel code and CAD design provide the base aerodynamic, geometric, mass, and inertia properties. A set of nonlinear dynamics is derived considering gravity, aerodynamics in vertical takeoff and landing (VTOL) and forward flight, and propulsion applied to a three-degree-of-freedom system. A transition strategy that removes trajectory planning by means of scheduled inputs is proposed. Three discrete controllers, utilizing separate control techniques, are applied to ensure stability in the aerodynamic regions of VTOL, transition, and forward flight. The controller techniques include linear quadratic regulation, full-state integral action, gain scheduling, and proportional-integral-derivative (PID) flight control. Simulation of the model control system for flight from forward to backward transition is completed with mass and center-of-gravity variation.

  1. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    PubMed Central

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network. PMID:27754405
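
    A rough sketch of the redundancy-degree step follows, estimating by Monte Carlo sampling how much of a node's sensing disk is also covered by its neighbors. The positions, radius, sleep threshold, and sampling approach are assumptions, and the cloud-model similarity classification is not shown.

```python
# Sketch: Monte Carlo estimate of a node's redundancy degree, i.e. the fraction
# of its sensing disk also covered by at least one active neighbor. Positions,
# sensing radius, and the sleep threshold are invented; the cloud-model
# similarity classification from the paper is not shown here.
import math
import random

R = 10.0                                   # sensing radius
node = (0.0, 0.0)
neighbors = [(6.0, 0.0), (-4.0, 5.0), (0.0, -7.0)]

def redundancy_degree(node, neighbors, r, samples=20000, seed=1):
    rng = random.Random(seed)
    covered = 0
    for _ in range(samples):
        # Uniform sample inside the node's sensing disk.
        theta = rng.uniform(0.0, 2.0 * math.pi)
        rho = r * math.sqrt(rng.random())
        p = (node[0] + rho * math.cos(theta), node[1] + rho * math.sin(theta))
        if any(math.dist(p, nb) <= r for nb in neighbors):
            covered += 1
    return covered / samples

deg = redundancy_degree(node, neighbors, R)
print(f"redundancy degree: {deg:.2f}")
if deg > 0.9:                              # assumed threshold for illustration
    print("node is a candidate for the sleep schedule")
```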

  2. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks.

    PubMed

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-10-14

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to its sensing area being covered by the neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain the approximate minimum set of nodes, which can retain the sufficient coverage of the target region and ensure the connectivity of the network at the same time. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.

  3. A Generalized Timeline Representation, Services, and Interface for Automating Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Johnston, Mark; Frank, Jeremy; Giuliano, Mark; Kavelaars, Alicia; Lenzen, Christoph; Policella, Nicola

    2012-01-01

    Numerous automated and semi-automated planning and scheduling systems have been developed for space applications. Most of these systems are model-based, in that they encode the domain knowledge necessary to predict spacecraft state and resources based on initial conditions and a proposed activity plan. The spacecraft state and resources are often modeled as a series of timelines, with a timeline or set of timelines representing each state or resource that is key to the operations of the spacecraft. In this paper, we first describe a basic timeline representation that can capture a set of state, resource, timing, and transition constraints. We then describe a number of planning and scheduling systems designed for space applications (in many cases deployed for use on ongoing missions) and describe how they do and do not map onto this timeline model.
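
    A minimal sketch, with invented names and units, of what a timeline-style representation might look like: one timeline per state or resource, with activities posting changes and a simple capacity check over the resulting profile. This is an illustration of the general idea, not the representation defined in the paper.

```python
# Minimal sketch of a timeline representation: each timeline records the value
# of one spacecraft state or resource over time, and activities post changes
# onto it. Names, units, and the capacity constraint are invented examples.
from dataclasses import dataclass, field
from bisect import insort

@dataclass
class Timeline:
    name: str
    capacity: float
    changes: list = field(default_factory=list)   # (time, delta) pairs

    def add_change(self, time: float, delta: float) -> None:
        insort(self.changes, (time, delta))

    def violates_capacity(self) -> bool:
        """Accumulate deltas in time order and flag any capacity violation."""
        level = 0.0
        for _, delta in self.changes:
            level += delta
            if level > self.capacity or level < 0.0:
                return True
        return False

power = Timeline("battery_draw_W", capacity=100.0)
for start, end, draw in [(0, 10, 40.0), (5, 12, 70.0)]:   # proposed activities
    power.add_change(start, draw)
    power.add_change(end, -draw)
print("plan violates power capacity:", power.violates_capacity())
```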

  4. Power management and distribution considerations for a lunar base

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Coleman, Anthony S.

    1991-01-01

    Design philosophies and technology needs for the power management and distribution (PMAD) portion of a lunar base power system are discussed. A process is described whereby mission planners may proceed from a knowledge of the PMAD functions and mission performance requirements to a definition of design options and technology needs. Current research efforts at the NASA LRC to meet the PMAD system needs for a lunar base are described. Based on the requirements, the lunar base PMAD is seen as best accomplished by a utility-like system, although with some additional demands, including autonomous operation and scheduling and accurate, predictive modeling during the design process.

  5. Design and evaluation of a theory-based, culturally relevant outreach model for breast and cervical cancer screening for Latina immigrants

    PubMed Central

    White, Kari; Garces, Isabel C.; Bandura, Lisa; McGuire, Allison A.; Scarinci, Isabel C.

    2013-01-01

    Objectives Breast and cervical cancer are common among Latinas, but screening rates among foreign-born Latinas are relatively low. In this article we describe the design and implementation of a theory-based (PEN-3) outreach program to promote breast and cervical cancer screening to Latina immigrants, and evaluate the program’s effectiveness. Methods We used data from self-administered questionnaires completed at six annual outreach events to examine the sociodemographic characteristics of attendees and evaluate whether the program reached the priority population – foreign-born Latina immigrants with limited access to health care and screening services. To evaluate the program’s effectiveness in connecting women to screening, we examined the proportion and characteristics of women who scheduled and attended Pap smear and mammography appointments. Results Among the 782 Latinas who attended the outreach program, 60% and 83% had not had a Pap smear or mammogram, respectively, in at least a year. Overall, 80% scheduled a Pap smear and 78% scheduled a mammogram. Women without insurance, who did not know where to get screening and had not been screened in the last year were more likely to schedule appointments (p < 0.05). Among women who scheduled appointments, 65% attended their Pap smear and 79% attended the mammogram. We did not identify significant differences in sociodemographic characteristics associated with appointment attendance. Conclusions Using a theoretical approach to outreach design and implementation, it is possible to reach a substantial number of Latina immigrants and connect them to cancer screening services. PMID:22870569

  6. Radiobiological modeling of two stereotactic body radiotherapy schedules in patients with stage I peripheral non-small cell lung cancer.

    PubMed

    Huang, Bao-Tian; Lin, Zhu; Lin, Pei-Xian; Lu, Jia-Yang; Chen, Chuang-Zhen

    2016-06-28

    This study aims to compare the radiobiological response of two stereotactic body radiotherapy (SBRT) schedules for patients with stage I peripheral non-small cell lung cancer (NSCLC) using radiobiological modeling methods. Volumetric modulated arc therapy (VMAT)-based SBRT plans were designed using two dose schedules of 1 × 34 Gy (34 Gy in 1 fraction) and 4 × 12 Gy (48 Gy in 4 fractions) for 19 patients diagnosed with primary stage I NSCLC. Dose to the gross target volume (GTV), planning target volume (PTV), lung and chest wall (CW) were converted to biologically equivalent dose in 2 Gy fraction (EQD2) for comparison. Five different radiobiological models were employed to predict the tumor control probability (TCP) value. Three additional models were utilized to estimate the normal tissue complication probability (NTCP) value for the lung and the modified equivalent uniform dose (mEUD) value to the CW. Our result indicates that the 1 × 34 Gy dose schedule provided a higher EQD2 dose to the tumor, lung and CW. Radiobiological modeling revealed that the TCP value for the tumor, NTCP value for the lung and mEUD value for the CW were 7.4% (in absolute value), 7.2% (in absolute value) and 71.8% (in relative value) higher on average, respectively, using the 1 × 34 Gy dose schedule.
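
    The EQD2 conversion behind the comparison is the standard linear-quadratic relation, sketched below with commonly assumed alpha/beta values (10 Gy for tumor, 3 Gy for late-responding normal tissue); these defaults may differ from the values used in the study.

```python
# Standard linear-quadratic EQD2 conversion, applied to the two SBRT schedules.
# The alpha/beta values (10 Gy tumor, 3 Gy late-responding normal tissue) are
# commonly assumed defaults and may differ from those used in the study.

def eqd2(dose_per_fraction: float, n_fractions: int, alpha_beta: float) -> float:
    """EQD2 = n*d * (d + alpha/beta) / (2 + alpha/beta)."""
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

for label, d, n in [("1 x 34 Gy", 34.0, 1), ("4 x 12 Gy", 12.0, 4)]:
    print(f"{label}: tumor EQD2 = {eqd2(d, n, 10.0):.0f} Gy, "
          f"normal-tissue EQD2 = {eqd2(d, n, 3.0):.0f} Gy")
```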

  7. Phase I Design for Completely or Partially Ordered Treatment Schedules

    PubMed Central

    Wages, Nolan A.; O’Quigley, John; Conaway, Mark R.

    2013-01-01

    The majority of methods for the design of Phase I trials in oncology are based upon a single course of therapy, yet in actual practice it may be the case that there is more than one treatment schedule for any given dose. Therefore, the probability of observing a dose-limiting toxicity (DLT) may depend upon both the total amount of the dose given, as well as the frequency with which it is administered. The objective of the study then becomes to find an acceptable combination of both dose and schedule. Past literature on designing these trials has entailed the assumption that toxicity increases monotonically with both dose and schedule. In this article, we relax this assumption for schedules and present a dose-schedule finding design that can be generalized to situations in which we know the ordering between all schedules and those in which we do not. We present simulation results that compare our method to other suggested dose-schedule finding methodology. PMID:24114957

  8. Design of a candidate flutter suppression control law for DAST ARW-2. [Drones for Aerodynamic and Structural Testing Aeroelastic Research Wing

    NASA Technical Reports Server (NTRS)

    Adams, W. M., Jr.; Tiffany, S. H.

    1983-01-01

    A control law is developed to suppress symmetric flutter for a mathematical model of an aeroelastic research vehicle. An implementable control law is attained by including modified LQG (linear quadratic Gaussian) design techniques, controller order reduction, and gain scheduling. An alternate (complementary) design approach is illustrated for one flight condition wherein nongradient-based constrained optimization techniques are applied to maximize controller robustness.

  9. The LSST Scheduler from design to construction

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Reuter, Michael A.

    2016-07-01

    The Large Synoptic Survey Telescope (LSST) will be a highly robotic facility, demanding a very high efficiency during its operation. To achieve this, the LSST Scheduler has been envisioned as an autonomous software component of the Observatory Control System (OCS), that selects the sequence of targets in real time. The Scheduler will drive the survey using optimization of a dynamic cost function of more than 200 parameters. Multiple science programs produce thousands of candidate targets for each observation, and multiple telemetry measurements are received to evaluate the external and the internal conditions of the observatory. The design of the LSST Scheduler started early in the project supported by Model Based Systems Engineering, detailed prototyping and scientific validation of the survey capabilities required. In order to build such a critical component, an agile development path in incremental releases is presented, integrated to the development plan of the Operations Simulator (OpSim) to allow constant testing, integration and validation in a simulated OCS environment. The final product is a Scheduler that is also capable of running 2000 times faster than real time in simulation mode for survey studies and scientific validation during commissioning and operations.

  10. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.

  11. Reinventing The Design Process: Teams and Models

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.

    1999-01-01

    The future of space mission design will be dramatically different from the past. Formerly, performance-driven paradigms emphasized data return, with cost and schedule being secondary issues. Now and in the future, costs are capped and schedules fixed; these two variables must be treated as independent in the design process. Accordingly, JPL has redesigned its design process. At the conceptual level, design times have been reduced by properly defining the required design depth, improving the linkages between tools, and managing team dynamics. In implementation-phase design, system requirements will be held in crosscutting models, linked to subsystem design tools through a central database that captures the design and supplies needed configuration management and control. Mission goals will then be captured in timelining software that drives the models, testing their capability to execute the goals. Metrics are used to measure and control both processes and to ensure that design parameters converge through the design process within schedule constraints. This methodology manages margins controlled by acceptable risk levels. Thus, teams can evolve risk tolerance (and cost) as they would any engineering parameter. This new approach allows more design freedom for a longer time, which tends to encourage revolutionary and unexpected improvements in design.

  12. Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization

    NASA Astrophysics Data System (ADS)

    Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.

    2018-06-01

    The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks in the appropriate form. Task execution affects battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy allows defining the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite's movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat's energy harvesting capability by 4.48% in a three-orbit experiment and up to 8.46% in a single orbit cycle in comparison with the CubeSat operating without the scheduling algorithm.
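
    A minimal sketch of a software Perturb and Observe loop on an invented solar-panel power curve follows; in the paper the operating point is moved indirectly by triggering tasks, whereas here a direct voltage command stands in for that effect.

```python
# Sketch of a software Perturb and Observe (P&O) loop on an invented PV power
# curve. In the paper, the operating point is moved indirectly by triggering
# tasks (which load the bus); here a voltage command stands in for that effect.

def pv_power(v: float) -> float:
    """Toy solar-panel power curve with a maximum power point near 4.2 V."""
    return max(0.0, -0.5 * (v - 4.2) ** 2 + 2.0)

def perturb_and_observe(v0=3.0, step=0.05, iters=60):
    v, p_prev = v0, pv_power(v0)
    direction = +1.0
    for _ in range(iters):
        v += direction * step                  # perturb the operating point
        p = pv_power(v)
        if p < p_prev:                         # observe: power dropped,
            direction = -direction             # so reverse the perturbation
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"settled near V = {v_mpp:.2f} V, P = {p_mpp:.2f} W (true MPP at 4.2 V)")
```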

  13. Student Scheduling in a Year-Round Middle School. A Simulation Notebook.

    ERIC Educational Resources Information Center

    Whitley, Alfred C.

    This paper presents a model of a successful student scheduling pattern for a 45-15 year-round middle school (grades 6-8). The model allows for scheduling 100 percent of resource lab teaching time for all the student population in attendance at any one time, and formulates a house design and team teaching structure that facilitates smooth ingress…

  14. Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike

    2012-01-01

    NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S(sup 3). This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S(sup 3) is complete and the system has been operational since July 2011. S(sup 3) has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S(sup 3) supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S(sup 3) system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S(sup 3) to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.

  15. Protocol, Engineering Research Center, University of California, Santa Barbara

    DTIC Science & Technology

    2005-12-01

    minimizing the energy consumption in idle periods. We have designed an asynchronous wakeup schedule based on the theory of block designs. The idea is...performance of ad hoc networks through innovative packet scheduling (Baker). "* Developed a number of novel schemes to ensure loop freedom in on demand routing...network nodes to schedule their transmissions to avoid collisions (Garcia-Luna-Aceves). "* Designed and analyzed the Hybrid Activation Multiple Access (HAMA

  16. Automating Mid- and Long-Range Scheduling for the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel

    2012-01-01

    NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S(sup 3). This system was designed and deployed as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users of the DSN. These users represent not only NASA's deep space missions, but also international partners and ground-based science and calibration users. The initial implementation of S(sup 3) is complete and the system has been operational since July 2011. This paper describes some key aspects of the S(sup 3) system and on the challenges of modeling complex scheduling requirements and the ongoing extension of S(sup 3) to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.

  17. Joint Cross-Layer Design for Wireless QoS Content Delivery

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Lv, Tiejun; Zheng, Haitao

    2005-12-01

    In this paper, we propose a joint cross-layer design for wireless quality-of-service (QoS) content delivery. Central to our proposed cross-layer design is the concept of adaptation. Adaptation represents the ability to adjust protocol stacks and applications in response to channel variations. We focus our cross-layer design on the application, media access control (MAC), and physical layers. The network is designed based on our proposed fast frequency-hopping orthogonal frequency division multiplexing (OFDM) technique. We also propose a QoS-aware scheduler and a power adaptation transmission scheme operating on both the base station and mobile sides. The proposed MAC scheduler coordinates the transmissions of an IP base station and mobile nodes. The scheduler also selects appropriate transmission formats and packet priorities for individual users based on current channel conditions and the users' QoS requirements. The test results show that our cross-layer design provides an excellent framework for wireless QoS content delivery.

  18. Power Grid Maintenance Scheduling Intelligence Arrangement Supporting System Based on Power Flow Forecasting

    NASA Astrophysics Data System (ADS)

    Xie, Chang; Wen, Jing; Liu, Wenying; Wang, Jiaming

    With the development of intelligent dispatching, the intelligence level of the full range of network control center services urgently needs to be raised. Maintenance scheduling is an important part of the daily work of a network control center, so applying intelligent maintenance scheduling arrangement to achieve high-quality, safe operation of the power grid is very important. By analyzing the shortcomings of traditional maintenance scheduling software, this paper designs a power grid maintenance scheduling intelligence arrangement supporting system based on power flow forecasting, which uses advanced technologies for maintenance scheduling such as artificial intelligence, online security checking, and intelligent visualization. It implements online security checking of the maintenance schedule based on power flow forecasting, and power flow adjustment based on visualization, in order to make the maintenance scheduling arrangement more intelligent and visual.

  19. Aircraft Energy Conservation during Airport Ground Operations

    DTIC Science & Technology

    1982-03-01

    minimized. The model can be run in a non-optimizing mode to simulate movements along pre-assigned taxi paths. The model is also designed ... [the remainder of this record is report front matter: engine designations by airline and aircraft type at IAD and DCA, fuel flow rates, and CY 1979 scheduled and non-scheduled aircraft operations and departures at IAD and DCA by aircraft type]

  20. Simultaneously optimizing dose and schedule of a new cytotoxic agent.

    PubMed

    Braun, Thomas M; Thall, Peter F; Nguyen, Hoang; de Lima, Marcos

    2007-01-01

    Traditionally, phase I clinical trial designs are based upon one predefined course of treatment while varying among patients the dose given at each administration. In actual medical practice, patients receive a schedule comprised of several courses of treatment, and some patients may receive one or more dose reductions or delays during treatment. Consequently, the overall risk of toxicity for each patient is a function of both the actual schedule of treatment and the differing doses used at each administration. Our goal is to provide a practical phase I clinical trial design that more accurately reflects actual medical practice by accounting for both dose per administration and schedule. We propose an outcome-adaptive Bayesian design that simultaneously optimizes both dose and schedule in terms of the overall risk of toxicity, based on time-to-toxicity outcomes. We use computer simulation as a tool to calibrate design parameters. We describe a phase I trial in allogeneic bone marrow transplantation that was designed and is currently being conducted using our new method. Our computer simulations demonstrate that our method outperforms any method that searches for an optimal dose but does not allow schedule to vary, both in terms of the probability of identifying optimal (dose, schedule) combinations, and the numbers of patients assigned to those combinations in the trial. Our design requires greater sample sizes than those seen in traditional phase I studies due to the larger number of treatment combinations examined. Our design also assumes that the effects of multiple administrations are independent of each other and that the hazard of toxicity is the same for all administrations. Our design is the first for phase I clinical trials that is sufficiently flexible and practical to truly reflect clinical practice by varying both dose and the timing and number of administrations given to each patient.

  1. Imaging Tasks Scheduling for High-Altitude Airship in Emergency Condition Based on Energy-Aware Strategy

    PubMed Central

    Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma

    2013-01-01

    Aiming at the imaging task scheduling problem for a high-altitude airship in emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, namely task ranking, value task detecting, and energy conservation optimization. Then, algorithms are designed for the subproblems, and their solutions correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which can rationally adjust the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparison analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822

  2. Resource Management in Constrained Dynamic Situations

    NASA Astrophysics Data System (ADS)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. The goal, therefore, is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments for the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited resource situations and unpredictably dynamic environments. The importance of cooperation in the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operations. The importance of considering the system constraints and interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
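
    As a rough, self-contained sketch of the "limited breadth-first search" idea described above (not the dissertation's recomposable restricted finite state machine), the planner below expands only the few most promising states at each depth; all task names, costs, and values are made up:

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        fuel: int                 # remaining resource
        covered: frozenset        # tasks already performed

    def successors(state, tasks):
        for task, cost, value in tasks:
            if task not in state.covered and state.fuel >= cost:
                yield State(state.fuel - cost, state.covered | {task}), value

    def limited_bfs(start, tasks, width=3, depth=4):
        frontier = [(0, start)]                       # (accumulated value, state)
        best = frontier[0]
        for _ in range(depth):
            nxt = [(v + gain, s2)
                   for v, s in frontier
                   for s2, gain in successors(s, tasks)]
            if not nxt:
                break
            nxt.sort(key=lambda t: t[0], reverse=True)
            frontier = nxt[:width]                    # the "limited" part of the search
            best = max(best, frontier[0], key=lambda t: t[0])
        return best

    tasks = [("A", 2, 5), ("B", 3, 7), ("C", 1, 2), ("D", 4, 9)]
    value, state = limited_bfs(State(6, frozenset()), tasks)
    print(value, sorted(state.covered))
    ```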

  3. Heartbeat-based error diagnosis framework for distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Mishra, Swagat; Khilar, Pabitra Mohan

    2012-01-01

    Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire, and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing, and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control, and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
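
    A minimal sketch of the heartbeat-monitoring part of such a framework (node names and the timeout value are hypothetical) could look like the following:

    ```python
    import time

    class HeartbeatMonitor:
        """Flags nodes whose last heartbeat is older than the allowed timeout."""
        def __init__(self, nodes, timeout=0.5):
            self.timeout = timeout
            self.last_seen = {n: time.monotonic() for n in nodes}

        def heartbeat(self, node):
            self.last_seen[node] = time.monotonic()

        def faulty_nodes(self):
            now = time.monotonic()
            return [n for n, t in self.last_seen.items() if now - t > self.timeout]

    monitor = HeartbeatMonitor(["brake_ecu", "steer_ecu"], timeout=0.5)
    monitor.heartbeat("brake_ecu")
    time.sleep(0.6)                      # both nodes now miss the deadline
    print(monitor.faulty_nodes())        # faulty actuators to be shut down
    ```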

  4. Heartbeat-based error diagnosis framework for distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Mishra, Swagat; Khilar, Pabitra Mohan

    2011-12-01

    Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire, and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing, and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control, and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.

  5. Integration of Optimal Scheduling with Case-Based Planning.

    DTIC Science & Technology

    1995-08-01

    integrates Case-Based Reasoning (CBR) and Rule-Based Reasoning (RBR) systems. 'Tachyon: A Constraint-Based Temporal Reasoning Model and Its ... Implementation' provides an overview of the Tachyon temporal reasoning system and discusses its possible applications. 'Dual-Use Applications of Tachyon: From ... Force Structure Modeling to Manufacturing Scheduling' discusses the application of Tachyon to real-world problems, specifically military force deployment and manufacturing scheduling.

  6. Model based systems engineering (MBSE) applied to Radio Aurora Explorer (RAX) CubeSat mission operational scenarios

    NASA Astrophysics Data System (ADS)

    Spangelo, S. C.; Cutler, J.; Anderson, L.; Fosse, E.; Cheng, L.; Yntema, R.; Bajaj, M.; Delp, C.; Cole, B.; Soremekum, G.; Kaslow, D.

    Small satellites are more highly resource-constrained by mass, power, volume, delivery timelines, and financial cost relative to their larger counterparts. Small satellites are operationally challenging because subsystem functions are coupled and constrained by the limited available commodities (e.g. data, energy, and access times to ground resources). Furthermore, additional operational complexities arise because small satellite components are physically integrated, which may yield thermal or radio frequency interference. In this paper, we extend our initial Model Based Systems Engineering (MBSE) framework developed for a small satellite mission by demonstrating the ability to model different behaviors and scenarios. We integrate several simulation tools to execute SysML-based behavior models, including subsystem functions and internal states of the spacecraft. We demonstrate the utility of this approach to drive the system analysis and design process. We demonstrate the applicability of the simulation environment to capture realistic satellite operational scenarios, which include energy collection, data acquisition, and downloading to ground stations. The integrated modeling environment enables users to extract feasibility, performance, and robustness metrics. This enables visualization of both the physical states (e.g. position, attitude) and functional states (e.g. operating points of various subsystems) of the satellite for representative mission scenarios. The modeling approach presented in this paper offers satellite designers and operators the opportunity to assess the feasibility of vehicle and network parameters, as well as the feasibility of operational schedules. This will enable future missions to benefit from using these models throughout the full design, test, and fly cycle. In particular, vehicle and network parameters and schedules can be verified prior to being implemented, during mission operations, and can also be updated in near real-time with operational performance feedback.

  7. A Model-based B2B (Batch to Batch) Control for An Industrial Batch Polymerization Process

    NASA Astrophysics Data System (ADS)

    Ogawa, Morimasa

    This paper describes an overview of a model-based B2B (batch-to-batch) control scheme for an industrial batch polymerization process. In order to control the reaction temperature precisely, several methods based on a rigorous process dynamics model are employed at all design stages of the B2B control, such as modeling and parameter estimation of the reaction kinetics, which is an important part of the process dynamics model. The designed B2B control consists of gain-scheduled I-PD/I2-PD control (I-PD with double integral control), feed-forward compensation at the batch start time, and model adaptation utilizing the results of the last batch operation. Throughout actual batch operations, the B2B control provides superior control performance compared with that of conventional control methods.

  8. FASTER - A tool for DSN forecasting and scheduling

    NASA Technical Reports Server (NTRS)

    Werntz, David; Loyola, Steven; Zendejas, Silvino

    1993-01-01

    FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user friendly. FASTER makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not as yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended and increasing the quality of analysis.
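
    The two-pass structure described above can be caricatured in a few lines: pass one builds a predictive utilization profile, pass two places each request where predicted contention is lowest (a greedy stand-in for the dynamic-programming step; all activities and hours are hypothetical):

    ```python
    from collections import defaultdict

    requests = [                              # (activity, candidate start hours, duration)
        ("maintenance", [2, 3, 4], 1),
        ("satellite_pass", [3, 4], 1),
        ("astronomy", [2, 3], 1),
    ]

    profile = defaultdict(float)              # pass 1: predictive utilization profile
    for _, hours, dur in requests:
        for h in hours:
            profile[h] += dur / len(hours)    # expected load if placement were uniform

    schedule, load = {}, defaultdict(int)
    for name, hours, dur in requests:         # pass 2: cost-driven placement
        h = min(hours, key=lambda x: (load[x], profile[x]))
        schedule[name] = h
        load[h] += dur

    print(schedule)
    ```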

  9. Vaccination adherence: Review and proposed model.

    PubMed

    Abahussin, Asma A; Albarrak, Ahmed I

    The prevalence of childhood vaccine-preventable diseases can be significantly reduced through adherence to confirmed vaccination schedules. However, many barriers to vaccination compliance exist, including a lack of awareness regarding the importance of vaccines, missed due dates, and fear of complications from vaccinations. The aim of this study is to review the existing tools and publications regarding vaccination adherence and to propose a design for a vaccination adherence application (app) for smartphones. Android and iOS apps designed for vaccination reminders were reviewed to examine six elements: educational factors; customizing features; reminder tools; peer education facilitation; feedback; and the language of the apps' interface and content. The literature from PubMed was reviewed for studies addressing reminder systems or tools, including apps. The review revealed few (n=6) technology-based interventions for increasing childhood vaccination rates by reminding parents, a small number given the rapid growth in technology; only two of these publications discussed mobile apps. Ten apps were found in app stores; only one of them was designed for the Saudi vaccination schedule in the Arabic language, and it had some weaknesses. The study proposes a design for a vaccination reminder app that includes a number of features intended to overcome the limitations observed in the studied reminders, apps, and systems. The design supports the Arabic language and the Saudi vaccination schedule; parental education, including peer education; a variety of reminder methods; and the capability to track vaccinations and refer to the app as a personal health record. The study thus discusses a design for a vaccination reminder app that satisfies the specific requirements for better compliance with children's immunization schedules, based on a review of the existing apps and publications. The proposed design includes elements to educate parents and answer their concerns about vaccines. It involves their peers and can encourage the exchange of experiences and help overcome vaccine fears. In addition, it could form a convenient child personal health record. Copyright © 2016. Published by Elsevier Ltd.

  10. The LSST OCS scheduler design

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Schumacher, German

    2014-08-01

    The Large Synoptic Survey Telescope (LSST) is a complex system of systems with demanding performance and operational requirements. The nature of its scientific goals requires a special Observatory Control System (OCS) and particularly a very specialized automatic Scheduler. The OCS Scheduler is an autonomous software component that drives the survey, selecting the detailed sequence of visits in real time, taking into account multiple science programs, the current external and internal conditions, and the history of observations. We have developed a SysML model for the OCS Scheduler that fits coherently in the OCS and LSST integrated model. We have also developed a prototype of the Scheduler that implements the scheduling algorithms in the simulation environment provided by the Operations Simulator, where the environment and the observatory are modeled with real weather data and detailed kinematics parameters. This paper expands on the Scheduler architecture and the proposed algorithms to achieve the survey goals.

  11. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has been widely employed to distribute live video data on the internet recently. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, which is caused by the handshaking process of advertising buffer map messages, sending request messages, and scheduling data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show that mesh-push outperforms pull scheduling in streaming delay and achieves a comparable delivery ratio at the same time.

  12. Human factors issues in the design of user interfaces for planning and scheduling

    NASA Technical Reports Server (NTRS)

    Murphy, Elizabeth D.

    1991-01-01

    The purpose is to provide an overview of human factors issues that impact the effectiveness of user interfaces to automated scheduling tools. The following methods are employed: (1) a survey of planning and scheduling tools; (2) the identification and analysis of human factors issues; (3) the development of design guidelines based on the human factors literature; and (4) the generation of display concepts to illustrate the guidelines.

  13. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, a parallel model for stereo imaging aimed at an embedded multi-core processing platform is studied and verified. After analyzing the computational load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the eight-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
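
    The two-stage pipeline parallel model can be mimicked on a general-purpose machine with two threads connected by a message queue; the sketch below is only a structural analogue of the DSP implementation (the per-stage work is a placeholder):

    ```python
    import queue
    import threading

    frames = list(range(8))
    channel = queue.Queue(maxsize=4)           # bounded message buffer between stages
    results = []

    def stage1():                              # e.g. pre-processing / rectification
        for f in frames:
            channel.put(f * 2)
        channel.put(None)                      # end-of-stream marker

    def stage2():                              # e.g. matching / point generation
        while True:
            item = channel.get()
            if item is None:
                break
            results.append(item + 1)

    t1, t2 = threading.Thread(target=stage1), threading.Thread(target=stage2)
    t1.start(); t2.start(); t1.join(); t2.join()
    print(results)
    ```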

  14. Scheduler for monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S; Pertica, Alexander J; Riot, Vincent J; De Vries, Willem H; Bauman, Brian J; Nikolaev, Sergei; Henderson, John R; Phillion, Donald W

    2015-04-28

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.

  15. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    Push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545

  16. A Genetic Algorithm Tool (splicer) for Complex Scheduling Problems and the Space Station Freedom Resupply Problem

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Valenzuela-Rendon, Manuel

    1993-01-01

    The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
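
    A bare-bones genetic algorithm for a resupply-style schedule (delivery amounts per flight, with shortages penalized) might be sketched as follows; the demand, capacity, and penalty numbers are illustrative only and this is not the Splicer encoding itself:

    ```python
    import random

    random.seed(1)
    FLIGHTS, DEMAND_PER_PERIOD, CAPACITY = 6, 10, 25

    def fitness(schedule):
        stock, penalty = 0, 0
        for delivered in schedule:
            stock += delivered - DEMAND_PER_PERIOD
            if stock < 0:
                penalty += -stock * 10            # shortages are heavily penalized
                stock = 0
        return -(penalty + sum(schedule))         # prefer feasible, low-cost schedules

    def crossover(a, b):
        cut = random.randrange(1, FLIGHTS)
        return a[:cut] + b[cut:]

    def mutate(s):
        s = s[:]
        s[random.randrange(FLIGHTS)] = random.randint(0, CAPACITY)
        return s

    population = [[random.randint(0, CAPACITY) for _ in range(FLIGHTS)] for _ in range(30)]
    for _ in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        population = parents + children

    print(max(population, key=fitness))
    ```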

  17. System for NIS Forecasting Based on Ensembles Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-01-02

    BMA-NIS is a package/library designed to be called by a script (e.g. Perl or Python). The software itself is written in the R language. The software assists electric power delivery systems in planning resource availability and demand, based on historical data and current data variables. Net Interchange Schedule (NIS) is the algebraic sum of all energy scheduled to flow into or out of a balancing area during any interval. Accurate forecasts for NIS are important so that the Area Control Error (ACE) stays within an acceptable limit. To date, there are many approaches for forecasting NIS, but these are based on single models that can be sensitive to time-of-day and day-of-week effects.

  18. The nurse scheduling problem: a goal programming and nonlinear optimization approaches

    NASA Astrophysics Data System (ADS)

    Hakim, L.; Bakhtiar, T.; Jaharuddin

    2017-01-01

    Nurse scheduling is the activity of allocating nurses to conduct a set of tasks in certain rooms at a hospital or health centre within a certain period. One of the obstacles in nurse scheduling is the lack of resources to fulfil the needs of the hospital. Nurse scheduling that is undertaken manually risks violating some of the nursing rules set by the hospital. Therefore, this study aimed to develop scheduling models that satisfy all the specific rules set by the management of Bogor State Hospital. We have developed three models to meet the scheduling needs. Model 1 is designed to schedule nurses who are solely assigned to a certain inpatient unit, and Model 2 is constructed to manage nurses who are assigned to an inpatient room as well as to the polyclinic room as conjunct nurses. As the assignment of nurses to each shift is uneven, we propose Model 3 to minimize the variance of the workload in order to achieve an equitable assignment on every shift. The first two models are formulated in a goal programming framework, while the last model is in nonlinear optimization form.

  19. Choosing a software design method for real-time Ada applications: JSD process inversion as a means to tailor a design specification to the performance requirements and target machine

    NASA Technical Reports Server (NTRS)

    Withey, James V.

    1986-01-01

    The validity of real-time software is determined by its ability to execute on a computer within the time constraints of the physical system it is modeling. In many applications the time constraints are so critical that the details of process scheduling are elevated to the requirements analysis phase of the software development cycle. It is not uncommon to find specifications for a real-time cyclic executive program included or assumed in such requirements. It was found that preliminary designs structured around this implementation obscure the data flow of the real-world system that is modeled, and that it is consequently difficult and costly to maintain, update, and reuse the resulting software. A cyclic executive is a software component that schedules and implicitly synchronizes the real-time software through periodic and repetitive subroutine calls. Therefore a design method is sought that allows the deferral of process scheduling to the later stages of design. The appropriate scheduling paradigm must be chosen given the performance constraints, the target environment, and the software's life cycle. The concept of process inversion is explored with respect to the cyclic executive.
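
    For readers unfamiliar with the cyclic executive the abstract refers to, a minimal sketch is shown below: a fixed minor-frame loop calls subroutines at pre-assigned rates (frame length and task names are hypothetical):

    ```python
    import time

    MINOR_FRAME = 0.05                        # 50 ms minor frame

    def read_sensors(): pass                  # placeholder periodic tasks
    def control_law():  pass
    def telemetry():    pass

    def cyclic_executive(major_frames=4):
        for frame in range(major_frames * 4): # 4 minor frames per major frame
            start = time.monotonic()
            read_sensors()                    # every minor frame (20 Hz)
            if frame % 2 == 0:
                control_law()                 # every other frame (10 Hz)
            if frame % 4 == 0:
                telemetry()                   # once per major frame (5 Hz)
            time.sleep(max(0.0, MINOR_FRAME - (time.monotonic() - start)))

    cyclic_executive()
    ```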

  20. Robust decentralized power system controller design: Integrated approach

    NASA Astrophysics Data System (ADS)

    Veselý, Vojtech

    2017-09-01

    A unique approach to the design of a gain-scheduled controller (GSC) is presented. The proposed design procedure is based on the Bellman-Lyapunov equation, guaranteed cost, and robust stability conditions using the parameter-dependent quadratic stability approach. The obtained feasible design procedures for robust GSC design are in the form of BMIs with guaranteed convex stability conditions. The design results and their properties are illustrated by the simultaneous design of controllers for a simple sixth-order turbogenerator model. The results of the design procedure are a PI automatic voltage regulator (AVR) for the synchronous generator, a PI governor controller, and a power system stabilizer for the excitation system.

  1. Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.

    PubMed

    Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R

    2012-01-01

    This paper presents a study on self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure (GMV2DOF), within two adaptive perspectives. One, from the process model point of view, uses a recursive least squares estimator algorithm for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in real-plant experimental environments composed of a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
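
    The recursive least squares (RLS) estimator mentioned above can be sketched generically as follows (the plant parameters, forgetting factor, and regressor are hypothetical and not taken from the paper):

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.98):
        """One RLS step with forgetting factor lam."""
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + phi.T @ P @ phi)         # gain vector
        theta = theta + (K * (y - phi.T @ theta)).ravel()
        P = (P - K @ phi.T @ P) / lam                 # covariance update
        return theta, P

    rng = np.random.default_rng(0)
    true_theta = np.array([1.5, -0.7])                # "unknown" parameters to recover
    theta, P = np.zeros(2), np.eye(2) * 100.0
    for _ in range(200):
        phi = rng.normal(size=2)                      # regressor (past inputs/outputs)
        y = phi @ true_theta + 0.01 * rng.normal()
        theta, P = rls_update(theta, P, phi, y)
    print(theta)                                      # close to [1.5, -0.7]
    ```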

  2. Power-based Shift Schedule for Pure Electric Vehicle with a Two-speed Automatic Transmission

    NASA Astrophysics Data System (ADS)

    Wang, Jiaqi; Liu, Yanfang; Liu, Qiang; Xu, Xiangyang

    2016-11-01

    This paper introduces a comprehensive shift schedule for a two-speed automatic transmission of a pure electric vehicle. Considering the driving ability and efficiency performance of electric vehicles, a power-based shift schedule is proposed with three principles. This comprehensive shift schedule takes the current vehicle speed and motor load power as input parameters to satisfy the vehicle's driving power demand with the lowest energy consumption. A simulation model has been established to verify the dynamic and economic performance of the comprehensive shift schedule. Compared with traditional dynamic and economic shift schedules, simulation results indicate that the power-based shift schedule is superior to the traditional shift schedules.

  3. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    PubMed

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS) considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features including part due dates, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved by the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, sequential and concurrent approaches are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. The computational results of this study indicate that the best solutions found by the GA are better than the solutions found by B&B in much less time for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% with the GA and 14% with B&B.

  4. A methodology for spacecraft technology insertion analysis balancing benefit, cost, and risk

    NASA Astrophysics Data System (ADS)

    Bearden, David Allen

    Emerging technologies are changing the way space missions are developed and implemented. Technology development programs are proceeding with the goal of enhancing spacecraft performance and reducing mass and cost. However, it is often the case that technology insertion assessment activities, in the interest of maximizing performance and/or mass reduction, do not consider synergistic system-level effects. Furthermore, even though technical risks are often identified as a large cost and schedule driver, many design processes ignore the effects of cost and schedule uncertainty. This research is based on the hypothesis that technology selection is a problem of balancing interrelated (and potentially competing) objectives. Current spacecraft technology selection approaches are summarized, and a Methodology for Evaluating and Ranking Insertion of Technology (MERIT) that expands on these practices to attack otherwise unsolved problems is demonstrated. MERIT combines the modern techniques of technology maturity measures, parametric models, genetic algorithms, and risk assessment (cost and schedule) in a unique manner to resolve very difficult issues including: user-generated uncertainty, relationships between cost/schedule and complexity, and technology "portfolio" management. While the methodology is sufficiently generic that it may in theory be applied to a number of technology insertion problems, this research focuses on application to the specific case of small (<500 kg) satellite design. Small satellite missions are of particular interest because they are often developed under rigid programmatic (cost and schedule) constraints and are motivated to introduce advanced technologies into the design. MERIT is demonstrated for programs procured under varying conditions and constraints such as stringent performance goals, not-to-exceed costs, or hard schedule requirements. MERIT's contributions to the engineering community are its unique coupling of the aspects of performance, cost, and schedule; its assessment of system-level impacts of technology insertion; its procedures for estimating uncertainties (risks) associated with advanced technology; and its application of heuristics to facilitate informed system-level technology utilization decisions earlier in the conceptual design phase. MERIT extends the state of the art in technology insertion assessment and selection practice and, if adopted, may aid designers in determining the configuration of complex systems that meet essential requirements in a timely, cost-effective manner.

  5. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.
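
    The flavor of such size-based effort and schedule models can be illustrated with a toy calculation; the coefficients below are purely illustrative and are not the SEL values reported in the study:

    ```python
    def adjusted_size(new_sloc, reused_sloc, reuse_weight=0.2):
        """Size adjusted for reuse: reused code counts at a reduced weight."""
        return new_sloc + reuse_weight * reused_sloc

    def effort_staff_months(size_sloc, productivity=350.0):   # SLOC per staff-month
        return size_sloc / productivity

    def schedule_months(effort, a=4.0, b=0.35):
        return a * effort ** b            # power-law schedule-versus-effort model

    size = adjusted_size(new_sloc=30_000, reused_sloc=20_000)
    effort = effort_staff_months(size)
    print(round(size), round(effort, 1), round(schedule_months(effort), 1))
    ```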

  6. HRT-UML: a design method for hard real-time systems based on the UML notation

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Massimo; Mazzini, Silvia; di Natale, Marco; Lipari, Giuseppe

    2002-07-01

    The Hard Real-Time Unified Modelling Language (HRT-UML) method aims at providing a comprehensive solution to the modeling of hard real-time systems. Experience shows that the design of hard real-time systems needs methodologies suitable for the modeling and analysis of aspects related to time, schedulability, and performance. In the context of the European aerospace community, a reference method for design is Hierarchical Object Oriented Design (HOOD) and, in particular, its extension for the modeling of hard real-time systems, Hard Real-Time Hierarchical Object Oriented Design (HRT-HOOD), recommended by the European Space Agency (ESA) for the development of on-board systems. On the other hand, in recent years the Unified Modelling Language (UML) has been gaining very wide acceptance across a broad range of domains all over the world, becoming a de facto international standard. Tool vendors are very active in this potentially big market. In the aerospace domain the common opinion is that UML, as a general notation, is not suitable for hard real-time systems, even if its importance is recognized as a standard and as a technological trend for the near future. These considerations suggest the possibility of replacing the HRT-HOOD method with a customized version of UML that incorporates the advantages of both standards and complements the weak points. This approach has the clear advantage of making HRT-HOOD converge on a more powerful and expressive modeling notation. The paper identifies a mapping of the HRT-HOOD semantics into the UML semantics and proposes a UML extension profile, which we call HRT-UML, based on the UML standard extension mechanisms, to fully represent HRT-HOOD design concepts. Finally, it discusses the relationships between our profile and the UML profile for schedulability, performance and time, adopted by the OMG in November 2001.

  7. Application of modern control design methodology to oblique wing research aircraft

    NASA Technical Reports Server (NTRS)

    Vincent, James H.

    1991-01-01

    A Linear Quadratic Regulator synthesis technique was used to design an explicit model following control system for the Oblique Wing Research Aircraft (OWRA). The forward path model (Maneuver Command Generator) was designed to incorporate the desired flying qualities and response decoupling. The LQR synthesis was based on the use of generalized controls, and it was structured to provide a proportional/integral error regulator with feedforward compensation. An unexpected consequence of this design approach was the ability to decouple the control synthesis into separate longitudinal and lateral directional designs. Longitudinal and lateral directional control laws were generated for each of the nine design flight conditions, and gain scheduling requirements were addressed. A fully coupled 6 degree of freedom open loop model of the OWRA along with the longitudinal and lateral directional control laws was used to assess the closed loop performance of the design. Evaluations were performed for each of the nine design flight conditions.
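
    As a generic reminder of the LQR machinery underlying such a design (the double-integrator plant and weights below are illustrative, not the OWRA model), the regulator gain can be computed from the continuous-time Riccati equation:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [0.0, 0.0]])    # toy double-integrator dynamics
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])                  # state weighting
    R = np.array([[1.0]])                     # control weighting

    P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati solution
    K = np.linalg.solve(R, B.T @ P)           # state-feedback gain, u = -K x
    print(K)
    ```

    In a gain-scheduled design like the one described above, a gain matrix of this kind would be recomputed or interpolated for each of the design flight conditions.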

  8. Opportunities and pitfalls in clinical proof-of-concept: principles and examples.

    PubMed

    Chen, Chao

    2018-04-01

    Clinical proof-of-concept trials crucially inform major resource deployment decisions. This paper discusses several mechanisms for enhancing their rigour and efficiency. The importance of careful consideration when using a surrogate endpoint is illustrated; situational effectiveness of run-in patient enrichment is explored; a versatile tool is introduced to ensure a strong pharmacological underpinning; the benefits of dose-titration are revealed by simulation; and the importance of adequately scheduled observations is shown. The general process of model-based trial design and analysis is described and several examples demonstrate the value in historical data, simulation-guided design, model-based analysis and trial adaptation informed by interim analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Operational VGOS Scheduling

    NASA Astrophysics Data System (ADS)

    Searle, Anthony; Petrachenko, Bill

    2016-12-01

    The VLBI Global Observing System (VGOS) has been designed to take advantage of advances in data recording speeds and storage capacity, allowing for smaller and faster antennas, wider bandwidths, and shorter observation durations. Here, schedules for a "realistic" VGOS network, frequency sequences, and expanded source lists are presented using a new source-based scheduling algorithm. The VGOS aim for continuous observations presents new operational challenges. As the source-based strategy is independent of the observing network, there are operational advantages which allow for more flexible scheduling of continuous VLBI observations. Using VieVS, simulations of several schedules are presented and compared with previous VGOS studies.

  10. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires the accurate measurement of crop water requirements. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, varying from empirical to physically based distributed ones, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information required to design or choose the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
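
    As one concrete example of the empirical end of that spectrum, the Hargreaves equation estimates daily reference evapotranspiration from temperature and extraterrestrial radiation; the sketch below uses the standard form of the equation with illustrative input values:

    ```python
    import math

    def hargreaves_et0(t_min, t_max, ra_mm_per_day):
        """ET0 (mm/day); Ra is extraterrestrial radiation in equivalent evaporation."""
        t_mean = (t_min + t_max) / 2.0
        return 0.0023 * ra_mm_per_day * (t_mean + 17.8) * math.sqrt(t_max - t_min)

    # Example: a warm day with Ra of about 15 mm/day equivalent
    print(round(hargreaves_et0(t_min=18.0, t_max=32.0, ra_mm_per_day=15.0), 2))
    ```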

  11. The Mine Locomotive Wireless Network Strategy Based on Successive Interference Cancellation

    PubMed Central

    Wu, Liaoyuan; Han, Jianghong; Wei, Xing; Shi, Lei; Ding, Xu

    2015-01-01

    We consider a wireless network strategy based on successive interference cancellation (SIC) for mine locomotives. We first build the original mathematical model for the strategy, which is a non-convex model. We then examine this model intensively and find that certain regularities are embedded in it. Based on these findings, we are able to reformulate the model into a new form and design a simple algorithm that can assign each locomotive a proper transmitting scheme during the whole scheduling procedure. Simulation results show that the outcomes obtained through this algorithm are improved by around 50% compared with those that do not apply the SIC technique. PMID:26569240

  12. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    NASA Astrophysics Data System (ADS)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in urban 3D model precision and data quantity puts forward higher requirements for real-time rendering of digital city models. Improving the organization, management, and scheduling of 3D model data in a 3D digital city can improve rendering effectiveness and efficiency. Taking the complexity of urban models into account, this paper proposes a quadtree construction and scheduled-rendering method for urban 3D models based on weight. Urban 3D models are divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these rendering weights. An algorithm for bounding box extraction based on model drawing primitives is also proposed to generate LOD models automatically. Using the algorithm proposed in this paper, a 3D urban planning and management software package was developed; practice has shown that the algorithm is efficient and feasible, and that the rendering frame rates of both large and small scenes are stable at around 25 frames per second.
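
    The weight-driven organization can be caricatured with a tiny quadtree in which heavier (more detailed) models are pushed to deeper levels, so they are only scheduled for rendering at closer zoom levels; the model names and weights below are hypothetical:

    ```python
    class QuadNode:
        def __init__(self, bounds):
            self.bounds = bounds                  # (xmin, ymin, xmax, ymax)
            self.models, self.children = [], None

        def insert(self, x, y, model, weight):
            if weight == 0:                       # store at the current level
                self.models.append((x, y, model))
                return
            if self.children is None:             # lazily split into four quadrants
                xmin, ymin, xmax, ymax = self.bounds
                xm, ym = (xmin + xmax) / 2, (ymin + ymax) / 2
                self.children = [QuadNode(b) for b in (
                    (xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                    (xmin, ym, xm, ymax), (xm, ym, xmax, ymax))]
            xm = (self.bounds[0] + self.bounds[2]) / 2
            ym = (self.bounds[1] + self.bounds[3]) / 2
            idx = (1 if x >= xm else 0) + (2 if y >= ym else 0)
            self.children[idx].insert(x, y, model, weight - 1)

        def visible(self, level):                 # schedule models up to this LOD level
            out = list(self.models)
            if self.children and level > 0:
                for c in self.children:
                    out += c.visible(level - 1)
            return out

    root = QuadNode((0.0, 0.0, 100.0, 100.0))
    root.insert(10, 10, "terrain_tile", weight=0)
    root.insert(12, 14, "building_block", weight=1)
    root.insert(12.5, 14.2, "detailed_facade", weight=2)
    print([m for _, _, m in root.visible(level=1)])   # facade excluded at this level
    ```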

  13. Teaching strategies and student achievement in high school block scheduled biology classes

    NASA Astrophysics Data System (ADS)

    Louden, Cynthia Knapp

    The objectives of this study included determining whether teachers in block or traditionally scheduled biology classes (1) implement inquiry-based instruction more often or with different methods, (2) understand the concept of inquiry-based instruction as it is described in the National Science Standards, (3) have classes with significantly different student achievement, and (4) believe that their school schedule facilitates their use of inquiry-based instruction in the classroom. Biology teachers in block and non-block scheduled classes were interviewed, surveyed, and observed to determine the degree to which they implement inquiry-based instructional practices in their classrooms. State biology exams were used to indicate student achievement. Teachers in block scheduled and traditional classes used inquiry-based instruction with nearly the same frequency. Approximately 30% of all teachers do not understand the concept of inquiry-based instruction as described by the National Science Standards. No significant achievement differences between block and traditionally scheduled biology classes were found using ANCOVA analyses and a nonequivalent control-group quasi-experimental design. Using the same analysis techniques, significant achievement differences were found between biology classes with teachers who used inquiry-based instruction frequently and infrequently. Teachers in block schedules believed that their schedules facilitated inquiry-based instruction more than teachers in traditional schedules.

  14. Scheduling and Topology Design in Networks with Directional Antennas

    DTIC Science & Technology

    2017-05-19

    ... emergency response networks was recently studied in [14] and [15]. This work examines the topology control problem in group-based wireless networks. [Snippet fragments: Fig. 7 caption, "Max-min throughput ρ versus number of nodes for non-uniform edge capacities"; reference [14], T. Suzuki, et al., "Directional Antenna Control based..."; authors: Thomas Stahlbuhk, Nathaniel M. Jones, Brooke Shrader, Lincoln Laboratory.]

  15. Beam Design and User Scheduling for Nonorthogonal Multiple Access With Multiple Antennas Based on Pareto Optimality

    NASA Astrophysics Data System (ADS)

    Seo, Junyeong; Sung, Youngchul

    2018-06-01

    In this paper, an efficient transmit beam design and user scheduling method is proposed for the multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto optimality. The proposed beam design and user scheduling method groups simultaneously served users into multiple clusters with two users in each cluster (a practical choice), and then applies spatial zero-forcing (ZF) across clusters to control inter-cluster interference (ICI) and Pareto-optimal beam design with successive interference cancellation (SIC) to the two users in each cluster, so as to remove interference to the strong users and improve the signal-to-interference-plus-noise ratios (SINRs) of the interference-experiencing weak users. The proposed method has the flexibility to control the rates of strong and weak users, and numerical results show that the proposed method yields good performance.

  16. Winter Simulation Conference, Miami Beach, Fla., December 4-6, 1978, Proceedings. Volumes 1 & 2

    NASA Technical Reports Server (NTRS)

    Highland, H. J. (Editor); Nielsen, N. R.; Hull, L. G.

    1978-01-01

    The papers report on the various aspects of simulation such as random variate generation, simulation optimization, ranking and selection of alternatives, model management, documentation, data bases, and instructional methods. Simulation studies in a wide variety of fields are described, including system design and scheduling, government and social systems, agriculture, computer systems, the military, transportation, corporate planning, ecosystems, health care, manufacturing and industrial systems, computer networks, education, energy, production planning and control, financial models, behavioral models, information systems, and inventory control.

  17. Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.

    PubMed

    Bühler, Jonas; von Lieres, Eric; Huber, Gregor J

    2018-01-01

    Studies of long-distance transport of tracer isotopes in plants offer high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples per hour. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition for each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples per hour. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase in sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the statistical reliability required of data acquired in future experiments.
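
    The rotating acquisition scheme is easy to sketch: plants take turns in consecutive measurement slots, so each plant's series has regular gaps while overall device utilization stays high (slot length and plant count below are arbitrary):

    ```python
    def rotating_schedule(n_plants, n_slots):
        """Which plant is measured in each time slot (simple round-robin)."""
        return [slot % n_plants for slot in range(n_slots)]

    def slots_per_plant(schedule, n_plants):
        return {p: [i for i, q in enumerate(schedule) if q == p] for p in range(n_plants)}

    schedule = rotating_schedule(n_plants=4, n_slots=12)   # e.g. twelve 5-min slots
    print(schedule)                                        # [0, 1, 2, 3, 0, 1, ...]
    print(slots_per_plant(schedule, 4)[0])                 # slots where plant 0 is measured
    ```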

  18. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model from a given LPV model. The method is developed for multivariate polynomial problems and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved with this method by reformulating the rational problem into polynomial form.

  19. Error Recovery in the Time-Triggered Paradigm with FTT-CAN.

    PubMed

    Marques, Luis; Vasconcelos, Verónica; Pedreiras, Paulo; Almeida, Luís

    2018-01-11

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots.

  20. Error Recovery in the Time-Triggered Paradigm with FTT-CAN

    PubMed Central

    Pedreiras, Paulo; Almeida, Luís

    2018-01-01

    Data networks are naturally prone to interferences that can corrupt messages, leading to performance degradation or even to critical failure of the corresponding distributed system. To improve resilience of critical systems, time-triggered networks are frequently used, based on communication schedules defined at design-time. These networks offer prompt error detection, but slow error recovery that can only be compensated with bandwidth overprovisioning. On the contrary, the Flexible Time-Triggered (FTT) paradigm uses online traffic scheduling, which enables a compromise between error detection and recovery that can achieve timely recovery with a fraction of the needed bandwidth. This article presents a new method to recover transmission errors in a time-triggered Controller Area Network (CAN) network, based on the Flexible Time-Triggered paradigm, namely FTT-CAN. The method is based on using a server (traffic shaper) to regulate the retransmission of corrupted or omitted messages. We show how to design the server to simultaneously: (1) meet a predefined reliability goal, when considering worst case error recovery scenarios bounded probabilistically by a Poisson process that models the fault arrival rate; and, (2) limit the direct and indirect interference in the message set, preserving overall system schedulability. Extensive simulations with multiple scenarios, based on practical and randomly generated systems, show a reduction of two orders of magnitude in the average bandwidth taken by the proposed error recovery mechanism, when compared with traditional approaches available in the literature based on adding extra pre-defined transmission slots. PMID:29324723

  1. New LMI based gain-scheduling control for recovering contact-free operation of a magnetically levitated rotor

    NASA Astrophysics Data System (ADS)

    Wang, M.; Cole, M. O. T.; Keogh, P. S.

    2017-11-01

    A new approach for the recovery of contact-free levitation of a rotor supported by active magnetic bearings (AMB) is assessed through control strategy design, system modelling and experimental verification. The rotor is considered to make contact with a touchdown bearing (TDB), which may lead to entrapment in a bi-stable nonlinear response. A linear matrix inequality (LMI) based gain-scheduling H∞ control technique is introduced to recover the rotor to a contact-free state. The controller formulation involves a time-varying effective stiffness parameter, which can be evaluated in terms of forces transmitted through the TDB. Rather than measuring these forces directly, an observer is introduced with a model of the base structure to transform base acceleration signals using polytopic coordinates for controller adjustment. Force transmission to the supporting base structure will occur either through an AMB alone without contact, or through the AMB and TDB with contact and this must be accounted for in the observer design. The controller is verified experimentally in terms of (a) non-contact robust stability and vibration suppression performance; (b) control action for contact-free recovery at typical running speeds with various unbalance and TDB misalignment conditions; and (c) coast-down experimental tests. The results demonstrate the effectiveness of the AMB control action whenever it operates within its dynamic load capacity.

  2. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    NASA Astrophysics Data System (ADS)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both an optimised communication service and improved system performance. Nowadays, field testing is impractical for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support, and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack lacks global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time-constraint, task-interrelation, processor, and bus sub-models from the upper and lower layers (application, data link, and physical layers). Cross-layer design helps overcome this lack of global optimisation through information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  3. A Multi-layer Dynamic Model for Coordination Based Group Decision Making in Water Resource Allocation and Scheduling

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying

    Management of group decision making is an important issue in water resource management and development. In order to overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model addresses the poor convergence of multi-round decision-making processes in water resource allocation and scheduling. Furthermore, the coordination problem in group decision making under limited resources can be solved based on distance-based group conflict resolution. The simulation results show that the proposed model has better convergence than the existing models.

  4. Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding

    NASA Astrophysics Data System (ADS)

    Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin

    2014-10-01

    Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.

  5. Gain-scheduling multivariable LPV control of an irrigation canal system.

    PubMed

    Bolea, Yolanda; Puig, Vicenç

    2016-07-01

    The purpose of this paper is to present a multivariable linear parameter varying (LPV) controller with a gain-scheduling Smith Predictor (SP) scheme applicable to open-flow canal systems. This SP-based LPV controller is designed taking into account the uncertainty in the delay estimate and the variation of plant parameters with the operating point. The methodology can be applied to a class of delay systems that can be represented by a set of models factorized into a rational multivariable model in series with left/right diagonal (multiple) delays, such as irrigation canals. A multiple-pool canal system is used to test and validate the proposed control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

    Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  7. Cross-Layer Adaptive Feedback Scheduling of Wireless Control Systems

    PubMed Central

    Xia, Feng; Ma, Longhua; Peng, Chen; Sun, Youxian; Dong, Jinxiang

    2008-01-01

    There is a trend towards using wireless technologies in networked control systems. However, the adverse properties of radio channels make it difficult to design and implement control systems in wireless environments. To address the uncertainty in available communication resources in wireless control systems closed over a WLAN, a cross-layer adaptive feedback scheduling (CLAFS) scheme is developed, which takes advantage of the co-design of control and wireless communications. By exploiting cross-layer design, CLAFS adjusts the sampling periods of control systems at the application layer based on information about deadline miss ratio and transmission rate from the physical layer. Within the framework of feedback scheduling, the control performance is maximized through controlling the deadline miss ratio. Key design parameters of the feedback scheduler are adapted to dynamic changes in the channel condition. An event-driven invocation mechanism for the feedback scheduler is also developed. Simulation results show that the proposed approach is efficient in dealing with channel capacity variations and noise interference, thus providing an enabling technology for control over WLAN. PMID:27879934
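
    A minimal sketch of the feedback-scheduling idea (a simplification written for illustration, not the authors' CLAFS implementation): the application layer stretches or shrinks the control-loop sampling periods using the deadline miss ratio and the transmission rate reported by the lower layers. All function names, targets and constants below are assumptions for the example.

```python
def adapt_sampling_periods(periods, miss_ratio, rate_kbps,
                           target_miss=0.05, rate_ref_kbps=250.0,
                           h_min=0.01, h_max=0.5):
    """Feedback-scheduling step: scale the loop sampling periods so that the
    observed deadline miss ratio is driven toward a target, and stretch them
    further when the measured channel rate drops below a reference value.
    All constants here are illustrative."""
    # Proportional correction on the miss ratio (more misses -> longer periods).
    scale = 1.0 + 0.5 * (miss_ratio - target_miss)
    # Additional stretch when the channel is slower than the reference rate.
    scale *= max(1.0, rate_ref_kbps / max(rate_kbps, 1e-6))
    return [min(h_max, max(h_min, h * scale)) for h in periods]

# One event-driven invocation: 12% misses and a degraded 150 kbps channel.
print(adapt_sampling_periods([0.02, 0.05, 0.10], miss_ratio=0.12, rate_kbps=150.0))
```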

  8. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    PubMed

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Moreover, an asynchronous event can demand immediate attention and thus force the launch of a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed; if the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is required. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and hence the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  9. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Moreover, an asynchronous event can demand immediate attention and thus force the launch of a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed; if the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is required. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and hence the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  10. Estimation of distribution algorithm with path relinking for the blocking flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Shao, Zhongshi; Pi, Dechang; Shao, Weishi

    2018-05-01

    This article presents an effective estimation of distribution algorithm, named P-EDA, to solve the blocking flow-shop scheduling problem (BFSP) with the makespan criterion. In the P-EDA, a Nawaz-Enscore-Ham (NEH)-based heuristic and the random method are combined to generate the initial population. Based on several superior individuals provided by a modified linear rank selection, a probabilistic model is constructed to describe the probabilistic distribution of the promising solution space. The path relinking technique is incorporated into EDA to avoid blindness of the search and improve the convergence property. A modified referenced local search is designed to enhance the local exploitation. Moreover, a diversity-maintaining scheme is introduced into EDA to avoid deterioration of the population. Finally, the parameters of the proposed P-EDA are calibrated using a design of experiments approach. Simulation results and comparisons with some well-performing algorithms demonstrate the effectiveness of the P-EDA for solving BFSP.
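
    The fitness evaluation at the core of a BFSP metaheuristic such as the P-EDA is the makespan of a job permutation under the blocking (no intermediate buffer) constraint. The sketch below implements that standard departure-time recurrence; the job data are illustrative and the function is a generic building block rather than the paper's algorithm.

```python
def blocking_makespan(perm, p):
    """Makespan of permutation `perm` for a blocking flow shop.
    p[j][k] is the processing time of job j on machine k; there are no
    buffers between machines, so a finished job blocks its machine until
    the next machine is free.  Standard departure-time recurrence."""
    m = len(p[0])
    d = [0.0] * (m + 1)            # d[k]: departure of the previous job from machine k
    for job in perm:
        new = [0.0] * (m + 1)
        new[0] = d[1]              # start on machine 1 when the predecessor departs it
        for k in range(1, m):
            new[k] = max(new[k - 1] + p[job][k - 1], d[k + 1])
        new[m] = new[m - 1] + p[job][m - 1]
        d = new
    return d[m]

# Three jobs, two machines; a metaheuristic such as an EDA searches over permutations.
p = [[3, 2], [1, 4], [2, 2]]
print(blocking_makespan([0, 1, 2], p))   # prints 11.0 for this ordering
```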

  11. A Prototype System for a Computer-Based Statewide Film Library Network: A Model for Operation. Final Report.

    ERIC Educational Resources Information Center

    Bidwell, Charles M.; Auricchio, Dominick

    The project set out to establish an operational film scheduling network to improve service to New York State teachers using 16mm educational films. The Network is designed to serve local libraries located in Boards of Cooperative Educational Services (BOCES), regional libraries, and a statewide Syracuse University Film Rental Library (SUFRL). The…

  12. Probabilistic modeling of condition-based maintenance strategies and quantification of its benefits for airliners

    NASA Astrophysics Data System (ADS)

    Pattabhiraman, Sriram

    Airplane fuselage structures are designed with the concept of damage tolerance, wherein small damage is allowed to remain on the airplane, and damage that would otherwise affect the safety of the structure is repaired. Damage critical to the safety of the fuselage is repaired by scheduling maintenance at pre-determined intervals. Scheduling maintenance is an interesting trade-off between damage tolerance and cost: tolerance of larger damage would require less frequent maintenance and hence a lower cost to maintain a certain level of reliability. Alternatively, condition-based maintenance techniques have been developed using on-board sensors, which track damage continuously and request maintenance only when the damage size crosses a particular threshold. This permits tolerance of larger damage than scheduled maintenance, leading to cost savings. This work quantifies the savings of condition-based maintenance over scheduled maintenance, and also quantifies the conversion of those cost savings into weight savings. Structural health monitoring (SHM) will need time to establish itself as a stand-alone system for maintenance, due to concerns about its diagnostic accuracy and reliability. This work therefore also investigates the effect of synchronizing a structural health monitoring system with scheduled maintenance: on-board SHM equipment is used to skip structural airframe maintenance (a subset of scheduled maintenance) whenever it is deemed unnecessary, while maintaining a desired level of structural safety. The work also predicts the necessary maintenance for a fleet of airplanes based on the current damage status of the airplanes, and analyses the possibility of false alarms, wherein maintenance is requested although no critical damage is present on the airplane. Finally, the work uses SHM as a tool to identify lemons in a fleet of airplanes, i.e., those airplanes that would warrant more maintenance trips than the average behavior of the fleet.

  13. Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks

    PubMed Central

    Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong

    2011-01-01

    In a wireless sensor network (WSN), the usage of resources is usually highly related to the execution of tasks which consume a certain amount of computing and communication bandwidth. Parallel processing among sensors is a promising solution to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high performance computing. Although task allocation and scheduling in wired processor networks have been well studied in the past, their counterparts for WSNs remain largely unexplored. Existing traditional high performance computing solutions cannot be directly implemented in WSNs due to limitations such as limited resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization (PSO) algorithm for the dynamic alliance (DPSO-DA), with a well-designed particle position code and fitness function, is proposed. A mutation operator which can effectively improve the algorithm's ability of global search and population diversity is also introduced. Finally, the simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971

  14. Resource management and scheduling policy based on grid for AIoT

    NASA Astrophysics Data System (ADS)

    Zou, Yiqin; Quan, Li

    2017-07-01

    This paper studies resource management and scheduling policy based on grid technology for the Agricultural Internet of Things (AIoT). The variety of complex and heterogeneous agricultural resources in AIoT makes them difficult to represent in a unified way; from an abstract perspective, however, there are common models that can express their characteristics and features. Based on this, we propose a high-level model called the Agricultural Resource Hierarchy Model (ARHM), which can be used for modeling various resources, and introduce an agricultural resource modeling method based on it. Compared with the traditional application-oriented three-layer model, ARHM hides the differences between applications and gives all applications a unified interface layer so that they can be implemented without distinction. Furthermore, the paper proposes a Web Service Resource Framework (WSRF)-based resource management method and its encapsulation structure. Finally, it focuses on a multi-agent-based AG resource scheduler, which is a collaborative service-provider pattern spanning multiple agricultural production domains.

  15. Scheduling Real-Time Mixed-Criticality Jobs

    NASA Astrophysics Data System (ADS)

    Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen

    Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We first demonstrate the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to only two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter is superior to the former. We also show that the speedup factors are tight for these two techniques.

  16. Enhancing Adoption of Irrigation Scheduling to Sustain the Viability of Fruit and Nut Crops in California

    NASA Astrophysics Data System (ADS)

    Fulton, A.; Snyder, R.; Hillyer, C.; English, M.; Sanden, B.; Munk, D.

    2012-04-01

    Adoption of scientific methods to decide when to irrigate and how much water to apply to a crop has increased over the last three decades in California. In 1988, less than 4.3 percent of US farmers employed some type of science-based technique to assist in making irrigation scheduling decisions (USDA, 1995). An ongoing survey in California, representing an industry irrigating nearly 0.4 million planted almond hectares, indicates adoption rates ranging from 38 to 55 percent for crop evapotranspiration (ETc), soil moisture monitoring, plant water status, or some combination of these irrigation scheduling techniques to assist with irrigation management decisions (California Almond Board, 2011). High capital investment to establish fruit and nut crops, the sensitivity of crop performance and longevity to over- and under-irrigation, and increasing costs and competition for water have all contributed to increased adoption of scientific irrigation scheduling methods. These trends in adoption are encouraging, and more opportunities exist to develop improved irrigation scheduling tools, especially computer decision-making models. In 2009 and 2010, an "On-line Irrigation Scheduling Advisory Service" (OISO, 2012), also referred to as Online Irrigation Management (IMO), was used and evaluated in commercial walnut, almond, and French prune orchards in the northern Sacramento Valley of California. This specific model has many features described as the "Next Generation of Irrigation Schedulers" (Hillyer, 2010). While conventional irrigation management involves simply irrigating as needed to avoid crop stress, this IMO is designed to control crop stress, which requires: (i) precise control of crop water availability (rather than controlling applied water); (ii) quantifying crop stress in order to manage it in heterogeneous fields; and (iii) predicting crop responses to water stress. The capacities of this IMO include: 1. modeling of the disposition of applied water in spatially variable fields; 2. conjunctive scheduling for multiple fields, rather than scheduling each field independently; 3. long-range forecasting of crop water requirements to better utilize limited water or limited delivery system capacity; and 4. explicit modeling of the uncertainties of water use and crop yield. This was one of the first efforts to employ a "Next Generation" computer irrigation scheduling advisory model, or IMO, in orchard crops. This paper discusses experiences with introducing this model to fruit and nut growers of various sizes and scales in the northern Sacramento Valley of California and the accuracy of its forecasts of irrigation needs in fruit and nut crops. Strengths and opportunities to forge ahead in the development of a "Next Generation" irrigation scheduler were identified from this on-farm evaluation.

  17. Modeling Temporal Processes in Early Spacecraft Design: Application of Discrete-Event Simulations for Darpa's F6 Program

    NASA Technical Reports Server (NTRS)

    Dubos, Gregory F.; Cornford, Steven

    2012-01-01

    While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can, however, become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedules as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
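
    A toy sketch of the discrete-event idea (not the F6 tool itself): development tasks with uncertain triangular durations are advanced through an event queue, and repeated runs give a distribution of project completion times. The task names, durations and dependencies are invented for the example.

```python
import heapq
import random

def simulate_schedule(tasks, deps, n_runs=1000, seed=0):
    """Monte Carlo discrete-event sketch: each task has a (min, mode, max)
    duration, and deps[t] lists tasks that must finish before t can start.
    Returns one simulated project completion time per run."""
    rng = random.Random(seed)
    completions = []
    for _ in range(n_runs):
        finish = {}
        ready = [(0.0, t) for t in tasks if not deps.get(t)]
        heapq.heapify(ready)
        while ready:
            start, t = heapq.heappop(ready)
            lo, mode, hi = tasks[t]
            finish[t] = start + rng.triangular(lo, hi, mode)
            # Release any task whose predecessors have now all finished.
            for s, pred in deps.items():
                if (s not in finish
                        and all(p in finish for p in pred)
                        and all(s != q for _, q in ready)):
                    heapq.heappush(ready, (max(finish[p] for p in pred), s))
        completions.append(max(finish.values()))
    return completions

tasks = {'design': (3, 4, 8), 'build': (6, 8, 14), 'test': (2, 3, 6)}
deps = {'build': ['design'], 'test': ['build']}
runs = sorted(simulate_schedule(tasks, deps))
print('median completion:', runs[len(runs) // 2])
```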

  18. Research on the ITOC based scheduling system for ship piping production

    NASA Astrophysics Data System (ADS)

    Li, Rui; Liu, Yu-Jun; Hamada, Kunihiro

    2010-12-01

    Manufacturing of ship piping systems is one of the major production activities in shipbuilding. The schedule of pipe production has an important impact on the master schedule of shipbuilding. In this research, the ITOC concept was introduced to solve the scheduling problems of a piping factory, and an intelligent scheduling system was developed. The system, in which a product model, an operation model, a factory model, and a knowledge database of piping production were integrated, automated the planning process and production scheduling. Details of the above points were discussed. Moreover, an application of the system in a piping factory, which achieved a higher level of performance as measured by tardiness, lead time, and inventory, was demonstrated.

  19. Distributed decision-making in electric power system transmission maintenance scheduling using multi-agent systems (MAS)

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    In this work, motivated by the need to coordinate transmission maintenance scheduling among a multiplicity of self-interested entities in restructured power industry, a distributed decision support framework based on multiagent negotiation systems (MANS) is developed. An innovative risk-based transmission maintenance optimization procedure is introduced. Several models for linking condition monitoring information to the equipment's instantaneous failure probability are presented, which enable quantitative evaluation of the effectiveness of maintenance activities in terms of system cumulative risk reduction. Methodologies of statistical processing, equipment deterioration evaluation and time-dependent failure probability calculation are also described. A novel framework capable of facilitating distributed decision-making through multiagent negotiation is developed. A multiagent negotiation model is developed and illustrated that accounts for uncertainty and enables social rationality. Some issues of multiagent negotiation convergence and scalability are discussed. The relationships between agent-based negotiation and auction systems are also identified. A four-step MAS design methodology for constructing multiagent systems for power system applications is presented. A generic multiagent negotiation system, capable of inter-agent communication and distributed decision support through inter-agent negotiations, is implemented. A multiagent system framework for facilitating the automated integration of condition monitoring information and maintenance scheduling for power transformers is developed. Simulations of multiagent negotiation-based maintenance scheduling among several independent utilities are provided. It is shown to be a viable alternative solution paradigm to the traditional centralized optimization approach in today's deregulated environment. This multiagent system framework not only facilitates the decision-making among competing power system entities, but also provides a tool to use in studying competitive industry relative to monopolistic industry.

  20. An adaptive optimal control for smart structures based on the subspace tracking identification technique

    NASA Astrophysics Data System (ADS)

    Ripamonti, Francesco; Resta, Ferruccio; Borroni, Massimo; Cazzulani, Gabriele

    2014-04-01

    A new method for the real-time identification of mechanical system modal parameters is used to design different adaptive control logics aimed at reducing vibrations in a carbon fiber plate smart structure. The plate is instrumented with three piezoelectric actuators, three accelerometers and three strain gauges. The real-time identification is based on a recursive subspace tracking algorithm whose outputs are processed by an ARMA model. A statistical approach is finally applied to choose the correct modal parameter values, which are given as input to model-based control logics such as a gain-scheduling and an adaptive LQR control.

  1. Design and implementation of an experiment scheduling system for the ACTS satellite

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1994-01-01

    The Advanced Communication Technology Satellite (ACTS) was launched on the 12th of September 1993 aboard STS-51. All events since that time have proceeded as planned with user operations commencing on December 6th, 1993. ACTS is a geosynchronous satellite designed to extend the state of the art in communication satellite design and is available to experimenters on a 'time/bandwidth available' basis. The ACTS satellite requires the advance scheduling of experimental activities based upon a complex set of resource, state, and activity constraints in order to ensure smooth operations. This paper describes the software system developed to schedule experiments for ACTS.

  2. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood can make full use of the measured data, providing the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters to simulate a reservoir's time-varying design flood. Considering the relationships between the model parameters and the underlying hypotheses, the over-threshold intensity, the goodness of fit of the Poisson distribution and the design flood parameters are used as criteria for selecting the unit period and threshold of the time-varying design flood, and the time-varying design flood process of the Longyangxia reservoir is derived for nine design frequencies. The time-varying design flood of inflow is closer to the actual reservoir inflow conditions and can be used to adjust the operating water level in the flood season and to plan the utilization of flood water resources in the basin.
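
    A minimal sketch of the sampling idea: over-threshold flood peaks arriving as a nonhomogeneous Poisson process with a time-dependent rate, simulated by Lewis-Shedler thinning. The rate function, horizon and all numbers below are illustrative assumptions, not the Longyangxia calibration.

```python
import math
import random

def simulate_pot_peaks(lambda_t, lambda_max, horizon_days, seed=0):
    """Simulate arrival times of over-threshold flood peaks as a
    nonhomogeneous Poisson process with time-dependent rate lambda_t(t),
    using Lewis-Shedler thinning against a bounding constant rate."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lambda_max)        # candidate from the bounding rate
        if t > horizon_days:
            return events
        if rng.random() < lambda_t(t) / lambda_max:
            events.append(t)                    # accept with probability lambda(t)/lambda_max

# Illustrative flood-season rate: peaks concentrate around day 200 of the year.
rate = lambda t: 0.02 + 0.08 * math.exp(-((t % 365) - 200.0) ** 2 / (2 * 30.0 ** 2))
peaks = simulate_pot_peaks(rate, lambda_max=0.10, horizon_days=365)
print(len(peaks), 'over-threshold peaks in one simulated year')
```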

  3. Computer-aided software development process design

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as the modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.

  4. A user interface for a knowledge-based planning and scheduling system

    NASA Technical Reports Server (NTRS)

    Mulvehill, Alice M.

    1988-01-01

    The objective of EMPRESS (Expert Mission Planning and Replanning Scheduling System) is to support the planning and scheduling required to prepare science and application payloads for flight aboard the US Space Shuttle. EMPRESS was designed and implemented in Zetalisp on a 3600 series Symbolics Lisp machine. Initially, EMPRESS was built as a concept demonstration system. The system has since been modified and expanded to ensure that the data have integrity. Issues underlying the design and development of the EMPRESS-I interface, results from a system usability assessment, and consequent modifications are described.

  5. A Hybrid Procedural/Deductive Executive for Autonomous Spacecraft

    NASA Technical Reports Server (NTRS)

    Pell, Barney; Gamble, Edward B.; Gat, Erann; Kessing, Ron; Kurien, James; Millar, William; Nayak, P. Pandurang; Plaunt, Christian; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    The New Millennium Remote Agent (NMRA) will be the first AI system to control an actual spacecraft. The spacecraft domain places a strong premium on autonomy and requires dynamic recoveries and robust concurrent execution, all in the presence of tight real-time deadlines, changing goals, scarce resource constraints, and a wide variety of possible failures. To achieve this level of execution robustness, we have integrated a procedural executive based on generic procedures with a deductive model-based executive. The procedural executive provides sophisticated control constructs such as loops, parallel activity, locks, and synchronization, which are used for robust schedule execution, hierarchical task decomposition, and routine configuration management. The deductive executive provides algorithms for sophisticated state inference and optimal failure-recovery planning. The integrated executive enables designers to code knowledge via a combination of procedures and declarative models, yielding a rich modeling capability suitable to the challenges of real spacecraft control. The interface between the two executives ensures both that recovery sequences are smoothly merged into high-level schedule execution and that a high degree of reactivity is retained to effectively handle additional failures during recovery.

  6. Education of a model student.

    PubMed

    Novikoff, Timothy P; Kleinberg, Jon M; Strogatz, Steven H

    2012-02-07

    A dilemma faced by teachers, and increasingly by designers of educational software, is the trade-off between teaching new material and reviewing what has already been taught. Complicating matters, review is useful only if it is neither too soon nor too late. Moreover, different students need to review at different rates. We present a mathematical model that captures these issues in idealized form. The student's needs are modeled as constraints on the schedule according to which educational material and review are spaced over time. Our results include algorithms to construct schedules that adhere to various spacing constraints, and bounds on the rate at which new material can be introduced under these schedules.
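
    A greedy sketch of the interleaving idea: each lesson slot is spent either reviewing the most overdue item or introducing new material, with review gaps that grow with the number of viewings. The spacing sequence and the greedy rule are illustrative simplifications, not the constraint model or the algorithms analysed in the paper.

```python
def build_schedule(n_slots, spacing=(1, 2, 4, 8, 16)):
    """Greedy sketch: after its k-th viewing, an item should be reviewed
    again after roughly spacing[k] slots (capped at the last value); when
    nothing is due, new material is introduced."""
    schedule, due, views, next_item = [], {}, {}, 0
    for t in range(n_slots):
        overdue = [i for i, d in due.items() if d <= t]
        if overdue:
            item = min(overdue, key=lambda i: due[i])   # most urgent review
        else:
            item = next_item                            # introduce new material
            views[item] = 0
            next_item += 1
        views[item] += 1
        gap = spacing[min(views[item] - 1, len(spacing) - 1)]
        due[item] = t + gap
        schedule.append(item)
    return schedule

# Item indices in the order they are taught or reviewed over 20 slots.
print(build_schedule(20))
```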

  7. Fault Diagnosis approach based on a model-based reasoner and a functional designer for a wind turbine. An approach towards self-maintenance

    NASA Astrophysics Data System (ADS)

    Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.

    2007-07-01

    The objective of this ongoing research is to develop a design methodology to increase the availability of offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on the existing functionalities provided by the components. The possible solutions range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or the control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR) and a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce the stoppage rate, failure events and maintenance visits, and to maintain energy output, possibly at a reduced rate, until the next scheduled maintenance.

  8. Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem

    DTIC Science & Technology

    1999-12-01

    …solution. The nonlinear least squares model is defined as Y = f(θ, t), where θ is an M-element parameter vector and Y is an N-element vector of all data points. (Master's thesis, Naval Postgraduate School, Monterey, California, December 1999.)

  9. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
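
    A minimal sketch of the scheduled-interpolation idea: trim vectors, state-space matrices and precomputed steady-state Kalman gains tabulated at a few operating points are linearly interpolated with a single scheduling parameter before each filter step. The two-state model, single scheduling variable and all numbers below are illustrative assumptions, far simpler than a real engine model.

```python
import numpy as np

class ScheduledKalmanFilter:
    """Piecewise linear Kalman filter sketch: tables of trim vectors, (A, C)
    matrices and steady-state gains K are interpolated with one scheduling
    parameter (e.g. a corrected speed).  Illustrative only."""
    def __init__(self, points, trims, As, Cs, Ks):
        self.points, self.trims = np.asarray(points), trims
        self.As, self.Cs, self.Ks = As, Cs, Ks

    def _interp(self, tables, p):
        i = np.clip(np.searchsorted(self.points, p) - 1, 0, len(self.points) - 2)
        w = (p - self.points[i]) / (self.points[i + 1] - self.points[i])
        return (1 - w) * tables[i] + w * tables[i + 1]

    def step(self, x, y, p):
        A, C, K = (self._interp(t, p) for t in (self.As, self.Cs, self.Ks))
        x_trim = self._interp(self.trims, p)
        dx_pred = A @ (x - x_trim)                  # propagate deviation from trim
        y_pred = C @ (x_trim + dx_pred)             # predicted measurement
        return x_trim + dx_pred + K @ (y - y_pred)  # gain-scheduled correction

# Two tabulated operating points of a toy 2-state, 1-output model.
pts = [0.0, 1.0]
trims = [np.array([1.0, 0.0]), np.array([2.0, 0.5])]
As = [np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([[0.95, 0.05], [0.0, 0.85]])]
Cs = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.2]])]
Ks = [np.array([[0.3], [0.1]]), np.array([[0.25], [0.08]])]
kf = ScheduledKalmanFilter(pts, trims, As, Cs, Ks)
print(kf.step(x=np.array([1.4, 0.2]), y=np.array([1.5]), p=0.4))
```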

  10. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  11. Model Checking Real Time Java Using Java PathFinder

    NASA Technical Reports Server (NTRS)

    Lindstrom, Gary; Mehlitz, Peter C.; Visser, Willem

    2005-01-01

    The Real Time Specification for Java (RTSJ) is an augmentation of Java for real time applications of various degrees of hardness. The central features of RTSJ are real time threads; user defined schedulers; asynchronous events, handlers, and control transfers; a priority inheritance based default scheduler; non-heap memory areas such as immortal and scoped, and non-heap real time threads whose execution is not impeded by garbage collection. The Robust Software Systems group at NASA Ames Research Center has JAVA PATHFINDER (JPF) under development, a Java model checker. JPF at its core is a state exploring JVM which can examine alternative paths in a Java program (e.g., via backtracking) by trying all nondeterministic choices, including thread scheduling order. This paper describes our implementation of an RTSJ profile (subset) in JPF, including requirements, design decisions, and current implementation status. Two examples are analyzed: jobs on a multiprogramming operating system, and a complex resource contention example involving autonomous vehicles crossing an intersection. The utility of JPF in finding logic and timing errors is illustrated, and the remaining challenges in supporting all of RTSJ are assessed.

  12. Applications of artificial intelligence 1993: Knowledge-based systems in aerospace and industry; Proceedings of the Meeting, Orlando, FL, Apr. 13-15, 1993

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M. (Editor); Uthurusamy, Ramasamy (Editor)

    1993-01-01

    The present volume on applications of artificial intelligence with regard to knowledge-based systems in aerospace and industry discusses machine learning and clustering, expert systems and optimization techniques, monitoring and diagnosis, and automated design and expert systems. Attention is given to the integration of AI reasoning systems and hardware description languages, case-based reasoning, knowledge retrieval and training systems, and scheduling and planning. Topics addressed include the preprocessing of remotely sensed data for efficient analysis and classification, autonomous agents as air combat simulation adversaries, intelligent data presentation for real-time spacecraft monitoring, and an integrated reasoner for diagnosis in satellite control. Also discussed are a knowledge-based system for the design of heat exchangers, reuse of design information for model-based diagnosis, automatic compilation of expert systems, and a case-based approach to handling aircraft malfunctions.

  13. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework; it implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes substantial energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
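
    A minimal genetic-algorithm sketch over job orderings; the estimated_makespan function below is a stand-in for the paper's cluster-performance estimation module, and the population size, operators and job costs are illustrative assumptions.

```python
import random

def estimated_makespan(order, job_cost, n_slots=4):
    """Greedy list scheduling of jobs (in the given order) onto identical
    cluster slots; stands in for a performance-estimation module."""
    slots = [0.0] * n_slots
    for j in order:
        slots[slots.index(min(slots))] += job_cost[j]
    return max(slots)

def ga_schedule(job_cost, pop=30, gens=100, seed=0):
    """Evolve job orderings: truncation selection, order crossover, swap mutation."""
    rng = random.Random(seed)
    n = len(job_cost)
    population = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: estimated_makespan(o, job_cost))
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            if rng.random() < 0.2:
                i, k = rng.sample(range(n), 2)
                child[i], child[k] = child[k], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: estimated_makespan(o, job_cost))
    return best, estimated_makespan(best, job_cost)

costs = [5, 3, 8, 2, 7, 4, 6, 1, 9, 2]
print(ga_schedule(costs))
```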

  14. Calibration and Validation of the COCOMO II.1997.0 Cost/Schedule Estimating Model to the Space and Missile Systems Center Database

    DTIC Science & Technology

    1997-09-01

    Daly chose five models (REVIC, PRICE-S, SEER, System-4, and SPQR/20) to estimate schedule for 21 separate projects from the Electronic System Division… (PRICE-S, two variants of COCOMO, System-3, SPQR/20, SASET, SoftCost-Ada) to eight Ada-specific programs. Ada was specifically designed for and is…

  15. Reactive Scheduling in Multipurpose Batch Plants

    NASA Astrophysics Data System (ADS)

    Narayani, A.; Shaik, Munawar A.

    2010-10-01

    Scheduling is an important operation in process industries for improving resource utilization resulting in direct economic benefits. It has a two-fold objective of fulfilling customer orders within the specified time as well as maximizing the plant profit. Unexpected disturbances such as machine breakdown, arrival of rush orders and cancellation of orders affect the schedule of the plant. Reactive scheduling is generation of a new schedule which has minimum deviation from the original schedule in spite of the occurrence of unexpected events in the plant operation. Recently, Shaik & Floudas (2009) proposed a novel unified model for short-term scheduling of multipurpose batch plants using unit-specific event-based continuous time representation. In this paper, we extend the model of Shaik & Floudas (2009) to handle reactive scheduling.

  16. Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters

    NASA Astrophysics Data System (ADS)

    Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.

    2016-06-01

    Satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution times to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information obtained about a disaster situation changes dynamically, which in turn leads to dynamic imaging requirements from users. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency response requirements and produce an imaging plan that acquires rapid and effective information of the affected area; in the model, the reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. Then a dynamic heuristic algorithm embedding a greedy criterion is designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations on the scenario of the Wenchuan Earthquake. The results show that the simulated imaging plan schedules satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
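
    A toy sketch of the greedy step: observation opportunities for strips of the partitioned target area are taken in order of decreasing reward and accepted when they do not overlap an already accepted window on the same satellite. Satellite names, windows and rewards are invented for the example; the paper's dynamic segmenting and heuristic are more elaborate.

```python
def greedy_schedule(opportunities):
    """Each opportunity is (satellite, start, end, reward) for one strip.
    Accept opportunities by decreasing reward, rejecting any that overlap an
    already accepted window on the same satellite."""
    accepted = []
    busy = {}                                   # satellite -> list of (start, end)
    for sat, start, end, reward in sorted(opportunities, key=lambda o: -o[3]):
        if all(end <= s or start >= e for s, e in busy.get(sat, [])):
            busy.setdefault(sat, []).append((start, end))
            accepted.append((sat, start, end, reward))
    total = sum(r for *_, r in accepted)
    return accepted, total

opps = [('S1', 0, 10, 5.0), ('S1', 8, 18, 7.0), ('S2', 0, 12, 4.0),
        ('S1', 20, 30, 3.0), ('S2', 11, 25, 6.0)]
plan, reward = greedy_schedule(opps)
print('total reward', reward, plan)
```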

  17. Monitoring objects orbiting earth using satellite-based telescopes

    DOEpatents

    Olivier, Scot S.; Pertica, Alexander J.; Riot, Vincent J.; De Vries, Willem H.; Bauman, Brian J.; Nikolaev, Sergei; Henderson, John R.; Phillion, Donald W.

    2015-06-30

    An ephemeris refinement system includes satellites with imaging devices in earth orbit to make observations of space-based objects ("target objects") and a ground-based controller that controls the scheduling of the satellites to make the observations of the target objects and refines orbital models of the target objects. The ground-based controller determines when the target objects of interest will be near enough to a satellite for that satellite to collect an image of the target object based on an initial orbital model for the target objects. The ground-based controller directs the schedules to be uploaded to the satellites, and the satellites make observations as scheduled and download the observations to the ground-based controller. The ground-based controller then refines the initial orbital models of the target objects based on the locations of the target objects that are derived from the observations.

  18. A Market-Based Approach to Multi-factory Scheduling

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Rogers, Alex; MacBeth, Douglas K.; Dutta, Partha; Stranjak, Armin; Jennings, Nicholas R.

    In this paper, we report on the design of a novel market-based approach for decentralised scheduling across multiple factories. Specifically, because of the limitations of scheduling in a centralised manner - which requires a center to have complete and perfect information for optimality and the truthful revelation of potentially commercially private preferences to that center - we advocate an informationally decentralised approach that is both agile and dynamic. In particular, this work adopts a market-based approach for decentralised scheduling by considering the different stakeholders representing different factories as self-interested, profit-motivated economic agents that trade resources for the scheduling of jobs. The overall schedule of these jobs is then an emergent behaviour of the strategic interaction of these trading agents bidding for resources in a market based on limited information and their own preferences. Using a simple (zero-intelligence) bidding strategy, we empirically demonstrate that our market-based approach achieves a lower bound efficiency of 84%. This represents a trade-off between a reasonable level of efficiency (compared to a centralised approach) and the desirable benefits of a decentralised solution.
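
    A toy zero-intelligence sketch of the market mechanism: buyers (jobs needing machine time) and sellers (factories with spare capacity) post random bids and asks bounded by their private valuations, trades clear when a bid meets an ask, and allocative efficiency is measured against the best possible matching. The 84% figure quoted above comes from the paper, not from this simplified example; all data below are invented.

```python
import random

def zi_double_auction(buyer_values, seller_costs, rounds=2000, seed=0):
    """Zero-intelligence traders for one resource type: each round a random
    untraded buyer bids uniformly below its valuation and a random untraded
    seller asks uniformly above its cost; a trade clears when bid >= ask.
    Returns the trades and the fraction of the maximum possible surplus."""
    rng = random.Random(seed)
    buyers = list(range(len(buyer_values)))
    sellers = list(range(len(seller_costs)))
    trades = []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b, s = rng.choice(buyers), rng.choice(sellers)
        bid = rng.uniform(0.0, buyer_values[b])
        ask = rng.uniform(seller_costs[s], max(buyer_values))
        if bid >= ask:
            trades.append((b, s, (bid + ask) / 2.0))
            buyers.remove(b)
            sellers.remove(s)
    surplus = sum(buyer_values[b] - seller_costs[s] for b, s, _ in trades)
    best = sum(max(0.0, v - c) for v, c in
               zip(sorted(buyer_values, reverse=True), sorted(seller_costs)))
    return trades, (surplus / best if best else 1.0)

vals = [10, 9, 8, 7, 6]
costs = [3, 4, 5, 6, 7]
trades, efficiency = zi_double_auction(vals, costs)
print(len(trades), 'trades, efficiency %.2f' % efficiency)
```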

  19. A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy

    PubMed Central

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments. PMID:23690742
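
    A minimal sketch of the rolling-horizon loop for a single observing resource: at each periodic trigger, the tasks that have arrived but are not yet planned are scheduled greedily (earliest deadline first) within the current static interval. The task data, trigger period and horizon length are illustrative; event triggering would simply invoke the same planning step when a burst of tasks arrives.

```python
def rolling_horizon_schedule(tasks, period=10.0, horizon=20.0, end=100.0):
    """Rolling-horizon sketch.  Each task is (arrival, deadline, duration).
    Every `period` time units, plan arrived-but-unplanned tasks by earliest
    deadline first inside the static interval [t, t + horizon]."""
    unplanned = sorted(tasks)                    # sorted by arrival time
    plan, free, t = [], 0.0, 0.0
    while t < end:
        arrived = [x for x in unplanned if x[0] <= t]
        for a, d, dur in sorted(arrived, key=lambda x: x[1]):   # EDF within the interval
            start = max(free, t, a)
            if start + dur <= min(d, t + horizon):
                plan.append((start, start + dur, (a, d, dur)))
                free = start + dur
                unplanned.remove((a, d, dur))
        t += period                              # periodic trigger
    return plan

tasks = [(0, 15, 4), (2, 12, 3), (5, 40, 6), (18, 35, 5), (22, 60, 8)]
for start, finish, task in rolling_horizon_schedule(tasks):
    print('task %s observed %.0f-%.0f' % (task, start, finish))
```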

  20. A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.

    PubMed

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments.

  1. How do current irrigation practices perform? Evaluation of different irrigation scheduling approaches based on experiments and crop model simulations

    NASA Astrophysics Data System (ADS)

    Seidel, Sabine J.; Werisch, Stefan; Barfus, Klemens; Wagner, Michael; Schütze, Niels; Laber, Hermann

    2014-05-01

    The increasing worldwide water scarcity, together with the costs and negative off-site effects of irrigation, makes it necessary to develop irrigation methods that increase water productivity. Various approaches are available for irrigation scheduling. Traditionally, schedules are calculated from soil water balance (SWB) computations using some measure of reference evaporation and empirical crop coefficients. These crop-specific coefficients are provided by the FAO but are also available for particular regions (e.g. Germany). The approach is simple, but simplifications and limitations such as poor transferability introduce several inaccuracies. Crop growth models, which simulate the main physiological plant processes through a set of assumptions and calibration parameters, are widely used to support decision making, but also for yield gap or scenario analyses. One major advantage of mechanistic models over empirical approaches is their spatial and temporal transferability. Irrigation scheduling can also be based on measurements of soil water tension, which is closely related to plant stress; such measurements are precise and easy to automate, but choosing where to probe is difficult, especially in heterogeneous soils. In this study, a two-year field experiment was used to evaluate the three irrigation scheduling approaches mentioned above with respect to their efficiency of irrigation water application, with the aim of promoting better agronomic practices in irrigated horticulture. To evaluate the tested approaches, an extensive plant and soil water data collection was used to precisely calibrate the mechanistic crop model Daisy. The experiment was conducted with white cabbage (Brassica oleracea L.) on a sandy loam field in 2012/13 near Dresden, Germany. Three irrigation scheduling approaches were tested: (i) two schedules were estimated from SWB calculations using different crop coefficients, and (ii) one treatment was automatically drip irrigated using tensiometers (irrigation of 15 mm at a soil tension of -250 hPa at 30 cm soil depth). In treatment (iii), the irrigation schedule was estimated (using the same criteria as in the tension-based treatment) with the model Daisy, partially calibrated against data from 2012. In addition, one control treatment was minimally irrigated. Measured yield was highest for the tension-based treatment with a low irrigation water input (8.5 DM t/ha, 120 mm). Both SWB treatments showed lower yields and higher irrigation water input (both 8.3 DM t/ha, 306 and 410 mm). The simulation-model-based treatment yielded less (7.5 DM t/ha, 106 mm), mainly due to drought stress caused by inaccurate simulation of the soil water dynamics and thus an overestimation of soil moisture. The evaluation using the calibrated model indicated heavy deep percolation under both SWB treatments. To meet the challenge of increasing water productivity, soil water tension-based irrigation should be favoured. Irrigation scheduling based on SWB calculation requires accurate estimates of crop coefficients. A robust calibration of mechanistic crop models requires considerable effort and can be recommended to farmers only to some extent, but it enables comprehensive crop growth and site analyses.
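
    A minimal soil-water-balance sketch of the bookkeeping behind the SWB treatments: daily depletion accumulates as Kc * ET0 minus rain, and an irrigation is scheduled whenever depletion exceeds a readily-available-water threshold. All values (total available water, threshold fraction, application depth, weather series) are illustrative assumptions, not the Daisy model or the field calibration.

```python
def swb_irrigation_schedule(et0, rain, kc, taw=60.0, raw_fraction=0.5,
                            application_mm=15.0):
    """FAO-style soil water balance sketch: deplete the root zone by
    Kc * ET0 each day, refill with rain, and schedule an irrigation of
    `application_mm` whenever depletion exceeds the readily available
    water (raw_fraction * taw)."""
    depletion, schedule = 0.0, []
    for day, (e, r, k) in enumerate(zip(et0, rain, kc)):
        depletion = max(0.0, depletion + k * e - r)
        if depletion > raw_fraction * taw:
            schedule.append((day, application_mm))
            depletion = max(0.0, depletion - application_mm)
    return schedule

# Ten illustrative days of reference ET (mm), rainfall (mm) and crop coefficient.
et0 = [5, 6, 6, 5, 7, 6, 5, 6, 7, 6]
rain = [0, 0, 3, 0, 0, 0, 10, 0, 0, 0]
kc = [1.0] * 10
print(swb_irrigation_schedule(et0, rain, kc))   # [(day, mm), ...]
```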

  2. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights in these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e. demand vs. airfare) curves. Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g. fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  3. A standard protocol for describing individual-based and agent-based models

    USGS Publications Warehouse

    Grimm, Volker; Berger, Uta; Bastiansen, Finn; Eliassen, Sigrunn; Ginot, Vincent; Giske, Jarl; Goss-Custard, John; Grand, Tamara; Heinz, Simone K.; Huse, Geir; Huth, Andreas; Jepsen, Jane U.; Jorgensen, Christian; Mooij, Wolf M.; Muller, Birgit; Pe'er, Guy; Piou, Cyril; Railsback, Steven F.; Robbins, Andrew M.; Robbins, Martha M.; Rossmanith, Eva; Ruger, Nadja; Strand, Espen; Souissi, Sami; Stillman, Richard A.; Vabo, Rune; Visser, Ute; DeAngelis, Donald L.

    2006-01-01

    Simulation models that describe autonomous individual organisms (individual based models, IBM) or agents (agent-based models, ABM) have become a widely used tool, not only in ecology, but also in many other disciplines dealing with complex systems made up of autonomous entities. However, there is no standard protocol for describing such simulation models, which can make them difficult to understand and to duplicate. This paper presents a proposed standard protocol, ODD, for describing IBMs and ABMs, developed and tested by 28 modellers who cover a wide range of fields within ecology. This protocol consists of three blocks (Overview, Design concepts, and Details), which are subdivided into seven elements: Purpose, State variables and scales, Process overview and scheduling, Design concepts, Initialization, Input, and Submodels. We explain which aspects of a model should be described in each element, and we present an example to illustrate the protocol in use. In addition, 19 examples are available in an Online Appendix. We consider ODD as a first step for establishing a more detailed common format of the description of IBMs and ABMs. Once initiated, the protocol will hopefully evolve as it becomes used by a sufficiently large proportion of modellers.

  4. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Walters, Jerry L.; Petrik, Edward J.; Roth, Mary Ellen; Truong, Long Van; Quinn, Todd; Krawczonek, Walter M.

    1990-01-01

    The Autonomous Power Expert (APEX) system was designed to monitor and diagnose fault conditions that occur within the Space Station Freedom Electrical Power System (SSF/EPS) Testbed. APEX is designed to interface with SSF/EPS testbed power management controllers to provide enhanced autonomous operation and control capability. The APEX architecture consists of three components: (1) a rule-based expert system, (2) a testbed data acquisition interface, and (3) a power scheduler interface. Fault detection, fault isolation, justification of probable causes, recommended actions, and incipient fault analysis are the main functions of the expert system component. The data acquisition component requests and receives pertinent parametric values from the EPS testbed and asserts the values into a knowledge base. Power load profile information is obtained from a remote scheduler through the power scheduler interface component. The current APEX design and development work is discussed. Operation and use of APEX by way of the user interface screens is also covered.

  5. Scheduling in the context of resident duty hour reform

    PubMed Central

    2014-01-01

    Fuelled by concerns about resident health and patient safety, there is a general trend in many jurisdictions toward limiting the maximum duration of consecutive work to between 14 and 16 hours. The goal of this article is to assist institutions and residency programs to make a smooth transition from the previous 24- to 36-hour call system to this new model. We will first give an overview of the main types of coverage systems and their relative merits when considering various aspects of patient care and resident pedagogy. We will then suggest a practical step-by-step approach to designing, implementing, and monitoring a scheduling system centred on clinical and educational needs in the context of resident duty hour reform. The importance of understanding the impetus for change and of assessing the need for overall workflow restructuring will be explored throughout this process. Finally, as a practical example, we will describe a large, university-based teaching hospital network’s transition from a traditional call-based system to a novel schedule that incorporates the new 16-hour duty limit. PMID:25561221

  6. Real-time scheduling using minimum search

    NASA Technical Reports Server (NTRS)

    Tadepalli, Prasad; Joshi, Varad

    1992-01-01

    In this paper we consider a simple model of real-time scheduling. We present a real-time scheduling system called RTS, which is based on Korf's Minimin algorithm. Experimental results show that the schedule quality initially improves with the amount of look-ahead search and tapers off quickly. It appears, then, that reasonably good schedules can be produced with a relatively shallow search.
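    The abstract does not give the algorithm in detail, but the flavour of a Minimin-style fixed-depth look-ahead dispatcher can be sketched as follows; the job representation, the tardiness objective, and the dispatching loop are all assumptions made for illustration, not the RTS implementation.

    ```python
    # Minimin-style look-ahead dispatching sketch: commit one job at a time, each
    # chosen as the first decision of the best sequence found by a bounded search.
    from itertools import permutations

    def tardiness(sequence, t0=0):
        """Total tardiness of a job sequence started at time t0."""
        t, total = t0, 0
        for duration, deadline in sequence:
            t += duration
            total += max(0, t - deadline)
        return total

    def minimin_dispatch(jobs, depth=3):
        """Deeper look-ahead tends to give better schedules, with quickly
        diminishing returns, as reported in the abstract."""
        schedule, t = [], 0
        remaining = list(jobs)
        while remaining:
            horizon = min(depth, len(remaining))
            best = min(permutations(remaining, horizon),
                       key=lambda seq: tardiness(seq, t))
            job = best[0]                      # commit only the first decision
            schedule.append(job)
            t += job[0]
            remaining.remove(job)
        return schedule

    jobs = [(3, 5), (2, 4), (4, 12), (1, 3)]   # (duration, deadline)
    print(minimin_dispatch(jobs, depth=2))
    ```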

  7. Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing

    NASA Astrophysics Data System (ADS)

    Suma, T.; Murugesan, R.

    2018-04-01

    The current generation of the manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask industrial robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. The paper discusses two optimization objectives, minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we use an algorithm based on the artificial immune system. The parameters are simulated in MATLAB and the results are compared with existing algorithms; the results show better performance than the existing approaches.

  8. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized, modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions and continuously arriving grid jobs. Received jobs are either allocated on local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functionally ready grid scheduler simulator.

  9. Integrated flight/propulsion control design for a STOVL aircraft using H-infinity control design techniques

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.

    1991-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Experience gained from a preliminary H-infinity-based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced-order controller obtained from the improved H-infinity control design, showing that the design meets the specified nominal performance objectives and provides stability robustness for variations in plant system dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique which accounts for changes in plant control effectiveness with variation in trim conditions is developed, and off-design performance results are presented.

  10. Experimental comparison of conventional and nonlinear model-based control of a mixing tank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeggblom, K.E.

    1993-11-01

    In this case study concerning control of a laboratory-scale mixing tank, conventional multiloop single-input single-output (SISO) control is compared with "model-based" control where the nonlinearity and multivariable characteristics of the process are explicitly taken into account. It is shown, especially if the operating range of the process is large, that the two outputs (level and temperature) cannot be adequately controlled by multiloop SISO control even if gain scheduling is used. By nonlinear multiple-input multiple-output (MIMO) control, on the other hand, very good control performance is obtained. The basic approach to nonlinear control used in this study is first to transform the process into a globally linear and decoupled system, and then to design controllers for this system. Because of the properties of the resulting MIMO system, the controller design is very easy. Two nonlinear control system designs based on a steady-state and a dynamic model, respectively, are considered. In the dynamic case, both setpoint tracking and disturbance rejection can be addressed separately.

  11. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is performed for the Los Angeles environment and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and the simulations of integrated air and surface operations demonstrate that significant traveling time savings, both total and individual surface and air times, can be obtained when more direct routes are allowed to be traveled even in the presence of uncertainty. The resulting routings, however, induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which engenders runway position shifting when comparing resulting runway sequences computed under both deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.
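    As a toy illustration of the sample average approximation step mentioned above, the sketch below replaces an uncertain taxi time with a finite set of sampled scenarios and picks the decision that minimizes the average scenario cost; the cost structure, distribution, and decision variable are invented for the example and are far simpler than the dissertation's integrated scheduler.

    ```python
    # Sample average approximation (SAA) sketch: the uncertain taxi time is
    # replaced by N sampled scenarios, and a single pushback decision is chosen
    # to minimize the average cost over those scenarios.
    import random

    def saa_pushback(n_scenarios=500, seed=7):
        random.seed(seed)
        taxi_samples = [random.gauss(15.0, 3.0) for _ in range(n_scenarios)]  # minutes

        def avg_cost(pushback):
            # early arrivals wait at the runway (cost 1/min); late arrivals miss
            # the slot (cost 5/min); the runway slot is assumed at t = 20 min
            cost = 0.0
            for taxi in taxi_samples:
                arrival = pushback + taxi
                cost += max(0.0, 20.0 - arrival) + 5.0 * max(0.0, arrival - 20.0)
            return cost / n_scenarios

        return min(range(0, 16), key=avg_cost)   # search pushback times 0..15 min

    print(saa_pushback(), "min after gate-ready time")
    ```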

  12. Smart monitoring system based on adaptive current control for superconducting cable test.

    PubMed

    Arpaia, Pasquale; Ballarino, Amalia; Daponte, Vincenzo; Montenero, Giuseppe; Svelto, Cesare

    2014-12-01

    A smart monitoring system for superconducting cable test is proposed with an adaptive current control of a superconducting transformer secondary. The design, based on Fuzzy Gain Scheduling, allows the controller parameters to adapt continuously, and finely, to the working variations arising from transformer nonlinear dynamics. The control system is integrated in a fully digital control loop, with all the related benefits, i.e., high noise rejection, ease of implementation/modification, and so on. In particular, an accurate model of the system, controlled by a Fuzzy Gain Scheduler of the superconducting transformer, was achieved by an experimental campaign through the working domain at several current ramp rates. The model performance was characterized by simulation, under all the main operating conditions, in order to guide the controller design. Finally, the proposed monitoring system was experimentally validated at European Organization for Nuclear Research (CERN) in comparison to the state-of-the-art control system [P. Arpaia, L. Bottura, G. Montenero, and S. Le Naour, "Performance improvement of a measurement station for superconducting cable test," Rev. Sci. Instrum. 83, 095111 (2012)] of the Facility for the Research on Superconducting Cables, achieving a significant performance improvement: a reduction in the system overshoot by 50%, with a related attenuation of the corresponding dynamic residual error (both absolute and RMS) up to 52%.

  13. Gain Scheduling for the Orion Launch Abort Vehicle Controller

    NASA Technical Reports Server (NTRS)

    McNamara, Sara J.; Restrepo, Carolina I.; Madsen, Jennifer M.; Medina, Edgar A.; Proud, Ryan W.; Whitley, Ryan J.

    2011-01-01

    One of NASA's challenges for the Orion vehicle is the control system design for the Launch Abort Vehicle (LAV), which is required to abort safely at any time during the atmospheric ascent portion of flight. The focus of this paper is the gain design and scheduling process for a controller that covers the wide range of vehicle configurations and flight conditions experienced during the full envelope of potential abort trajectories from the pad to exo-atmospheric flight. Several factors are taken into account in the automation process for tuning the gains, including the abort effectors, the environmental changes and the autopilot modes. Gain scheduling is accomplished using a linear quadratic regulator (LQR) approach for the decoupled, simplified linear model throughout the operational envelope in time, altitude and Mach number. The derived gains are then implemented into the full linear model for controller requirement validation. Finally, the gains are tested and evaluated in a non-linear simulation using the vehicle's flight software to ensure performance requirements are met. An overview of the LAV controller design and a description of the linear plant models are presented. Examples of the most significant challenges with the automation of the gain tuning process are then discussed. In conclusion, the paper will consider the lessons learned throughout the process, especially in regard to automation, and examine the usefulness of the gain scheduling tool and process developed as applicable to non-Orion vehicles.
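    A heavily simplified sketch of the LQR gain-scheduling step is shown below: gains are computed for a toy two-state plant at a few Mach breakpoints, stored in a table, and interpolated at run time. The plant, weights, and scheduling variable are placeholders, not the Orion LAV models.

    ```python
    # Tabulate LQR gains over a Mach grid and interpolate them at run time.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(A, B, Q, R):
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)          # K = R^-1 B^T P

    def plant(mach):
        """Toy pitch-axis model whose dynamics vary with Mach number."""
        A = np.array([[0.0, 1.0], [-2.0 * mach, -0.7 - 0.3 * mach]])
        B = np.array([[0.0], [1.0 + 0.5 * mach]])
        return A, B

    Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
    MACH_GRID = [0.3, 0.8, 1.5, 3.0]
    GAIN_TABLE = [lqr_gain(*plant(m), Q, R) for m in MACH_GRID]

    def scheduled_gain(mach):
        """Element-wise linear interpolation of the gain table at the current Mach."""
        stacked = np.stack(GAIN_TABLE)               # shape (len(grid), 1, 2)
        return np.array([[np.interp(mach, MACH_GRID, stacked[:, 0, j])
                          for j in range(stacked.shape[2])]])

    print(scheduled_gain(1.1))
    ```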

  14. Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 3: The GREEDY algorithm

    NASA Technical Reports Server (NTRS)

    Dupnick, E.; Wiggins, D.

    1980-01-01

    The functional specifications, functional design and flow, and the program logic of the GREEDY computer program are described. The GREEDY program is a submodule of the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) program and has been designed as a continuation of the shuttle Mission Payloads (MPLS) program. The MPLS uses input payload data to form a set of feasible payload combinations; from these, GREEDY selects a subset of combinations (a traffic model) so all payloads can be included without redundancy. The program also provides the user a tutorial option so that he can choose an alternate traffic model in case a particular traffic model is unacceptable.
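    A minimal sketch of the greedy idea described above might look like the following, where payload combinations are chosen one at a time so that no payload is duplicated; the data structures and tie-breaking rule are assumptions for illustration, not the GREEDY program itself.

    ```python
    # Greedy selection of payload combinations into a traffic model, covering as
    # many payloads as possible without including any payload twice.
    def greedy_traffic_model(combinations, payloads):
        """combinations: list of frozensets of payload ids."""
        uncovered = set(payloads)
        traffic_model = []
        while uncovered:
            # candidates that introduce no duplicate payloads
            candidates = [c for c in combinations if c <= uncovered]
            if not candidates:
                break                         # leftovers would need new combinations
            best = max(candidates, key=len)   # greedy: cover the most payloads
            traffic_model.append(best)
            uncovered -= best
        return traffic_model, uncovered

    combos = [frozenset({"P1", "P2"}), frozenset({"P3"}),
              frozenset({"P2", "P4"}), frozenset({"P4"})]
    model, left = greedy_traffic_model(combos, {"P1", "P2", "P3", "P4"})
    print(model, left)
    ```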

  15. Deep space network resource scheduling approach and application

    NASA Technical Reports Server (NTRS)

    Eggemeyer, William C.; Bowling, Alan

    1987-01-01

    Deep Space Network (DSN) resource scheduling is the process of distributing ground-based facilities to track multiple spacecraft. The Jet Propulsion Laboratory has carried out extensive research to find ways of automating this process in an effort to reduce time and manpower costs. This paper presents a resource-scheduling system, PLAN-IT, and describes its design philosophy. PLAN-IT's current on-line usage and limitations in scheduling the resources of the DSN are discussed, along with potential enhancements for DSN application.

  16. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2011-01-01

    This paper describes a kernel scheduling algorithm that is based on co-scheduling principles and that is intended for parallel applications running on 1000 cores or more where inter-node scalability is key. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  17. Study and simulation of the TTEthernet protocol on a flight management subsystem, and adaptation of task scheduling for simulation purposes

    NASA Astrophysics Data System (ADS)

    Abidi, Dhafer

    TTEthernet is a deterministic network technology that makes enhancements to Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures tolerant to faults. Simulation is nowadays an essential step in the critical-systems design process and represents valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on an extension of models from the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for TTEthernet networks. To overcome this problem we propose an adaptation, for simulation purposes, of a task scheduling approach based on a formal specification of network constraints. The use of the Yices solver allowed the formal specification to be translated into an executable program that generates the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of time-triggered frame offsets on the performance of each type of traffic in the system.

  18. Education of a model student

    PubMed Central

    Novikoff, Timothy P.; Kleinberg, Jon M.; Strogatz, Steven H.

    2012-01-01

    A dilemma faced by teachers, and increasingly by designers of educational software, is the trade-off between teaching new material and reviewing what has already been taught. Complicating matters, review is useful only if it is neither too soon nor too late. Moreover, different students need to review at different rates. We present a mathematical model that captures these issues in idealized form. The student’s needs are modeled as constraints on the schedule according to which educational material and review are spaced over time. Our results include algorithms to construct schedules that adhere to various spacing constraints, and bounds on the rate at which new material can be introduced under these schedules. PMID:22308334
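    To make the spacing idea concrete, the sketch below schedules one item per slot under per-item minimum and maximum review gaps; the greedy policy and the uniform gaps are assumptions made for illustration, not the algorithms or bounds from the paper.

    ```python
    # Each item must be reviewed no sooner than `min_gap` and no later than
    # `max_gap` slots after its last presentation; new material is introduced
    # whenever no review is due.
    def schedule(min_gap, max_gap, n_slots):
        last_seen = {}            # item -> slot of last presentation
        timeline, next_item = [], 0
        for t in range(n_slots):
            due = [i for i, s in last_seen.items() if s + min_gap <= t <= s + max_gap]
            overdue_now = [i for i in due if last_seen[i] + max_gap == t]
            if overdue_now:                   # must review now or the item is lost
                item = overdue_now[0]
            elif not due:
                item = next_item              # introduce new material
                next_item += 1
            else:
                item = min(due, key=lambda i: last_seen[i])
            last_seen[item] = t
            timeline.append(item)
        return timeline

    print(schedule(min_gap=2, max_gap=4, n_slots=12))
    ```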

  19. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.

  20. Market-Based Approaches to Managing Science Return from Planetary Missions

    NASA Technical Reports Server (NTRS)

    Wessen, Randii R.; Porter, David; Hanson, Robin

    1996-01-01

    A research plan is described for the design and testing of a method for the planning and negotiation of science observations. The research plan is motivated by the fact that the current method, which involves a hierarchical process of science working groups, is unsuitable for the planning of the Cassini mission. The research plan involves a market-based approach in which participants are allocated budgets of scheduling points. The points are used to express an intensity of preference for the observations being scheduled. In this way, the schedulers do not have to limit themselves to solving major conflicts, but try to maximize the number of scheduling points that result in a conflict-free timeline. The fixed budget gives participants an incentive to make explicit tradeoff decisions. A degree of feedback is provided in the process so that the schedulers may rebid based on the current timeline.

  1. Increasing operating room productivity by duration categories and a newsvendor model.

    PubMed

    Lehtonen, Juha-Matti; Torkki, Paulus; Peltokorpi, Antti; Moilanen, Teemu

    2013-01-01

    Previous studies approach surgery scheduling mainly from the mathematical modeling perspective, which is often hard to apply in a practical environment. The aim of this study is to develop a practical scheduling system that considers the advantages of both surgery categorization and the newsvendor model for surgery scheduling. The research was carried out in a Finnish orthopaedic specialist centre that performs only joint replacement surgery. Four surgery categorization scenarios were defined and their productivity analyzed by simulation and the newsvendor model. Detailed analyses of surgery durations and the use of more accurate case categories and their combinations in scheduling improved OR productivity by 11.3 percent compared with the base case. Planning for one OR team to work longer led to a remarkable decrease in scheduling inefficiency. In surgical services, productivity and cost-efficiency can be improved by utilizing historical data in case scheduling and by increasing flexibility in personnel management. The study increases the understanding of practical scheduling methods used to improve efficiency in surgical services.
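    The newsvendor element can be illustrated with the standard critical-fractile calculation for how much OR time to reserve for a surgery category; the cost ratio and the normal-duration assumption below are illustrative, not figures from the study.

    ```python
    # Critical-fractile (newsvendor) solution for reserved OR time: reserve q such
    # that P(duration <= q) = Cu / (Cu + Co), with Cu the per-minute cost of
    # running over (overtime) and Co the cost of reserved-but-unused OR time.
    from statistics import NormalDist

    def reserved_or_time(mean_min, sd_min, overtime_cost, idle_cost):
        critical_fractile = overtime_cost / (overtime_cost + idle_cost)
        return NormalDist(mean_min, sd_min).inv_cdf(critical_fractile)

    # e.g. a surgery category averaging 120 min with 25 min spread, with overtime
    # valued at 3x the cost of idle OR time
    print(round(reserved_or_time(120, 25, overtime_cost=3.0, idle_cost=1.0), 1))
    ```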

  2. Effectiveness of cytopenia prophylaxis for different filgrastim and pegfilgrastim schedules in a chemotherapy mouse model

    PubMed Central

    Scholz, Markus; Ackermann, Manuela; Emmrich, Frank; Loeffler, Markus; Kamprad, Manja

    2009-01-01

    Objectives Recombinant human granulocyte colony-stimulating factor (rhG-CSF) is widely used to treat neutropenia during cytotoxic chemotherapy. The optimal scheduling of rhG-CSF is unknown and can hardly be tested in clinical studies due to numerous therapy parameters affecting outcome (chemotherapeutic regimen, rhG-CSF schedules, individual covariables). Motivated by biomathematical model simulations, we aim to investigate different rhG-CSF schedules in a preclinical chemotherapy mouse model. Methods The time course of hematotoxicity was studied in CD-1 mice after cyclophosphamide (CP) administration. Filgrastim was applied concomitantly in a 2 × 3-factorial design of two dosing options (2 × 20 μg and 4 × 10 μg) and three timing options (directly, one, and two days after CP). Alternatively, a single dose of 40 μg pegfilgrastim was applied at the three timing options. The resulting cytopenia was compared among the schedules. Results Dosing and timing had a significant influence on the effectiveness of filgrastim schedules, whereas for pegfilgrastim the timing effect was irrelevant. The best filgrastim and pegfilgrastim schedules exhibited equivalent toxicity. Monocyte dynamics performed analogously to granulocytes. All schedules showed roughly the same lymphotoxicity. Conclusion We conclude that effectiveness of filgrastim application depends heavily on its scheduling during chemotherapy. There is an optimum of timing. Dose splitting is better than concentrated applications. Effectiveness of pegfilgrastim is less dependent on timing. PMID:19707393

  3. Effectiveness of cytopenia prophylaxis for different filgrastim and pegfilgrastim schedules in a chemotherapy mouse model.

    PubMed

    Scholz, Markus; Ackermann, Manuela; Emmrich, Frank; Loeffler, Markus; Kamprad, Manja

    2009-01-01

    Recombinant human granulocyte colony-stimulating factor (rhG-CSF) is widely used to treat neutropenia during cytotoxic chemotherapy. The optimal scheduling of rhG-CSF is unknown and can hardly be tested in clinical studies due to numerous therapy parameters affecting outcome (chemotherapeutic regimen, rhG-CSF schedules, individual covariables). Motivated by biomathematical model simulations, we aim to investigate different rhG-CSF schedules in a preclinical chemotherapy mouse model. The time course of hematotoxicity was studied in CD-1 mice after cyclophosphamide (CP) administration. Filgrastim was applied concomitantly in a 2 × 3-factorial design of two dosing options (2 × 20 μg and 4 × 10 μg) and three timing options (directly, one, and two days after CP). Alternatively, a single dose of 40 μg pegfilgrastim was applied at the three timing options. The resulting cytopenia was compared among the schedules. Dosing and timing had a significant influence on the effectiveness of filgrastim schedules, whereas for pegfilgrastim the timing effect was irrelevant. The best filgrastim and pegfilgrastim schedules exhibited equivalent toxicity. Monocyte dynamics performed analogously to granulocytes. All schedules showed roughly the same lymphotoxicity. We conclude that effectiveness of filgrastim application depends heavily on its scheduling during chemotherapy. There is an optimum of timing. Dose splitting is better than concentrated applications. Effectiveness of pegfilgrastim is less dependent on timing.

  4. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication, with the advantages of broad bandwidth, large capacity and low power consumption, breaks the bottleneck of traditional microwave satellite communication. Building a space-based information system on high-performance optical inter-satellite communication, with global seamless coverage and mobile terminal access, is a necessary trend in the development of optical satellite communication. Considering the resources, missions and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial intelligent optimization is put forward. For multiple relay satellites, user satellites and optical antennas, and multiple missions with several priority weights, resources are scheduled through two operations: "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight is used as a parameter of the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulation scenario comprising 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the results show that the algorithm obtains satisfactory results in both efficiency and performance, and that the resource scheduling model and the optimization algorithm are suitable for the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.
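    A compact, hypothetical sketch of the priority-weighted fitness idea is given below: each candidate schedule assigns missions to antenna time windows, fitness sums the priority weights of conflict-free assignments, and a generic genetic loop searches over candidates. The operators and data are illustrative, not the paper's exact algorithm.

    ```python
    # Priority-weighted scheduling fitness optimized by a simple genetic loop.
    import random

    MISSIONS = [("m1", 3), ("m2", 5), ("m3", 2), ("m4", 4)]   # (id, priority weight)
    SLOTS = ["antennaA-w1", "antennaA-w2", "antennaB-w1"]     # available time windows

    def fitness(assignment):
        """Sum of priority weights of missions placed in distinct slots."""
        used, total = set(), 0.0
        for (mid, weight), slot in zip(MISSIONS, assignment):
            if slot is not None and slot not in used:
                used.add(slot)
                total += weight
        return total

    def random_assignment():
        return [random.choice(SLOTS + [None]) for _ in MISSIONS]

    def mutate(assignment, rate=0.3):
        return [random.choice(SLOTS + [None]) if random.random() < rate else s
                for s in assignment]

    population = [random_assignment() for _ in range(30)]
    for _ in range(50):                                   # generations
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        population = parents + [mutate(random.choice(parents)) for _ in range(20)]

    best = max(population, key=fitness)
    print(best, fitness(best))
    ```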

  5. Understanding the antiangiogenic effect of metronomic chemotherapy through a simple mathematical model

    NASA Astrophysics Data System (ADS)

    Rodrigues, Diego S.; Mancera, Paulo F. A.; Pinho, Suani T. R.

    2016-12-01

    Despite the current and increasingly successful fight against cancer, there are important open questions concerning the efficiency of its treatment - in particular, the design of oncology chemotherapy protocols. Seeking efficiency, schedules based on more frequent, low doses of drugs, known as metronomic chemotherapy, have been proposed as an alternative to the classical standard protocol of chemotherapy administration. The in silico approach may be very useful for providing a comparative analysis of these two kinds of protocols. In doing so, we found that metronomic schedules are more effective in eliminating tumour cells, mainly due to their chemotherapeutic action on endothelial cells, and that more frequent, low drug doses also lead to outcomes in which the survival time of the patient is increased.
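    The kind of in silico comparison described above can be illustrated with a generic two-compartment ODE (tumour cells plus endothelial support) integrated under two dosing schedules; the equations and parameters below are invented for illustration and are not the model from the paper.

    ```python
    # Compare a standard (high-dose, widely spaced) schedule with a metronomic
    # (frequent, low-dose) schedule of the same cumulative dose using forward Euler.
    def simulate(dose_times, dose, days=60, dt=0.01):
        tumour, endo, drug = 1.0, 1.0, 0.0
        t = 0.0
        while t < days:
            if any(abs(t - d) < dt / 2 for d in dose_times):
                drug += dose                              # bolus administration
            d_tumour = 0.2 * tumour * endo - 0.5 * drug * tumour
            d_endo = 0.05 * endo * (1 - endo) - 0.8 * drug * endo  # antiangiogenic hit
            d_drug = -1.0 * drug                          # first-order clearance
            tumour += dt * d_tumour
            endo += dt * d_endo
            drug += dt * d_drug
            t += dt
        return tumour

    standard = simulate(dose_times=[0, 21, 42], dose=3.0)        # 3-week cycles
    metronomic = simulate(dose_times=range(0, 60, 3), dose=0.45) # frequent low doses
    print(round(standard, 3), round(metronomic, 3))
    ```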

  6. A survey of ground operations tools developed to simulate the pointing of space telescopes and the design for WISE

    NASA Technical Reports Server (NTRS)

    Fabinsky, Beth

    2006-01-01

    WISE, the Wide Field Infrared Survey Explorer, is scheduled for launch in June 2010. The mission operations system for WISE requires a software modeling tool to help plan, integrate and simulate all spacecraft pointing and verify that no attitude constraints are violated. In the course of developing the requirements for this tool, an investigation was conducted into the design of similar tools for other space-based telescopes. This paper summarizes the ground software and processes used to plan and validate pointing for a selection of space telescopes; with this information as background, the design for WISE is presented.

  7. A comparative evaluation of the effect of Internet-based CME delivery format on satisfaction, knowledge and confidence.

    PubMed

    Curran, Vernon R; Fleet, Lisa J; Kirby, Fran

    2010-01-29

    Internet-based instruction in continuing medical education (CME) has been associated with favorable outcomes. However, more direct comparative studies of different Internet-based interventions, instructional methods, presentation formats, and approaches to implementation are needed. The purpose of this study was to conduct a comparative evaluation of two Internet-based CME delivery formats and the effect on satisfaction, knowledge and confidence outcomes. Evaluative outcomes of two differing formats of an Internet-based CME course with identical subject matter were compared. A Scheduled Group Learning format involved case-based asynchronous discussions with peers and a facilitator over a scheduled 3-week delivery period. An eCME On Demand format did not include facilitated discussion and was not based on a schedule; participants could start and finish at any time. A retrospective, pre-post evaluation study design comparing identical satisfaction, knowledge and confidence outcome measures was conducted. Participants in the Scheduled Group Learning format reported significantly higher mean satisfaction ratings in some areas, performed significantly higher on a post-knowledge assessment and reported significantly higher post-confidence scores than participants in the eCME On Demand format that was not scheduled and did not include facilitated discussion activity. The findings support the instructional benefits of a scheduled delivery format and facilitated asynchronous discussion in Internet-based CME.

  8. Reusable Rocket Engine Operability Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Komar, D. R.

    1998-01-01

    This paper describes the methodology, model, input data, and analysis results of a reusable launch vehicle engine operability study conducted with the goal of supporting design from an operations perspective. Paralleling performance analyses in schedule and method, this requires the use of metrics in a validated operations model useful for design, sensitivity, and trade studies. Operations analysis in this view is one of several design functions. An operations concept was developed for a given engine concept, and the predicted operations and maintenance processes were incorporated into simulation models. Historical operations data at a level of detail suitable to model objectives were collected, analyzed, and formatted for use with the models, the simulations were run, and results were collected and presented. The input data used included scheduled and unscheduled timeline and resource information collected into a Space Transportation System (STS) Space Shuttle Main Engine (SSME) historical launch operations database. Results reflect the importance not only of reliable hardware but also of operations and corrective maintenance process improvements.

  9. Information system and website design to support the automotive manufacturing ERP system

    NASA Astrophysics Data System (ADS)

    Amran, T. G.; Azmi, N.; Surjawati, A. A.

    2017-12-01

    This research aims to create an on-time production system design based on the Heijunka model, so that the full product mix can meet time and capacity requirements with production flexibility and high quality, satisfy customer demand, and remain realistic in production, and to create a web-based local-component order information system that supports the Enterprise Resource Planning (ERP) system. The Heijunka leveling model, combining heuristic and stochastic elements, has been implemented for production of up to 3,000 units at Suzuki International Manufacturing. Inefficiency in the existing local order information system demanded a new information system design integrated with ERP. The kaizen that needs to be done is a supplier network through which all vendors can download and use these data, both to deliver components to the company and for their own internal purposes. The design is considered effective in that the model can serve as a solution allowing production to run according to schedule, and efficient in that the model shows a reduction in lost time and stock.

  10. LPV gain-scheduled control of SCR aftertreatment systems

    NASA Astrophysics Data System (ADS)

    Meisami-Azad, Mona; Mohammadpour, Javad; Grigoriadis, Karolos M.; Harold, Michael P.; Franchek, Matthew A.

    2012-01-01

    Hydrocarbons, carbon monoxide and some other polluting emissions produced by diesel engines are usually lower than those produced by gasoline engines. While great strides have been made in the exhaust aftertreatment of vehicular pollutants, the elimination of nitrogen oxide (NOx) from diesel vehicles is still a challenge. The primary reason is that diesel combustion is a fuel-lean process, and hence there is significant unreacted oxygen in the exhaust. Selective catalytic reduction (SCR) is a well-developed technology for power plants and has recently been employed for reducing NOx emissions from automotive sources and, in particular, heavy-duty diesel engines. In this article, we develop a linear parameter-varying (LPV) feedforward/feedback control design method for the SCR aftertreatment system to decrease NOx emissions while keeping ammonia slippage to a desired low level downstream of the catalyst. The performance of the closed-loop system obtained from the interconnection of the SCR system and the output feedback LPV control strategy is then compared with other control design methods, including sliding-mode control and observer-based static state-feedback parameter-varying control. To reduce the computational complexity involved in the control design process, the number of LPV parameters in the developed quasi-LPV (qLPV) model is reduced by applying the principal component analysis technique. An LPV feedback/feedforward controller is then designed for the qLPV model with a reduced number of scheduling parameters. The designed full-order controller is further simplified to a first-order transfer function with a parameter-varying gain and pole. Finally, simulation results using both a low-order model and a high-fidelity, high-order model of SCR reactions in GT-POWER interfaced with MATLAB/SIMULINK illustrate the high NOx conversion efficiency of the closed-loop SCR system using the proposed parameter-varying control law.

  11. Creative Classroom Assignment Through Database Management.

    ERIC Educational Resources Information Center

    Shah, Vivek; Bryant, Milton

    1987-01-01

    The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)

  12. IpexT: Integrated Planning and Execution for Military Satellite Tele-Communications

    NASA Technical Reports Server (NTRS)

    Plaunt, Christian; Rajan, Kanna

    2004-01-01

    The next generation of military communications satellites may be designed as a fast packet-switched constellation of spacecraft able to withstand substantial bandwidth capacity fluctuation in the face of dynamic resource utilization and rapid environmental changes, including jamming of communication frequencies and unstable weather phenomena. We are in the process of designing an integrated scheduling and execution tool which will aid in the analysis of the design parameters needed for building such a distributed system for nominal and battlefield communications. This paper discusses the design of such a system based on a temporal constraint posting planner/scheduler and a smart executive which can cope with a dynamic environment to make better utilization of bandwidth than the current circuit-switched approach.

  13. Re-Engineering Complex Legacy Systems at NASA

    NASA Technical Reports Server (NTRS)

    Ruszkowski, James; Meshkat, Leila

    2010-01-01

    The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of the project and its progress over time, a goal-based design and development approach, a small team of highly specialized people in areas that are critical to the project, and an interactive approach for infusing new technologies into existing processes. This project, which has had a relatively small amount of funding, is on the cutting edge with respect to the utilization of model-based design and systems engineering. An overarching challenge that was overcome by this project was to convince upper management of the needs and merits of giving up more conventional design methodologies (such as paper-based documents and unwieldy and unstructured flow diagrams and schedules) in favor of advanced model-based systems engineering approaches.

  14. Electro-thermal battery model identification for automotive applications

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.

    This paper describes a procedure for identifying an electro-thermal model of lithium-ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states of charge, so the resulting model can be used for automotive applications such as on-board estimation of state-of-charge and state-of-health. The model coefficients are identified using a multiple-step, genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium iron phosphate battery.
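    The model structure described above can be sketched as a first-order equivalent circuit whose parameters are looked up over state-of-charge breakpoints; the breakpoint values below are invented, and simple linear interpolation stands in for the linear splines used in the paper.

    ```python
    # First-order equivalent circuit (R0 + one RC pair) with SOC-scheduled parameters.
    import numpy as np

    SOC_BREAKPOINTS = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    R0_TABLE  = np.array([0.012, 0.010, 0.009, 0.009, 0.011])      # ohm
    R1_TABLE  = np.array([0.020, 0.015, 0.012, 0.013, 0.016])      # ohm
    C1_TABLE  = np.array([900.0, 1100.0, 1300.0, 1250.0, 1000.0])  # farad
    OCV_TABLE = np.array([3.0, 3.25, 3.3, 3.33, 3.4])              # volt

    def terminal_voltage(soc, i_cell, v_rc, dt):
        """One Euler step of the RC branch plus the instantaneous drop across R0.
        Positive current = discharge. Returns (terminal voltage, updated v_rc)."""
        r0 = np.interp(soc, SOC_BREAKPOINTS, R0_TABLE)
        r1 = np.interp(soc, SOC_BREAKPOINTS, R1_TABLE)
        c1 = np.interp(soc, SOC_BREAKPOINTS, C1_TABLE)
        ocv = np.interp(soc, SOC_BREAKPOINTS, OCV_TABLE)
        v_rc += dt * (i_cell / c1 - v_rc / (r1 * c1))    # RC branch dynamics
        return ocv - i_cell * r0 - v_rc, v_rc

    v_rc, soc = 0.0, 0.8
    v_t, v_rc = terminal_voltage(soc, i_cell=10.0, v_rc=v_rc, dt=1.0)
    print(round(v_t, 3))
    ```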

  15. Steel Alloy Hot Roll Simulations and Through-Thickness Variation Using Dislocation Density-Based Modeling

    NASA Astrophysics Data System (ADS)

    Jansen Van Rensburg, G. J.; Kok, S.; Wilke, D. N.

    2017-10-01

    Different roll pass reduction schedules have different effects on the through-thickness properties of hot-rolled metal slabs. In order to assess or improve a reduction schedule using the finite element method, a material model is required that captures the relevant deformation mechanisms and physics. The model should also report relevant field quantities to assess variations in material state through the thickness of a simulated rolled metal slab. In this paper, a dislocation density-based material model with recrystallization is presented and calibrated on the material response of a high-strength low-alloy steel. The model has the ability to replicate and predict material response to a fair degree thanks to the physically motivated mechanisms it is built on. An example study is also presented to illustrate the possible effect different reduction schedules could have on the through-thickness material state and the ability to assess these effects based on finite element simulations.

  16. Analysis and design of gain scheduled control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Shamma, Jeff S.

    1988-01-01

    Gain scheduling, as an idea, is to construct a global feedback control system for a time-varying and/or nonlinear plant from a collection of local time-invariant designs. However, in the absence of a sound analysis, these designs come with no guarantees on the robustness, performance, or even nominal stability of the overall gain-scheduled design. Such an analysis is presented for three types of gain scheduling situations: (1) a linear parameter-varying plant scheduling on its exogenous parameters, (2) a nonlinear plant scheduling on a prescribed reference trajectory, and (3) a nonlinear plant scheduling on the current plant output. Conditions are given which guarantee that the stability, robustness, and performance properties of the fixed operating point designs carry over to the global gain scheduled designs, such as that the scheduling variable should vary slowly and capture the plant's nonlinearities. Finally, an alternate design framework is proposed which removes the slowly varying restriction on gain scheduled systems. This framework addresses some fundamental feedback issues previously ignored in standard gain scheduling.

  17. Schedule Risks Due to Delays in Advanced Technology Development

    NASA Technical Reports Server (NTRS)

    Reeves, John D. Jr.; Kayat, Kamal A.; Lim, Evan

    2008-01-01

    This paper discusses a methodology and modeling capability that probabilistically evaluates the likelihood and impacts of delays in advanced technology development prior to the start of design, development, test, and evaluation (DDT&E) of complex space systems. The challenges of understanding and modeling advanced technology development considerations are first outlined, followed by a discussion of the problem in the context of lunar surface architecture analysis. The current and planned methodologies to address the problem are then presented along with sample analyses and results. The methodology discussed herein provides decision-makers a thorough understanding of the schedule impacts resulting from the inclusion of various enabling advanced technology assumptions within system design.

  18. Design and application of BIM based digital sand table for construction management

    NASA Astrophysics Data System (ADS)

    Fuquan, JI; Jianqiang, LI; Weijia, LIU

    2018-05-01

    This paper explores the design and application of a BIM-based digital sand table for construction management. Given the demands and features of construction management planning for bridge and tunnel engineering, the key functions of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation in BIM, breakdown structures for the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual, virtual engineering information terminal integrated under a unified data standard system. Its applications include visualizing the construction scheme, simulating the construction schedule, and monitoring construction. Finally, the applicability of several basic software packages to the digital sand table is analyzed.

  19. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    NASA Astrophysics Data System (ADS)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    Nowadays, using satellites in space to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever-greater demands on space technology because of the benefits of satellites: wide coverage, timeliness, and freedom from area and national boundaries. At the same time, because of the wide use of satellites, sensors, repeater satellites and ground receiving stations, ground control systems now face great challenges. Therefore, how to extract the greatest value from satellite resources, and thereby make full use of them, has become an important problem for ground control systems. Satellite scheduling distributes resources to tasks without conflict so as to complete as many tasks as possible and meet user requirements, subject to the constraints of satellites, sensors and ground receiving stations. Depending on the size of a task, tasks can be divided into point tasks and area tasks; this paper considers only point targets. The paper first describes the satellite scheduling problem and briefly introduces the theory of satellite scheduling. It also analyzes the resource and task constraints in satellite scheduling and briefly describes the input and output flow of the scheduling process. On the basis of these analyses, we put forward a scheduling model, a multi-variable optimization model for multi-satellite, point-target tasks in swinging mode. In this model, the scheduling problem is transformed into a parametric optimization problem in which the parameter to be optimized is the swinging angle of every time window. With a view to efficiency and accuracy, several important problems related to satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of the model are put forward. To solve the model, we introduce the concept of an activity sequence map, which separates the choice of activity from the choice of its start time. We also introduce three neighborhood operators to search the solution space; the front-movement remaining time and the back-movement remaining time are used to analyze the feasibility of generating solutions from these neighborhood operators. Finally, an algorithm for solving the problem and model is put forward based on a genetic algorithm, with population initialization, crossover, mutation, individual evaluation, collision-decrease, selection and collision-elimination operators designed in the paper. A scheduling result and simulation for a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25. The results show that the model and the algorithm are more effective than those without swinging mode.

  20. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
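    Two of the dataflow bounds mentioned above can be illustrated on a toy graph: schedule latency is bounded below by the graph's critical path, and the iteration period by total work divided by processor count. The graph and latencies below are examples only, not output of the tool.

    ```python
    # Critical-path (latency) bound and processor-limited iteration-period bound
    # for a small acyclic dataflow graph.
    GRAPH = {"read": ["filter"], "filter": ["ctrl"], "gain": ["ctrl"], "ctrl": []}
    LATENCY = {"read": 2, "filter": 5, "gain": 3, "ctrl": 4}

    def critical_path(graph, latency):
        memo = {}
        def longest_from(node):
            if node not in memo:
                memo[node] = latency[node] + max(
                    (longest_from(s) for s in graph[node]), default=0)
            return memo[node]
        return max(longest_from(n) for n in graph)

    def iteration_period_bound(latency, processors):
        return sum(latency.values()) / processors

    print("latency bound:", critical_path(GRAPH, LATENCY))              # 11
    print("period bound (2 procs):", iteration_period_bound(LATENCY, 2))  # 7.0
    ```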

  1. A design fix to supervisory control for fault-tolerant scheduling of real-time multiprocessor systems with aperiodic tasks

    NASA Astrophysics Data System (ADS)

    Devaraj, Rajesh; Sarkar, Arnab; Biswas, Santosh

    2015-11-01

    In the article 'Supervisory control for fault-tolerant scheduling of real-time multiprocessor systems with aperiodic tasks', Park and Cho presented a systematic way of computing a largest fault-tolerant and schedulable language that provides information on whether the scheduler (i.e., supervisor) should accept or reject a newly arrived aperiodic task. The computation of such a language is mainly dependent on the task execution model presented in their paper. However, the task execution model is unable to capture the situation in which a processor fault occurs even before the task has arrived. Consequently, under a task execution model that does not capture this fact, a task may be assigned for execution on a faulty processor. This problem has been illustrated with an appropriate example. Then, the task execution model of Park and Cho has been modified to strengthen the requirement that none of the tasks are assigned for execution on a faulty processor.

  2. Constraint based scheduling for the Goddard Space Flight Center distributed Active Archive Center's data archive and distribution system

    NASA Technical Reports Server (NTRS)

    Short, Nick, Jr.; Bedet, Jean-Jacques; Bodden, Lee; Boddy, Mark; White, Jim; Beane, John

    1994-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational since October 1, 1993. Its mission is to support the Earth Observing System (EOS) by providing rapid access to EOS data and analysis products, and to test Earth Observing System Data and Information System (EOSDIS) design concepts. One of the challenges is to ensure quick and easy retrieval of any data archived within the DAAC's Data Archive and Distribution System (DADS). Over the 15-year life of the EOS project, an estimated several petabytes (10^15 bytes) of data will be permanently stored. Accessing that amount of information is a formidable task that will require innovative approaches. As a precursor of the full EOS system, the GSFC DAAC, with a few terabits of storage, has implemented a prototype of a constraint-based task and resource scheduler to improve the performance of the DADS. This Honeywell Task and Resource Scheduler (HTRS), developed by Honeywell Technology Center in cooperation with the Information Science and Technology Branch/935, the Code X Operations Technology Program, and the GSFC DAAC, makes better use of limited resources, prevents backlogs of data, and provides information about resource bottlenecks and performance characteristics. The prototype, which was developed concurrently with the GSFC Version 0 (V0) DADS, models DADS activities such as ingestion and distribution with priority, precedence, resource requirements (disk and network bandwidth) and temporal constraints. HTRS supports schedule updates, insertions, and retrieval of task information via an Application Program Interface (API). The prototype has demonstrated, with a few examples, the substantial advantages of using HTRS over scheduling algorithms such as a First In First Out (FIFO) queue. The kernel scheduling engine for HTRS, called Kronos, has been successfully applied to several other domains such as space shuttle mission scheduling, demand flow manufacturing, and avionics communications scheduling.

  3. Mission planning optimization of video satellite for ground multi-object staring imaging

    NASA Astrophysics Data System (ADS)

    Cui, Kaikai; Xiang, Junhua; Zhang, Yulin

    2018-03-01

    This study investigates the emergency scheduling problem of ground multi-object staring imaging for a single video satellite. In the proposed mission scenario, the ground objects require a specified duration of staring imaging by the video satellite. The planning horizon is not long, i.e., it is usually shorter than one orbit period. A binary decision variable and the imaging order are used as the design variables, and the total observation revenue combined with the influence of the total attitude maneuvering time is regarded as the optimization objective. Based on the constraints of the observation time windows, satellite attitude adjustment time, and satellite maneuverability, a constraint satisfaction mission planning model is established for ground object staring imaging by a single video satellite. Further, a modified ant colony optimization algorithm with tabu lists (Tabu-ACO) is designed to solve this problem. The proposed algorithm can fully exploit the intelligence and local search ability of ACO. Based on full consideration of the mission characteristics, the design of the tabu lists can reduce the search range of ACO and improve the algorithm efficiency significantly. The simulation results show that the proposed algorithm outperforms the conventional algorithm in terms of optimization performance, and it can obtain satisfactory scheduling results for the mission planning problem.

  4. User-Assisted Store Recycling for Dynamic Task Graph Schedulers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan

    The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because the recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.

  5. An oracle: antituberculosis pharmacokinetics-pharmacodynamics, clinical correlation, and clinical trial simulations to predict the future.

    PubMed

    Pasipanodya, Jotam; Gumbo, Tawanda

    2011-01-01

    Antimicrobial pharmacokinetic-pharmacodynamic (PK/PD) science and clinical trial simulations have not been adequately applied to the design of doses and dose schedules of antituberculosis regimens because many researchers are skeptical about their clinical applicability. We compared findings of preclinical PK/PD studies of current first-line antituberculosis drugs to findings from several clinical publications that included microbiologic outcome and pharmacokinetic data or had a dose-scheduling design. Without exception, the antimicrobial PK/PD parameters linked to optimal effect were similar in preclinical models and in tuberculosis patients. Thus, exposure-effect relationships derived in the preclinical models can be used in the design of optimal antituberculosis doses, by incorporating population pharmacokinetics of the drugs and MIC distributions in Monte Carlo simulations. When this has been performed, doses and dose schedules of rifampin, isoniazid, pyrazinamide, and moxifloxacin with the potential to shorten antituberculosis therapy have been identified. In addition, different susceptibility breakpoints than those in current use have been identified. These steps outline a more rational approach than that of current methods for designing regimens and predicting outcome so that both new and older antituberculosis agents can shorten therapy duration.
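    The Monte Carlo step described above can be sketched as follows: patient exposures are drawn from a population pharmacokinetic distribution, MICs from a pathogen distribution, and the fraction of simulated patients meeting a PK/PD target is reported per dose. The distributions, doses, and target value below are placeholders, not values from the cited studies.

    ```python
    # Monte Carlo probability-of-target-attainment sketch for candidate doses.
    import random

    def target_attainment(dose_mg, n=100_000, auc_mic_target=25.0, seed=1):
        random.seed(seed)
        hits = 0
        for _ in range(n):
            clearance = random.lognormvariate(1.0, 0.3)    # L/h, population PK draw
            auc = dose_mg / clearance                       # mg*h/L
            mic = random.choice([0.06, 0.12, 0.25, 0.5, 1.0])  # MIC distribution draw
            if auc / mic >= auc_mic_target:
                hits += 1
        return hits / n

    for dose in (300, 600, 900):
        print(dose, "mg ->", round(target_attainment(dose), 3))
    ```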

  6. System-level power optimization for real-time distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
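
    One of the ideas above, energy-gradient driven slack allocation, can be illustrated with a toy greedy loop; the task set, deadline, and quadratic energy model are assumptions made for illustration rather than the thesis' formulation.

      # Greedy energy-gradient slack allocation: repeatedly give a small slice of the
      # available schedule slack to the task whose energy drops the most per unit of
      # added execution time.  Task data and the energy model are illustrative.
      tasks = {                       # nominal execution time at full speed (ms)
          "t1": 10.0,
          "t2": 25.0,
          "t3": 15.0,
      }
      DEADLINE = 70.0                 # total time budget on the critical path (ms)
      STEP = 0.5                      # slack granule (ms)

      def energy(nominal, allotted):
          # Dynamic energy ~ f^2 for fixed work; stretching a task to `allotted`
          # scales frequency by nominal/allotted.
          return nominal * (nominal / allotted) ** 2

      alloc = dict(tasks)             # start with no slack assigned
      slack = DEADLINE - sum(tasks.values())
      while slack >= STEP:
          # marginal energy saving of giving STEP more time to each task
          gains = {k: energy(tasks[k], alloc[k]) - energy(tasks[k], alloc[k] + STEP)
                   for k in tasks}
          best = max(gains, key=gains.get)
          alloc[best] += STEP
          slack -= STEP

      for k in sorted(alloc):
          print(f"{k}: {tasks[k]:.1f} ms -> {alloc[k]:.1f} ms")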

  7. Optimization-based manufacturing scheduling with multiple resources and setup requirements

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.

    1998-10-01

    The increasing demand for on-time delivery and low prices forces manufacturers to seek effective schedules that improve the coordination of multiple resources and reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to consider machines and operators simultaneously, with an operator possibly attending several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with the additional penalties embedded in state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented using the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that the method generates high-quality schedules in a timely fashion. Through simultaneous consideration of machines and operators, the two are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
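
    The Lagrangian relaxation machinery referred to above can be sketched in miniature: machine-capacity constraints are relaxed with per-slot multipliers, each job solves its own subproblem, and a subgradient step updates the multipliers. The job data, cost function, and step-size rule are illustrative, and the paper's dynamic-programming subproblem is replaced here by a trivial enumeration.

      # Generic subgradient loop for Lagrangian relaxation of machine-capacity
      # constraints: each job independently picks a time slot given the current
      # multipliers, then the multipliers are raised where capacity is exceeded.
      SLOTS = range(8)                       # discrete time slots
      CAPACITY = 1                           # one job per slot on the shared machine
      jobs = {"j1": 3, "j2": 3, "j3": 5}     # due slot of each job (illustrative)

      def tardiness_cost(job, slot):
          return max(0, slot - jobs[job]) * 10.0

      lam = {t: 0.0 for t in SLOTS}          # Lagrange multipliers per slot
      for it in range(50):
          # Solve each job's (now independent) subproblem by enumeration.
          choice = {j: min(SLOTS, key=lambda t: tardiness_cost(j, t) + lam[t])
                    for j in jobs}
          # Subgradient = slot usage minus capacity; step size shrinks over iterations.
          step = 2.0 / (it + 1)
          for t in SLOTS:
              usage = sum(1 for j in jobs if choice[j] == t)
              lam[t] = max(0.0, lam[t] + step * (usage - CAPACITY))

      print("slot assignment after relaxation:", choice)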

  8. Linux Kernel Co-Scheduling and Bulk Synchronous Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Terry R

    2012-01-01

    This paper describes a kernel scheduling algorithm that is based on coscheduling principles and that is intended for parallel applications running on 1000 cores or more. Experimental results for a Linux implementation on a Cray XT5 machine are presented. The results indicate that Linux is a suitable operating system for this new scheduling scheme, and that this design provides a dramatic improvement in scaling performance for synchronizing collective operations at scale.

  9. Design of ProjectRun21: a 14-week prospective cohort study of the influence of running experience and running pace on running-related injury in half-marathoners.

    PubMed

    Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik; Malisoux, Laurent; Nielsen, Rasmus Oestergaard

    2017-11-06

    Participation in half-marathons has been increasing steeply during the past decade, and in line with this a vast number of half-marathon running schedules have surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, where injury occurs if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Because load capacity increases along with adaptive running training, the runner's running experience and pace abilities can be used as estimates of load capacity. Since no evidence-based knowledge exists on how to plan appropriate half-marathon running schedules that take the level of running experience and running pace into account, the aim of ProjectRun21 is to investigate the association between running experience or running pace and the risk of running-related injury. Healthy runners between 18 and 65 years of age who use a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one of three half-marathon running schedules developed for the study. Running data will be collected objectively by GPS. Injury will be based on the consensus-based time-loss definition by Yamato et al.: "Running-related (training or competition) musculoskeletal pain in the lower limbs that causes a restriction on or stoppage of running (distance, speed, duration, or training) for at least 7 days or 3 consecutive scheduled training sessions, or that requires the runner to consult a physician or other health professional". Running experience and running pace will be included as primary exposures, while the exposure to running is pre-fixed in the running schedules and thereby conditioned by design. Time-to-event models will be used for analytical purposes. ProjectRun21 will examine whether particular subgroups of runners with certain running experiences and running paces sustain more running-related injuries than other subgroups of runners. This will enable sport coaches, physiotherapists and the runners themselves to evaluate the injury risk of taking up a 14-week running schedule for half-marathon.

  10. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the allocation of space-time resources. To obtain the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, a cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. An efficient method is then designed to solve the shortest path for an urban rail network, which decreases the computing cost of solving the cell transmission model. The instantaneous dynamic user-optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated, providing effective support for the optimization of train schedules and the capacity evaluation of an urban rail transit network. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
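
    The method of successive averages mentioned above can be illustrated on a toy two-route example; the demand, route parameters, and BPR-style congestion curve are assumptions made for illustration and stand in for the full cell transmission model.

      # Method of successive averages (MSA) on a toy two-route corridor with
      # flow-dependent travel times; this sketches only the equilibration step.
      DEMAND = 1000.0                              # passengers per hour

      def travel_time(route, flow):
          free, cap = {"r1": (10.0, 600.0), "r2": (14.0, 900.0)}[route]
          return free * (1.0 + 0.15 * (flow / cap) ** 4)   # BPR-style congestion curve

      flows = {"r1": DEMAND, "r2": 0.0}            # start with everyone on route 1
      for k in range(1, 101):
          times = {r: travel_time(r, f) for r, f in flows.items()}
          shortest = min(times, key=times.get)
          aux = {r: (DEMAND if r == shortest else 0.0) for r in flows}   # all-or-nothing load
          flows = {r: flows[r] + (aux[r] - flows[r]) / k for r in flows}  # MSA step 1/k

      print({r: round(f, 1) for r, f in flows.items()},
            {r: round(travel_time(r, flows[r]), 2) for r in flows})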

  11. Application of decentralized cooperative problem solving in dynamic flexible scheduling

    NASA Astrophysics Data System (ADS)

    Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi

    1995-08-01

    The object of this study is to discuss an intelligent solution to the problem of task allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, with negotiation carried out through message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple-criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible, and by integrating such multiple local schedules, dynamic flexible scheduling for the whole shop floor can be realized.

  12. Applying mathematical modeling to create job rotation schedules for minimizing occupational noise exposure.

    PubMed

    Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A

    2003-01-01

    This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
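
    The time-weighted-average exposure arithmetic behind such schedules can be sketched as follows, using the standard OSHA 5-dB exchange rate and 90 dBA criterion level; the job noise levels and the example rotation are invented for illustration, not the study's data.

      import math

      # OSHA-style daily noise dose and TWA for a given job-rotation schedule
      # (5 dB exchange rate, 90 dBA criterion level).
      def allowed_hours(level_dba):
          return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

      def dose_and_twa(schedule):
          """schedule: list of (hours, dBA) segments for one worker's day."""
          dose = 100.0 * sum(hours / allowed_hours(dba) for hours, dba in schedule)
          twa = 16.61 * math.log10(dose / 100.0) + 90.0
          return dose, twa

      fixed   = [(8.0, 93.0)]                   # no rotation: 8 h at a loud press
      rotated = [(4.0, 93.0), (4.0, 85.0)]      # rotate to a quieter job mid-shift
      for name, sched in (("fixed", fixed), ("rotated", rotated)):
          dose, twa = dose_and_twa(sched)
          print(f"{name:8s} dose = {dose:6.1f}%  TWA = {twa:5.1f} dBA")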

  13. A comparative evaluation of the effect of internet-based CME delivery format on satisfaction, knowledge and confidence

    PubMed Central

    2010-01-01

    Background Internet-based instruction in continuing medical education (CME) has been associated with favorable outcomes. However, more direct comparative studies of different Internet-based interventions, instructional methods, presentation formats, and approaches to implementation are needed. The purpose of this study was to conduct a comparative evaluation of two Internet-based CME delivery formats and the effect on satisfaction, knowledge and confidence outcomes. Methods Evaluative outcomes of two differing formats of an Internet-based CME course with identical subject matter were compared. A Scheduled Group Learning format involved case-based asynchronous discussions with peers and a facilitator over a scheduled 3-week delivery period. An eCME On Demand format did not include facilitated discussion and was not based on a schedule; participants could start and finish at any time. A retrospective, pre-post evaluation study design comparing identical satisfaction, knowledge and confidence outcome measures was conducted. Results Participants in the Scheduled Group Learning format reported significantly higher mean satisfaction ratings in some areas, performed significantly higher on a post-knowledge assessment and reported significantly higher post-confidence scores than participants in the eCME On Demand format that was not scheduled and did not include facilitated discussion activity. Conclusions The findings support the instructional benefits of a scheduled delivery format and facilitated asynchronous discussion in Internet-based CME. PMID:20113493

  14. Analysis of Feeder Bus Network Design and Scheduling Problems

    PubMed Central

    Almasi, Mohammad Hadi; Karim, Mohamed Rehan

    2014-01-01

    A growing concern for public transit is its inability to shift passengers' mode from private to public transport. In order to overcome this problem, a more developed feeder bus network and matched schedules will play important roles. The present paper aims to review some of the studies performed on the Feeder Bus Network Design and Scheduling Problem (FNDSP) based on three distinctive parts of the FNDSP setup, namely, problem description, problem characteristics, and solution approaches. The problems consist of different subproblems including data preparation, feeder bus network design, route generation, and feeder bus scheduling. Subsequently, descriptive analysis and classification of previous works are presented to highlight the main characteristics and solution methods. Finally, some of the issues and trends for future research are identified. This paper is targeted at dealing with the FNDSP to exhibit strategic and tactical goals and also contributes to the unification of the field, which might be a useful complement to the few existing reviews. PMID:24526890

  15. Astronaut Office Scheduling System Software

    NASA Technical Reports Server (NTRS)

    Brown, Estevancio

    2010-01-01

    AOSS is a highly efficient scheduling application that uses various tools to schedule astronauts' weekly appointment information. This program represents an integration of many technologies into a single application to facilitate schedule sharing and management. It is a Windows-based application developed in Visual Basic. Because the NASA standard office automation load environment is Microsoft-based, Visual Basic provides AOSS developers with the ability to interact with Windows collaboration components by accessing object models from applications like Outlook and Excel. This also gives developers the ability to create newly customizable components that perform specialized scheduling and reporting tasks inside the application. With this capability, AOSS can perform various asynchronous tasks, such as gathering, sending, and managing astronauts' schedule information directly to their Outlook calendars at any time.

  16. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  17. Integrated Campaign Probabilistic Cost, Schedule, Performance, and Value for Program Office Support

    NASA Technical Reports Server (NTRS)

    Cornelius, David; Sasamoto, Washito; Daugherty, Kevin; Deacon, Shaun

    2012-01-01

    This paper describes an integrated assessment tool developed at NASA Langley Research Center that incorporates probabilistic analysis of life cycle cost, schedule, launch performance, on-orbit performance, and value across a series of planned space-based missions, or campaign. Originally designed as an aid in planning the execution of missions to accomplish the National Research Council 2007 Earth Science Decadal Survey, it utilizes Monte Carlo simulation of a series of space missions for assessment of resource requirements and expected return on investment. Interactions between simulated missions are incorporated, such as competition for launch site manifest, to capture unexpected and non-linear system behaviors. A novel value model is utilized to provide an assessment of the probabilistic return on investment. A demonstration case is discussed to illustrate the tool utility.

  18. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained that satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  19. Plant operation planning and scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jammar, R.J.

    When properly designed, planning and scheduling can actually add millions of dollars per year to the bottom line. Planning and scheduling is a continuum of decisions starting with crude selection and ending with establishing short-term targets for crude processing and blending. It also includes maintaining optimization and operation simulation models. It is thought that conservatively, a refinery may save from $5 million to $10 million a year if it pays more attention to the processes behind proper planning and scheduling. Of course, the amount of savings can reach staggering proportions for companies now at the bottom of the Solomon Associates Inc. refinery performance ranking.

  20. Measuring the effects of heterogeneity on distributed systems

    NASA Technical Reports Server (NTRS)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been driven mainly by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results indicating that random scheduling may be as good as a more complex scheduler, the algorithm studied here is shown to be consistently better than a random scheduler. This effect is more pronounced at high workloads as well as at high levels of heterogeneity.

  1. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
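
    As a point of reference for what a DAG scheduler must produce, a minimal earliest-finish-time list scheduler on a heterogeneous two-node system is sketched below; the task graph, execution costs, and communication cost are illustrative, and this greedy baseline is not the TMSCRO metaheuristic itself.

      dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}   # task -> predecessors
      cost = {                       # execution time of each task on each node
          "a": {"n1": 3, "n2": 4},
          "b": {"n1": 6, "n2": 3},
          "c": {"n1": 4, "n2": 5},
          "d": {"n1": 5, "n2": 4},
      }
      COMM = 2                       # extra cost when a predecessor ran on another node

      finish, placed = {}, {}
      node_free = {"n1": 0, "n2": 0}
      ready = [t for t, preds in dag.items() if not preds]
      while ready:
          t = ready.pop(0)
          best = None                # (earliest finish time, node)
          for n in node_free:
              est = max([node_free[n]] +
                        [finish[p] + (0 if placed[p] == n else COMM) for p in dag[t]])
              eft = est + cost[t][n]
              if best is None or eft < best[0]:
                  best = (eft, n)
          finish[t], placed[t] = best
          node_free[placed[t]] = finish[t]
          ready += [u for u, preds in dag.items()
                    if u not in finish and u not in ready
                    and all(p in finish for p in preds)]

      print("makespan:", max(finish.values()), "placement:", placed)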

  2. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems.

  3. Managing a big ground-based astronomy project: the Thirty Meter Telescope (TMT) project

    NASA Astrophysics Data System (ADS)

    Sanders, Gary H.

    2008-07-01

    TMT is a big science project and its scale is greater than previous ground-based optical/infrared telescope projects. This paper will describe the ideal "linear" project and how the TMT project departs from that ideal. The paper will describe the needed adaptations to successfully manage real world complexities. The progression from science requirements to a reference design, the development of a product-oriented Work Breakdown Structure (WBS) and an organization that parallels the WBS, the implementation of system engineering, requirements definition and the progression through Conceptual Design to Preliminary Design will be summarized. The development of a detailed cost estimate structured by the WBS, and the methodology of risk analysis to estimate contingency fund requirements will be summarized. Designing the project schedule defines the construction plan and, together with the cost model, provides the basis for executing the project guided by an earned value performance measurement system.

  4. Future applications of artificial intelligence to Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Friedland, Peter

    1991-01-01

    Future applications of artificial intelligence to Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; in-house research program; constraint-based scheduling; learning and performance improvement for scheduling; GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.

  5. Composable Flexible Real-time Packet Scheduling for Networks on-Chip

    DTIC Science & Technology

    2012-05-16

    … real-time flows need to be composable. We set this as the design goal for our packet scheduling discipline developed in this paper. … the packet with the closest deadline is chosen to forward to the next router. … We assume a traffic model for real-time flows similar to the one used …

  6. Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan

    2005-01-01

    A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack right during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and flight transition between low and high angles of attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantee.

  7. Quick Fix for Managing Risks

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Under a Phase II SBIR contract, Kennedy and Lumina Decision Systems, Inc., jointly developed the Schedule and Cost Risk Analysis Modeling (SCRAM) system, based on a version of Lumina's flagship software product, Analytica(R). Acclaimed as "the best single decision-analysis program yet produced" by MacWorld magazine, Analytica is a "visual" tool used in decision-making environments worldwide to build, revise, and present business models, minus the time-consuming difficulty commonly associated with spreadsheets. With Analytica as their platform, Kennedy and Lumina created the SCRAM system in response to NASA's need to identify the importance of major delays in Shuttle ground processing, a critical function in project management and process improvement. As part of the SCRAM development project, Lumina designed a version of Analytica called the Analytica Design Engine (ADE) that can be easily incorporated into larger software systems. ADE was commercialized and utilized in many other developments, including web-based decision support.

  8. A quantum physical design flow using ILP and graph drawing

    NASA Astrophysics Data System (ADS)

    Yazdani, Maryam; Saheb Zamani, Morteza; Sedighi, Mehdi

    2013-10-01

    Implementing large-scale quantum circuits is one of the challenges of quantum computing. One of the central challenges of accurately modeling the architecture of these circuits is to schedule a quantum application and generate the layout while taking into account the cost of communications and classical resources as well as the maximum exploitable parallelism. In this paper, we present and evaluate a design flow for arbitrary quantum circuits in ion trap technology. Our design flow consists of two parts. First, a scheduler takes a description of a circuit and finds the best order for the execution of its quantum gates using integer linear programming regarding the classical resources (qubits) and instruction dependencies. Then a layout generator receives the schedule produced by the scheduler and generates a layout for this circuit using a graph-drawing algorithm. Our experimental results show that the proposed flow decreases the average latency of quantum circuits by about 11 % for a set of attempted benchmarks and by about 9 % for another set of benchmarks compared with the best in literature.
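
    The dependency structure that such an ILP scheduler optimizes over can be seen in a much simpler as-soon-as-possible pass, sketched below; the gate list, durations, and serial-qubit assumption are illustrative, and this greedy pass is only a baseline, not the paper's integer-programming formulation.

      # Greedy ASAP scheduling of a small gate list: each gate starts once the
      # qubits it touches are free.  Program order encodes the dependencies.
      gates = [                      # (name, qubits, duration)
          ("H",    ["q0"],        1),
          ("CNOT", ["q0", "q1"],  3),
          ("X",    ["q2"],        1),
          ("CNOT", ["q1", "q2"],  3),
          ("MEAS", ["q0"],        2),
      ]

      qubit_free = {}                # qubit -> time at which it becomes available
      schedule = []
      for name, qubits, dur in gates:
          start = max((qubit_free.get(q, 0) for q in qubits), default=0)
          for q in qubits:
              qubit_free[q] = start + dur
          schedule.append((start, start + dur, name, qubits))

      for s, e, name, qs in schedule:
          print(f"[{s:2d},{e:2d}) {name:4s} on {','.join(qs)}")
      print("latency:", max(e for _, e, _, _ in schedule))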

  9. Study of the impact of cruise speed on scheduling and productivity of commercial transport aircraft

    NASA Technical Reports Server (NTRS)

    Bond, E. Q.; Carroll, E. A.; Flume, R. A.

    1977-01-01

    A comparison is made between airplane productivity and utilization levels derived from commercial airline type schedules which were developed for two subsonic and four supersonic cruise speed aircraft. The cruise speed component is the only difference between the schedules which are based on 1995 passenger demand forecasts. Productivity-to-speed relationships were determined for the three discrete route systems: North Atlantic, Trans-Pacific, and North-South America. Selected combinations of these route systems were also studied. Other areas affecting the productivity-to-speed relationship such as aircraft design range and scheduled turn time were examined.

  10. Physics-based deformable organisms for medical image analysis

    NASA Astrophysics Data System (ADS)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  11. Naval Enterprise Engineering: Design, Innovate and Train (NEEDIT)

    DTIC Science & Technology

    2015-03-04

    … we are somewhat able to stand on the shoulders of giants in Naval Engineering and inherit baseline designs that, in most cases, represent a … Set-Based Design maximizes design flexibility (Lamb 2003). However, it may add some risk to schedule or require program managers to trust that the … spiral or set-based design. A design philosophy is a weighted list of attributes used in the evaluation of alternatives (Lamb 2003). …

  12. Feelings of energy, exercise-related self-efficacy, and voluntary exercise participation.

    PubMed

    Yoon, Seok; Buckworth, Janet; Focht, Brian; Ko, Bomna

    2013-12-01

    This study used a path analysis approach to examine the relationship between feelings of energy, exercise-related self-efficacy beliefs, and exercise participation. A cross-sectional mailing survey design was used to measure feelings of physical and mental energy, task and scheduling self-efficacy beliefs, and voluntary moderate and vigorous exercise participation in 368 healthy, full-time undergraduate students (mean age = 21.43 ± 2.32 years). The path analysis revealed that the hypothesized path model had a strong fit to the study data. The path model showed that feelings of physical energy had significant direct effects on task and scheduling self-efficacy beliefs as well as exercise behaviors. In addition, scheduling self-efficacy had direct effects on moderate and vigorous exercise participation. However, there was no significant direct relationship between task self-efficacy and exercise participation. The path model also revealed that scheduling self-efficacy partially mediated the relationship between feelings of physical energy and exercise participation.

  13. Understanding Activity Engagement Across Weekdays and Weekend Days: A Multivariate Multiple Discrete-Continuous Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garikapati, Venu; Astroza, Sebastian; Bhat, Prerna C.

    This paper is motivated by the increasing recognition that modeling activity-travel demand for a single day of the week, as is done in virtually all travel forecasting models, may be inadequate in capturing underlying processes that govern activity-travel scheduling behavior. The considerable variability in daily travel suggests that there are important complementary relationships and competing tradeoffs involved in scheduling and allocating time to various activities across days of the week. Both limited survey data availability and methodological challenges in modeling week-long activity-travel schedules have precluded the development of multi-day activity-travel demand models. With passive and technology-based data collection methods increasingly in vogue, the collection of multi-day travel data may become increasingly commonplace in the years ahead. This paper addresses the methodological challenge associated with modeling multi-day activity-travel demand by formulating a multivariate multiple discrete-continuous probit (MDCP) model system. The comprehensive framework ties together two MDCP model components, one corresponding to weekday time allocation and the other to weekend activity-time allocation. By tying the two MDCP components together, the model system also captures relationships in activity-time allocation between weekdays on the one hand and weekend days on the other. Model estimation on a week-long travel diary data set from the United Kingdom shows that there are significant inter-relationships between weekdays and weekend days in activity-travel scheduling behavior. The model system presented in this paper may serve as a higher-level multi-day activity scheduler in conjunction with existing daily activity-based travel models.

  14. Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.

    2016-01-01

    Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
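
    A minimal sketch of the time-line mechanism described above, with a planner that queues events and a scheduler that draws them in time order inside a Monte Carlo loop; the event types, rates, mission length, and mitigation probability are invented placeholders, not IMM data.

      import heapq
      import random

      MISSION_DAYS = 180
      EVENTS = {"minor_illness": 1 / 30.0, "equipment_fault": 1 / 60.0}   # events per day

      def one_trial():
          queue = []
          for name, rate in EVENTS.items():
              t = random.expovariate(rate)
              while t < MISSION_DAYS:                      # Poisson arrivals per event type
                  heapq.heappush(queue, (t, name))         # "planner" adds events to the queue
                  t += random.expovariate(rate)
          unresolved = 0
          while queue:
              t, name = heapq.heappop(queue)               # "scheduler" progresses down the time line
              mitigated = random.random() < 0.9            # simple mitigation probability
              if not mitigated:
                  unresolved += 1
          return unresolved

      trials = [one_trial() for _ in range(5000)]
      print("mean unresolved events per mission:", sum(trials) / len(trials))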

  15. Intermittent Metronomic Drug Schedule Is Essential for Activating Antitumor Innate Immunity and Tumor Xenograft Regression

    PubMed Central

    Chen, Chong-Sheng; Doloff, Joshua C; Waxman, David J

    2014-01-01

    Metronomic chemotherapy using cyclophosphamide (CPA) is widely associated with antiangiogenesis; however, recent studies implicate other immune-based mechanisms, including antitumor innate immunity, which can induce major tumor regression in implanted brain tumor models. This study demonstrates the critical importance of drug schedule: CPA induced a potent antitumor innate immune response and tumor regression when administered intermittently on a 6-day repeating metronomic schedule but not with the same total exposure to activated CPA administered on an every 3-day schedule or using a daily oral regimen that serves as the basis for many clinical trials of metronomic chemotherapy. Notably, the more frequent metronomic CPA schedules abrogated the antitumor innate immune and therapeutic responses. Further, the innate immune response and antitumor activity both displayed an unusually steep dose-response curve and were not accompanied by antiangiogenesis. The strong recruitment of innate immune cells by the 6-day repeating CPA schedule was not sustained, and tumor regression was abolished, by a moderate (25%) reduction in CPA dose. Moreover, an ∼20% increase in CPA dose eliminated the partial tumor regression and weak innate immune cell recruitment seen in a subset of the every 6-day treated tumors. Thus, metronomic drug treatment must be at a sufficiently high dose but also sufficiently well spaced in time to induce strong sustained antitumor immune cell recruitment. Many current clinical metronomic chemotherapeutic protocols employ oral daily low-dose schedules that do not meet these requirements, suggesting that they may benefit from optimization designed to maximize antitumor immune responses. PMID:24563621

  16. Temporal and Resource Reasoning for Planning, Scheduling and Execution in Autonomous Agents

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Hunsberger, Luke; Tsamardinos, Ioannis

    2005-01-01

    This viewgraph slide tutorial reviews methods for planning and scheduling events. The presentation reviews several methods and uses several examples of scheduling events for the successful and timely completion of the overall plan. Using constraint based models the presentation reviews planning with time, time representations in problem solving and resource reasoning.

  17. Integration of scheduling and discrete event simulation systems to improve production flow planning

    NASA Astrophysics Data System (ADS)

    Krenczyk, D.; Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.

    2016-08-01

    The increased availability of data and of computer-aided technologies such as MRP I/II, ERP and MES systems allows producers to be more adaptive to market dynamics and to improve production scheduling. Integration of production scheduling with computer modelling, simulation and visualization systems can be useful in the analysis of production system constraints related to the efficiency of manufacturing systems. An integration methodology based on a semi-automatic model generation method is proposed to eliminate problems associated with model complexity and with the labour-intensive, time-consuming process of simulation model creation. Data mapping and data transformation techniques for the proposed method have been applied. This approach is illustrated through examples of a practical implementation of the proposed method using the KbRS scheduling system and the Enterprise Dynamics simulation system.

  18. Enabling a New Planning and Scheduling Paradigm

    NASA Technical Reports Server (NTRS)

    Jaap, John; Davis, Elizabeth

    2004-01-01

    The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "tasks models," from the scientists and technologists for the tasks that they want to be done. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next a cadre with knowledge of vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, another cadre further modifies the models to be compatible with the scheduling engine. This last cadre also submits the models to the scheduling engine or builds the timeline manually to accommodate requirements that are expressed in notes. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components. (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphics methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm which can exploit timing flexibilities and constraint and relationship options. (3) A web-based architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems and no knowledge of the internal workings of the scheduling engine have the ability to build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.

  19. An operation support expert system based on on-line dynamics simulation and fuzzy reasoning for startup schedule optimization in fossil power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, H.; Eki, Y.; Kaji, A.

    1993-12-01

    An expert system which can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately is described. The optimum turbine speed-up and load-up pattern is obtained through an iterative procedure based on fuzzy reasoning, which combines quantitative calculations from plant dynamics models with qualitative knowledge in the form of schedule optimization rules with fuzziness. The rules represent relationships between stress margins and modification rates of the schedule parameters. Simulation analysis shows that the system provides quick and accurate plant startups.
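
    The kind of fuzzy rule the abstract alludes to, relating the remaining stress margin to a modification of the speed-up rate, might look like the following; the membership functions, rule consequents, and numbers are assumptions made for illustration only.

      def tri(x, a, b, c):
          """Triangular membership function with support (a, c) and peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def adjust_rampup(rate_rpm_per_min, stress_margin_pct):
          # Fuzzify the rotor stress margin (all shapes are illustrative).
          small  = tri(stress_margin_pct, -10, 0, 15)     # margin nearly exhausted
          medium = tri(stress_margin_pct,  5, 20, 35)
          large  = tri(stress_margin_pct, 25, 50, 80)     # plenty of margin left
          # Rule consequents: relative change of the speed-up rate.
          change = small * -0.30 + medium * 0.0 + large * 0.20
          weight = small + medium + large
          return rate_rpm_per_min * (1.0 + change / weight) if weight else rate_rpm_per_min

      print(adjust_rampup(100.0, stress_margin_pct=8.0))    # slow down: small margin
      print(adjust_rampup(100.0, stress_margin_pct=60.0))   # speed up: large margin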

  20. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Because of the features of virtual machines (flexibility, easy control, and varied system environments), more and more fields, including high energy physics, use virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. First, resources belong to different experiment groups, and user groups map to resource groups (the same as experiment groups) either one-to-one or many-to-one. To keep this grouping simple to manage, a permission-controlling component was designed to ensure that each resource group receives suitable jobs. Second, to allocate resources elastically to the appropriate resource group, resources must be scheduled in much the same way as jobs; the cloud resource scheduler therefore maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource group. Third, when resources have been occupied for a long time, they may need to be preempted, so a preemption function based on group priority is added to the resource scheduling. Preemption is soft: when virtual resources are preempted, jobs are not killed but held and rematched later. This is implemented with the help of HTCondor by storing the held job information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), a batch system based on HTCondor has been built on a virtual resource pool based on OpenStack, and this paper presents cases from the JUNO and LHAASO experiments. The results indicate that the multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission-controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its scale will be expanded to more experiments, including DYW and BES, in the first half of the year. This is evidence that the permission controlling is efficient.
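
    The group-priority, soft-preemption policy can be sketched at the slot level as follows; the group names, priorities, and slot pool are illustrative, and this is a schematic of the policy rather than HTCondor's actual negotiator or the paper's implementation.

      # Soft preemption over a pool of virtual-machine slots: a higher-priority
      # group takes idle slots first, then reclaims slots of lower-priority groups,
      # whose jobs are marked as held (to be rematched later) rather than killed.
      slots = [                                # each slot: owning group or None (idle)
          {"id": 1, "group": "JUNO"},
          {"id": 2, "group": "JUNO"},
          {"id": 3, "group": None},
      ]
      PRIORITY = {"LHAASO": 3, "JUNO": 2, "CMS": 1}
      held_jobs = []

      def request(group, n):
          granted = []
          # 1) take idle slots first
          for s in slots:
              if len(granted) < n and s["group"] is None:
                  s["group"] = group
                  granted.append(s["id"])
          # 2) softly preempt lower-priority groups if still short
          for s in slots:
              if len(granted) >= n:
                  break
              if s["group"] not in (None, group) and PRIORITY[s["group"]] < PRIORITY[group]:
                  held_jobs.append({"slot": s["id"], "group": s["group"]})  # hold, don't kill
                  s["group"] = group
                  granted.append(s["id"])
          return granted

      print("LHAASO gets slots:", request("LHAASO", 2))
      print("held for rematch:", held_jobs)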

  1. Nambe Pueblo Water Budget and Forecasting model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brainard, James Robert

    2009-10-01

    This report documents The Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components, the Water Forecast Component, Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigated Agriculture component using critical information concerning diversion rates, acreages, ditch dimensions and seepage rates. Results from this section are used in the Water Demand Forecast, Irrigation Scheduling, and the Reservoir Model components. The Reservoir Component contains two sections, (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model where historic, pre-dam or above dam USGS stream flow data is fed into the model and releases are calculated.

  2. A real-time Excel-based scheduling solution for nursing staff reallocation.

    PubMed

    Tuominen, Outi Anneli; Lundgren-Laine, Heljä; Kauppila, Wiveka; Hupli, Maija; Salanterä, Sanna

    2016-09-30

    Aim This article describes the development and testing of an Excel-based scheduling solution for the flexible allocation and reallocation of nurses to cover sudden, unplanned absences among permanent nursing staff. Method A quasi-experimental, one group, pre- and post-test study design was used (Box 1) with total sampling. Participants (n=17) were selected purposefully by including all ward managers (n=8) and assistant ward managers (n=9) from one university hospital department. The number of sudden absences among the nursing staff was identified during two 4-week data collection periods (pre- and post-test). Results During the use of the paper-based scheduling system, 121 absences were identified; during the use of the Excel-based system, 106 were identified. The main reasons for the use of flexible 'floating' nurses were sick leave (n=66) and workload (n=31). Other reasons (n=29) included patient transfer to another hospital, scheduling errors and the start or end of employment. Conclusion The Excel-based scheduling solution offered better support in obtaining substitute labour inside the organisation, with smaller employment costs. It also reduced the number of tasks ward managers had to carry out during the process of reallocating staff.

  3. Scheduler software for tracking and data relay satellite system loading analysis: User manual and programmer guide

    NASA Technical Reports Server (NTRS)

    Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.

    1980-01-01

    A user guide and programmer documentation is provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses on the Tracking and Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real-time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss of sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.

  4. Changed nursing scheduling for improved safety culture and working conditions - patients' and nurses' perspectives.

    PubMed

    Kullberg, Anna; Bergenmar, Mia; Sharp, Lena

    2016-05-01

    To evaluate fixed scheduling compared with self-scheduling for nursing staff in oncological inpatient care with regard to patient and staff outcomes. Various scheduling models have been tested to attract and retain nursing staff. Little is known about how these schedules affect staff and patients. Fixed scheduling and self-scheduling have been studied to a small extent, solely from a staff perspective. We implemented fixed scheduling on two of four oncological inpatient wards. Two wards kept self-scheduling. Through a quasi-experimental design, baseline and follow-up measurements were collected among staff and patients. The Safety Attitudes Questionnaire was used among staff, as well as study-specific questions for patients and staff. Fixed scheduling was associated with less overtime and fewer possibilities to change shifts. Self-scheduling was associated with more requests from management for short notice shift changes. The type of scheduling did not affect patient-reported outcomes. Fixed scheduling should be considered in order to lower overtime. Further research is necessary and should explore patient outcomes to a greater extent. Scheduling is a core task for nurse managers. Our study suggests fixed scheduling as a strategy for managers to improve the effective use of resources and safety.

  5. Study of Collaborative Management for Transportation Construction Project Based on BIM Technology

    NASA Astrophysics Data System (ADS)

    Jianhua, Liu; Genchuan, Luo; Daiquan, Liu; Wenlei, Li; Bowen, Feng

    2018-03-01

    Building Information Modeling (BIM) is a building modeling technology based on the relevant information and data of a construction project. It is an advanced technology and management concept that is widely used throughout the whole life cycle of planning, design, construction and operation. Based on BIM technology, collaborative management of transportation construction projects benefits from better communication through realistic simulation and architectural visualization, and from access to basic and real-time information such as project schedule, engineering quality, cost and environmental impact. The main services of highway construction management are integrated on a unified BIM platform for collaborative management to realize information intercommunication and exchange, to change the formerly isolated handling of information, and to improve the level of information management. The final BIM model integrates not only project information management and the preliminary documents and design drawings, but also the automatic generation of completion data and final accounts; it covers the whole life cycle of traffic construction projects and lays a good foundation for smart highway construction.

  6. Design Tech High School: d.tech

    ERIC Educational Resources Information Center

    EDUCAUSE, 2015

    2015-01-01

    A Bay Area charter high school, d.tech develops "innovation-ready" students by combining content knowledge with the design thinking process while fostering a sense of autonomy and purpose. The academic model is grounded in self-paced learning through a flex schedule, high standards, and design thinking through a four-year design…

  7. Design of pharmaceutical products to meet future patient needs requires modification of current development paradigms and business models.

    PubMed

    Stegemann, S; Baeyens, J-P; Becker, R; Maio, M; Bresciani, M; Shreeves, T; Ecker, F; Gogol, M

    2014-06-01

    Drugs represent the most common intervention strategy for managing acute and chronic medical conditions. In light of demographic change and the increasing age of patients, the classic model of drug research and development by the pharmaceutical industry and drug prescription by physicians is reaching its limits. Different stakeholders, e.g. industry, regulatory authorities, health insurance systems, physicians etc., have at least partially differing interests regarding the process of healthcare provision. The primary responsibility for the correct handling of medication and adherence to treatment schedules lies with the recipient of a drug-based therapy, i.e. the patient. It is thus necessary to interactively involve elderly patients, as well as the other stakeholders, in the development of medication and medication application devices, and in clinical trials. This approach will provide the basis for developing a strategy that better meets patients' needs, thus resulting in improved adherence to treatment schedules and better therapeutic outcomes.

  8. Multiple quay cranes scheduling for double cycling in container terminals

    PubMed Central

    Chu, Yanling; Zhang, Xiaoju; Yang, Zhongzhen

    2017-01-01

    Double cycling is an efficient tool to increase the efficiency of quay crane (QC) in container terminals. In this paper, an optimization model for double cycling is developed to optimize the operation sequence of multiple QCs. The objective is to minimize the makespan of the ship handling operation considering the ship balance constraint. To solve the model, an algorithm based on Lagrangian relaxation is designed. Finally, we compare the efficiency of the Lagrangian relaxation based heuristic with the branch-and-bound method and a genetic algorithm using instances of different sizes. The results of numerical experiments indicate that the proposed model can effectively reduce the unloading and loading times of QCs. The effects of the ship balance constraint are more notable when the number of QCs is high. PMID:28692699

  9. Multiple quay cranes scheduling for double cycling in container terminals.

    PubMed

    Chu, Yanling; Zhang, Xiaoju; Yang, Zhongzhen

    2017-01-01

    Double cycling is an efficient tool to increase the efficiency of quay crane (QC) in container terminals. In this paper, an optimization model for double cycling is developed to optimize the operation sequence of multiple QCs. The objective is to minimize the makespan of the ship handling operation considering the ship balance constraint. To solve the model, an algorithm based on Lagrangian relaxation is designed. Finally, we compare the efficiency of the Lagrangian relaxation based heuristic with the branch-and-bound method and a genetic algorithm using instances of different sizes. The results of numerical experiments indicate that the proposed model can effectively reduce the unloading and loading times of QCs. The effects of the ship balance constraint are more notable when the number of QCs is high.

  10. ESSOPE: Towards S/C operations with reactive schedule planning

    NASA Technical Reports Server (NTRS)

    Wheadon, J.

    1993-01-01

    The ESSOPE is a prototype front-end tool running on a Sun workstation and interfacing to ESOC's MSSS spacecraft control system for the exchange of telecommand requests (to MSSS) and telemetry reports (from MSSS). ESSOPE combines an operations Planner-Scheduler with a Schedule Execution Control function. Using an internal 'model' of the spacecraft, the Planner generates a schedule based on utilization requests for a variety of payload services by a community of Olympus users, and incorporating certain housekeeping operations. Conflicts based on operational constraints are automatically resolved by employing one of several available strategies. The schedule is passed to the execution function, which drives MSSS to perform it. When the schedule can no longer be met, either because the operator intervenes (by delays or changes of requirements), or because ESSOPE has recognized some spacecraft anomalies, the Planner produces a modified schedule maintaining the on-going procedures as far as is consistent with the new constraints or requirements.

  11. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is built and a fitness function is defined according to the model; the fitness function is then optimized with the improved differential evolution algorithm, which uses a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to ensure both global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm can reduce the execution time of cloud computing tasks and save user cost, achieving good optimal scheduling of cloud computing tasks.
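
    As a hedged illustration of the general idea (not the paper's improved variant), the sketch below runs a plain differential evolution loop over task-to-VM assignments and uses the makespan as the fitness function; the task lengths, VM speeds and DE parameters are assumed values for demonstration only.

      # Minimal differential-evolution sketch for cloud task scheduling (illustrative only).
      # Continuous vectors are decoded into task -> VM assignments; fitness is the makespan.
      import random

      task_len = [400, 250, 600, 320, 500, 150]   # assumed task lengths (million instructions)
      vm_speed = [100, 200, 150]                  # assumed VM speeds (MIPS)
      NP, F, CR, GENS = 20, 0.5, 0.9, 200         # population size and typical DE parameters
      D, M = len(task_len), len(vm_speed)

      def decode(vec):
          # Map each continuous gene in [0, M) to a VM index.
          return [int(x) % M for x in vec]

      def makespan(vec):
          load = [0.0] * M
          for length, vm in zip(task_len, decode(vec)):
              load[vm] += length / vm_speed[vm]
          return max(load)

      pop = [[random.uniform(0, M) for _ in range(D)] for _ in range(NP)]
      for _ in range(GENS):
          for i in range(NP):
              a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
              # DE/rand/1/bin mutation and crossover; genes are kept inside [0, M).
              trial = [pop[i][d] if random.random() > CR
                       else (a[d] + F * (b[d] - c[d])) % M for d in range(D)]
              if makespan(trial) <= makespan(pop[i]):   # greedy selection
                  pop[i] = trial

      best = min(pop, key=makespan)
      print("assignment:", decode(best), "makespan:", round(makespan(best), 2))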

  12. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was selected as the target of the first task assignment, so that the task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm is simple and feasible, offers strong optimization ability and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
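
    The following sketch shows only the EFT-based list-scheduling step that such DAG schedulers build on, not the paper's CQPSO search; the DAG, per-core execution times and communication costs are assumed toy values.

      # EFT-based list scheduling on a heterogeneous multi-core DAG (illustrative sketch).
      succ = {0: [1, 2], 1: [3], 2: [3], 3: []}                 # DAG edges (assumed)
      pred = {0: [], 1: [0], 2: [0], 3: [1, 2]}
      exec_time = {0: [4, 6], 1: [3, 5], 2: [6, 4], 3: [5, 3]}  # task -> time on each core
      comm = {(0, 1): 2, (0, 2): 3, (1, 3): 1, (2, 3): 2}       # inter-core transfer cost

      priority = [0, 1, 2, 3]        # a precedence-consistent priority list (e.g. topological)
      core_free = [0.0, 0.0]         # earliest free time of each core
      finish, placed = {}, {}

      for task in priority:
          best = None
          for core in range(len(core_free)):
              # Data-ready time: parents' finish plus communication if on a different core.
              ready = max([finish[p] + (0 if placed[p] == core else comm[(p, task)])
                           for p in pred[task]] or [0.0])
              eft = max(ready, core_free[core]) + exec_time[task][core]
              if best is None or eft < best[0]:
                  best = (eft, core)
          finish[task], placed[task] = best
          core_free[best[1]] = best[0]

      print("finish times:", finish, "placement:", placed, "makespan:", max(finish.values()))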

  13. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

    Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, gain scheduled Glover McFarlane controller, and bumpless transfer controller which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.

  14. Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport

    NASA Technical Reports Server (NTRS)

    Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon

    2017-01-01

    This study aims to develop a controllers' decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated into two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by the departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between Korea Aerospace Research Institute (KARI) and National Aeronautics and Space Administration (NASA).

  15. Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport

    NASA Technical Reports Server (NTRS)

    Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon Chul; Zhu, Zhifan; Jeong, Myeong-Sook; Kim, Hyoun Kyoung; Oh, Eunmi; Hong, Sungkwon

    2017-01-01

    This study aims to develop a controllers' decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated into two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by the departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between Korea Aerospace Research Institute (KARI) and National Aeronautics and Space Administration (NASA).

  16. Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response

    NASA Astrophysics Data System (ADS)

    Niu, X. N.; Tang, H.; Wu, L. X.

    2018-04-01

    Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor the disaster area. In this paper, to generate an imaging plan dynamically according to the needs of disaster relief, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimization algorithm named HA_NSGA-II to allocate the decomposing results and thus obtain an adjusted imaging schedule. A real disaster scenario, i.e., the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
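
    To make the underlying allocation problem concrete, the sketch below greedily assigns imaging targets to satellite observation windows by descending profit; it is only a toy stand-in for the paper's robust model and HA_NSGA-II algorithm, and all targets, windows and profits are assumed.

      # Greedy profit-first allocation of imaging targets to satellite observation windows.
      # Each candidate is (target, satellite, window_start, window_end, profit) -- assumed data.
      candidates = [
          ("T1", "SatA", 0, 5, 8), ("T1", "SatB", 6, 10, 6),
          ("T2", "SatA", 4, 9, 7), ("T3", "SatB", 2, 6, 5),
      ]

      schedule = {"SatA": [], "SatB": []}   # accepted (start, end, target) per satellite
      covered = set()

      def overlaps(win, booked):
          s, e = win
          return any(not (e <= bs or s >= be) for bs, be, _ in booked)

      # Highest-profit candidates first; accept if the target is uncovered and the window is free.
      for target, sat, s, e, profit in sorted(candidates, key=lambda c: -c[4]):
          if target not in covered and not overlaps((s, e), schedule[sat]):
              schedule[sat].append((s, e, target))
              covered.add(target)

      print(schedule)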

  17. A low delay transmission method of multi-channel video based on FPGA

    NASA Astrophysics Data System (ADS)

    Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei

    2018-03-01

    In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed an FPGA-based video format conversion method and a DMA scheduling scheme for video data that reduce the overall video transmission delay. To save time in the conversion process, the parallel capability of the FPGA is used for the video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the low-delay transmission method designed in this paper increases the DMA write transmission rate by 34% compared with the existing method, and the overall video delay is reduced to 23.6 ms.

  18. Web-Based Medical Appointment Systems: A Systematic Review

    PubMed Central

    Zhao, Peng; Lavoie, Jaie; Lavoie, Beau James; Simoes, Eduardo

    2017-01-01

    Background Health care is changing with a new emphasis on patient-centeredness. Fundamental to this transformation is the increasing recognition of patients' role in health care delivery and design. Medical appointment scheduling, as the starting point of most non-urgent health care services, is undergoing major developments to support active involvement of patients. By using the Internet as a medium, patients are given more freedom in decision making about their preferences for the appointments and have improved access. Objective The purpose of this study was to identify the benefits and barriers to implement Web-based medical scheduling discussed in the literature as well as the unmet needs under the current health care environment. Methods In February 2017, MEDLINE was searched through PubMed to identify articles relating to the impacts of Web-based appointment scheduling. Results A total of 36 articles discussing 21 Web-based appointment systems were selected for this review. Most of the practices have positive changes in some metrics after adopting Web-based scheduling, such as reduced no-show rate, decreased staff labor, decreased waiting time, and improved satisfaction, and so on. Cost, flexibility, safety, and integrity are major reasons discouraging providers from switching to Web-based scheduling. Patients’ reluctance to adopt Web-based appointment scheduling is mainly influenced by their past experiences using computers and the Internet as well as their communication preferences. Conclusions Overall, the literature suggests a growing trend for the adoption of Web-based appointment systems. The findings of this review suggest that there are benefits to a variety of patient outcomes from Web-based scheduling interventions with the need for further studies. PMID:28446422

  19. Web-Based Medical Appointment Systems: A Systematic Review.

    PubMed

    Zhao, Peng; Yoo, Illhoi; Lavoie, Jaie; Lavoie, Beau James; Simoes, Eduardo

    2017-04-26

    Health care is changing with a new emphasis on patient-centeredness. Fundamental to this transformation is the increasing recognition of patients' role in health care delivery and design. Medical appointment scheduling, as the starting point of most non-urgent health care services, is undergoing major developments to support active involvement of patients. By using the Internet as a medium, patients are given more freedom in decision making about their preferences for the appointments and have improved access. The purpose of this study was to identify the benefits and barriers to implement Web-based medical scheduling discussed in the literature as well as the unmet needs under the current health care environment. In February 2017, MEDLINE was searched through PubMed to identify articles relating to the impacts of Web-based appointment scheduling. A total of 36 articles discussing 21 Web-based appointment systems were selected for this review. Most of the practices have positive changes in some metrics after adopting Web-based scheduling, such as reduced no-show rate, decreased staff labor, decreased waiting time, and improved satisfaction, and so on. Cost, flexibility, safety, and integrity are major reasons discouraging providers from switching to Web-based scheduling. Patients' reluctance to adopt Web-based appointment scheduling is mainly influenced by their past experiences using computers and the Internet as well as their communication preferences. Overall, the literature suggests a growing trend for the adoption of Web-based appointment systems. The findings of this review suggest that there are benefits to a variety of patient outcomes from Web-based scheduling interventions with the need for further studies. ©Peng Zhao, Illhoi Yoo, Jaie Lavoie, Beau James Lavoie, Eduardo Simoes. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.04.2017.

  20. Designing testing service at baristand industri Medan’s liquid waste laboratory

    NASA Astrophysics Data System (ADS)

    Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.

    2018-03-01

    Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industri Medan is the liquid waste testing service. The company's standard of service for testing is nine working days. In 2015, 89.66% of liquid waste testing services did not meet this standard because many samples accumulated. The purpose of this research is to design an online service for scheduling the arrival of liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design and technology design. The resulting online information system for liquid waste testing consists of three pages: one for the customer, one for the sample recipient, and one for the laboratory. Simulation results with scheduled samples show that the service standard of nine working days can be met.

  1. The designer of the 90's: A live demonstration

    NASA Technical Reports Server (NTRS)

    Green, Tommy L.; Jordan, Basil M., Jr.; Oglesby, Timothy L.

    1989-01-01

    A survey of design tools to be used by the aircraft designer is given. Structural reliability, maintainability, cost and predictability, and acoustics expert systems are discussed, as well as scheduling, drawing, engineering systems, sizing functions, and standard parts and materials data bases.

  2. Towards optimization of ACRT schedules applied to the gradient freeze growth of cadmium zinc telluride

    NASA Astrophysics Data System (ADS)

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-12-01

    Historically, the melt growth of II-VI crystals has benefitted from the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The "flow maximizing" ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. These counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  3. On Several Fundamental Problems of Optimization, Estimation, and Scheduling in Wireless Communications

    NASA Astrophysics Data System (ADS)

    Gao, Qian

    For both conventional radio frequency and the comparably recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into these categories of challenges. Rigorous proofs and analyses are provided for each of our works, with a fair comparison against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias that is conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits: it mitigates nonlinear distortion by including a peak-to-average-power ratio (PAPR) constraint, minimizes inter-symbol interference (ISI) caused by frequency-selective fading through a novel precoder designed and embedded in the scheme, and further reduces the bit error rate (BER) when combined with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with cross-talk. Our novel constellation design scheme, termed CSK-Advanced, is compared with a conventional decoupled system of the same spectrum efficiency to demonstrate its power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average-power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. In addition, a binary switching algorithm (BSA) is applied to improve BER performance. The third part looks into a two-phase channel estimation problem in a relayed wireless network. The channel estimates in each phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 can affect the estimate of the source-to-relay (StR) channel in phase 2, rendering it erroneous. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively.
The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots. If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression for the throughput is first derived and then used to develop a scheduling algorithm to maximize the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to the employment of both MIMO and CDMA are observed.

  4. Stability Assessment and Tuning of an Adaptively Augmented Classical Controller for Launch Vehicle Flight Control

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen; Zhu, J. Jim; Adami, Tony; Berry, Kyle; Grammar, Alex; Orr, Jeb S.; Best, Eric A.

    2014-01-01

    Recently, a robust and practical adaptive control scheme for launch vehicles [1] has been introduced. It augments a classical controller with a real-time loop-gain adaptation, and it is therefore called Adaptive Augmentation Control (AAC). The loop-gain will be increased from the nominal design when the tracking error between the (filtered) output and the (filtered) command trajectory is large; whereas it will be decreased when excitation of flex or sloshing modes is detected. There is a need to determine the range and rate of the loop-gain adaptation in order to retain (exponential) stability, which is critical in vehicle operation, and to develop some theoretically based heuristic tuning methods for the adaptive law gain parameters. The classical launch vehicle flight controller design techniques are based on gain-scheduling, whereby the launch vehicle dynamics model is linearized at selected operating points along the nominal tracking command trajectory, and Linear Time-Invariant (LTI) controller design techniques are employed to ensure asymptotic stability of the tracking error dynamics, typically by meeting some prescribed Gain Margin (GM) and Phase Margin (PM) specifications. The controller gains at the design points are then scheduled, tuned and sometimes interpolated to achieve good performance and stability robustness under external disturbances (e.g. winds) and structural perturbations (e.g. vehicle modeling errors). While the GM does give a bound for loop-gain variation without losing stability, it is for constant dispersions of the loop-gain because the GM is based on frequency-domain analysis, which is applicable only for LTI systems. The real-time adaptive loop-gain variation of the AAC effectively renders the closed-loop system a time-varying system, for which it is well known that the LTI system stability criterion is neither necessary nor sufficient when applied in a frozen-time fashion to a Linear Time-Varying (LTV) system. Therefore, a generalized stability metric for time-varying loop-gain perturbations is needed for the AAC.

  5. An alternate property tax program requiring a forest management plan and scheduled harvesting

    Treesearch

    D.F. Dennis; P.E. Sendak

    1991-01-01

    Vermont's Use Value Appraisal property tax program, designed to address problems such as tax inequity and forced development caused by taxing agricultural and forest land based on speculative values, requires a forest management plan and scheduled harvests. A probit analysis of enrollment provides evidence of the program's success in attracting large parcels...

  6. Performance evaluation of an agent-based occupancy simulation model

    DOE PAGES

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...

    2017-01-17

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
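
    To illustrate what a stochastic occupancy schedule looks like in code, the sketch below simulates presence with a simple two-state Markov chain at 15-minute resolution; the transition probabilities are assumed values and this is not the Occupancy Simulator's actual model.

      # Two-state Markov-chain sketch of stochastic occupant presence (15-minute steps).
      import random

      p_arrive = 0.10   # probability of absent -> present per step (assumed)
      p_leave = 0.05    # probability of present -> absent per step (assumed)
      steps = 24 * 4    # one day at 15-minute resolution

      present, schedule = False, []
      for _ in range(steps):
          if present:
              present = random.random() >= p_leave
          else:
              present = random.random() < p_arrive
          schedule.append(int(present))

      print("occupied fraction of the day:", sum(schedule) / steps)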

  7. Performance evaluation of an agent-based occupancy simulation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.

  8. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
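
    The dispatching-rule idea can be sketched as filling each timeline segment with the highest-priority activities that still fit; the segment capacities, activities and priorities below are assumed values and this is only a simplified stand-in for the heuristics described in the abstract.

      # Dispatching-rule sketch: assign activities to fixed timeline segments by priority.
      segments = [480, 480, 480]                      # capacity of each segment, minutes (assumed)
      activities = [("A", 120, 3), ("B", 300, 5), ("C", 200, 4),
                    ("D", 90, 2), ("E", 400, 5)]      # (name, duration, priority) -- assumed

      remaining = list(segments)
      plan = {i: [] for i in range(len(segments))}
      for name, dur, prio in sorted(activities, key=lambda a: -a[2]):   # highest priority first
          for i, cap in enumerate(remaining):
              if dur <= cap:                         # place in the first segment that still fits
                  plan[i].append(name)
                  remaining[i] -= dur
                  break

      print(plan, "unused minutes per segment:", remaining)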

  9. Collaborative Resource Allocation

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester

    2007-01-01

    Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.

  10. Towards Evolving Electronic Circuits for Autonomous Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    2000-01-01

    The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.

  11. Formulation of detailed consumables management models for the development (preoperational) period of advanced space transportation system: Executive summary

    NASA Technical Reports Server (NTRS)

    Torian, J. G.

    1976-01-01

    Formulation of models required for the mission planning and scheduling function and establishment of the relation of those models to prelaunch, onboard, ground support, and postmission functions for the development phase of space transportation systems (STS) was conducted. The preoperational space shuttle is used as the design baseline for the subject model formulations. Analytical models were developed which consist of a mission planning processor with appropriate consumables data base and a method of recognizing potential constraint violations in both the planning and flight operations functions. A flight data file for storage/retrieval of information over an extended period which interfaces with a flight operations processor for monitoring of the actual flights was examined.

  12. Bioreactor design for successive culture of anchorage-dependent cells operated in an automated manner.

    PubMed

    Kino-Oka, Masahiro; Ogawa, Natsuki; Umegaki, Ryota; Taya, Masahito

    2005-01-01

    A novel bioreactor system was designed to perform a series of batchwise cultures of anchorage-dependent cells by means of automated operations of medium change and passage for cell transfer. The experimental data on contamination frequency ensured the biological cleanliness of the bioreactor system, which facilitated the operations in a closed environment, as compared with a flask culture system with manual handling. In addition, tools for growth prediction (based on growth kinetics) and real-time growth monitoring by measurement of medium components (based on small-volume analyzing machinery) were installed in the bioreactor system to schedule the operations of medium change and passage and to confirm that the culture proceeds as scheduled, respectively. The successive culture of anchorage-dependent cells was conducted with the bioreactor running in an automated way. The automated bioreactor gave a successful culture performance in fair accordance with the preset schedule based on information from the latest subculture, realizing 79-fold cell expansion over 169 h. In addition, the correlation factor between experimental data and scheduled values throughout the bioreactor run was 0.998. It was concluded that the proposed bioreactor with the integration of the prediction and monitoring tools could offer a feasible system for the manufacturing process of cultured tissue products.
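
    The growth-kinetics-based scheduling mentioned above can be illustrated with a simple exponential-growth prediction of when the next passage should occur; the growth rate and cell densities below are assumed values, not data from the paper.

      # Predict when to schedule the next passage from exponential growth kinetics.
      import math

      mu = 0.025            # specific growth rate, 1/h (assumed)
      n0 = 2.0e4            # seeded cell density, cells/cm^2 (assumed)
      n_target = 8.0e4      # density at which passage should be performed (assumed)

      # N(t) = N0 * exp(mu * t)  =>  t = ln(N_target / N0) / mu
      t_passage = math.log(n_target / n0) / mu
      print(f"schedule the passage operation after {t_passage:.1f} h")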

  13. Design of a QoS-controlled ATM-based communications system in chorus

    NASA Astrophysics Data System (ADS)

    Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug

    1995-05-01

    We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.

  14. Associations between shift schedule characteristics with sleep, need for recovery, health and performance measures for regular (semi-)continuous 3-shift systems.

    PubMed

    van de Ven, Hardy A; Brouwer, Sandra; Koolhaas, Wendy; Goudswaard, Anneke; de Looze, Michiel P; Kecklund, Göran; Almansa, Josue; Bültmann, Ute; van der Klink, Jac J L

    2016-09-01

    In this cross-sectional study associations were examined between eight shift schedule characteristics with shift-specific sleep complaints and need for recovery and generic health and performance measures. It was hypothesized that shift schedule characteristics meeting ergonomic recommendations are associated with better sleep, need for recovery, health and performance. Questionnaire data were collected from 491 shift workers of 18 companies with 9 regular (semi)-continuous shift schedules. The shift schedule characteristics were analyzed separately and combined using multilevel linear regression models. The hypothesis was largely not confirmed. Relatively few associations were found, of which the majority was in the direction as expected. In particular early starts of morning shifts and many consecutive shifts seem to be avoided. The healthy worker effect, limited variation between included schedules and the cross-sectional design might explain the paucity of significant results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into the genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.
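
    As a hedged sketch of the decoding step, the code below turns a permutation-with-repetition chromosome into an operation sequence while a one-step look-ahead test screens each candidate operation; the safety test is abstracted as a user-supplied `is_safe` callback standing in for the paper's optimal deadlock avoidance policy, and the two-part example data are invented.

      # Decoding a permutation-with-repetition chromosome into a deadlock-free sequence (sketch).
      def decode(chromosome, is_safe):
          """chromosome: list of part ids, each repeated once per operation of that part."""
          next_op = {}                       # next operation index per part
          sequence, pending = [], list(chromosome)
          while pending:
              for i, part in enumerate(pending):
                  op = (part, next_op.get(part, 0))
                  if is_safe(sequence, op):  # one-step look-ahead feasibility check
                      sequence.append(op)
                      next_op[part] = next_op.get(part, 0) + 1
                      pending.pop(i)
                      break
              else:
                  raise RuntimeError("no safe operation found; chromosome must be amended")
          return sequence

      # Toy usage: two parts with two operations each; every step is considered safe here.
      print(decode(["p1", "p2", "p1", "p2"], lambda seq, op: True))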

  16. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposable condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  17. Towards a Decision Support System for Space Flight Operations

    NASA Technical Reports Server (NTRS)

    Meshkat, Leila; Hogle, Charles; Ruszkowski, James

    2013-01-01

    The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy of the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provide value independently; but some may also enable building of a subsequent increment.

  18. Gain-scheduled H∞ buckling control of a circular beam-column subject to time-varying axial loads

    NASA Astrophysics Data System (ADS)

    Schaeffner, Maximilian; Platz, Roland

    2018-06-01

    For slender beam-columns loaded by axial compressive forces, active buckling control provides a possibility to increase the maximum bearable axial load above that of a purely passive structure. In this paper, an approach for gain-scheduled H∞ buckling control of a slender beam-column with circular cross-section subject to time-varying axial loads is investigated experimentally. Piezo-elastic supports with integrated piezoelectric stack actuators at the beam-column ends allow an active stabilization in arbitrary lateral directions. The axial loads on the beam-column influence its lateral dynamic behavior and, eventually, cause the beam-column to buckle. A reduced modal model of the beam-column subject to axial loads including the dynamics of the electrical components is set up and calibrated with experimental data. Particularly, the linear parameter-varying open-loop plant is used to design a model-based gain-scheduled H∞ buckling control that is implemented in an experimental test setup. The beam-column is loaded by ramp- and step-shaped time-varying axial compressive loads that result in a lateral deformation of the beam-column due to imperfections, such as predeformation, eccentric loading or clamping moments. The lateral deformations and the maximum bearable loads of the beam-column are analyzed and compared for the beam-column with and without gain-scheduled H∞ buckling control or, respectively, active and passive configuration. With the proposed gain-scheduled H∞ buckling control it is possible to increase the maximum bearable load of the active beam-column by 19% for ramp-shaped axial loads and to significantly reduce the beam-column deformations for step-shaped axial loads compared to the passive structure.
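
    The gain-scheduling idea itself, interpolating controller gains between design points indexed by the scheduling variable (here the measured axial load), can be sketched as follows; the design loads and gains are assumed values and this is not the paper's H∞ synthesis.

      # Gain-scheduling sketch: linear interpolation of controller gains over the axial load.
      design_loads = [0.0, 500.0, 1000.0]                     # axial load design points, N (assumed)
      design_gains = [[1.0, 0.2], [1.8, 0.35], [3.1, 0.6]]    # [kp, kd] at each design point (assumed)

      def scheduled_gains(load):
          load = min(max(load, design_loads[0]), design_loads[-1])   # clamp to the design range
          for i in range(len(design_loads) - 1):
              lo, hi = design_loads[i], design_loads[i + 1]
              if lo <= load <= hi:
                  t = (load - lo) / (hi - lo)
                  return [g0 + t * (g1 - g0)
                          for g0, g1 in zip(design_gains[i], design_gains[i + 1])]

      print(scheduled_gains(750.0))   # gains used by the controller at 750 N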

  19. Developing Formal Correctness Properties from Natural Language Requirements

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation; specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and associated tools, while increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.

  20. Dynamic Modeling of ALS Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.
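
    The role of buffer sizing and failure response described above can be illustrated with a minimal dynamic mass-flow sketch: a buffer fed by a processor that fails for a period and drained by a constant demand; all rates and sizes are assumed values, not the paper's nitrogen-leak or waste-processor models.

      # Minimal dynamic mass-flow sketch for buffer sizing under a processor failure.
      supply_rate = 1.0        # kg/day produced by the processor when it is working (assumed)
      demand_rate = 1.0        # kg/day consumed by the crew (assumed)
      buffer_kg = 5.0          # initial buffer inventory (assumed)
      failure = range(10, 18)  # processor is down on days 10..17 (assumed)

      shortfall_days = 0
      for day in range(30):
          production = 0.0 if day in failure else supply_rate
          buffer_kg += production - demand_rate
          if buffer_kg < 0:                 # demand could not be met on this day
              shortfall_days += 1
              buffer_kg = 0.0

      print("days with unmet demand:", shortfall_days)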

  1. Chip-set for quality of service support in passive optical networks

    NASA Astrophysics Data System (ADS)

    Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel

    1998-10-01

    In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports 'colored' grants, priority jumping and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip in combination with the cell scheduling and buffer management capabilities of the switch chip can be used by network operators to offer guaranteed service performance to delay sensitive services, and to efficiently and fairly distribute any spare capacity to delay insensitive services.

  2. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  3. Modernizing sports facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin, R.

    Modernization and renovation of sports facilities challenge the design team to balance a number of requirements: spectator and owner expectations, existing building and site conditions, architectural layouts, code and legislation issues, time constraints and budget issues. System alternatives are evaluated and selected based on the relative priorities of these requirements. These priorities are unique to each project. At Alexander Memorial Coliseum, project schedules, construction funds and facility usage became the priorities. The ACC basketball schedule and arrival of the Centennial Olympics dictated the construction schedule. Initiation and success of the project depended on the commitment of the design team to meet coliseum funding levels established three years ago. Analysis of facility usage and system alternative capabilities drove the design team to select a system that met the project requirements and will maximize the benefits to the owner and spectators for many years to come.

  4. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.

  5. Challenges and models in supporting logistics system design for dedicated-biomass-based bioenergy industry.

    PubMed

    Zhu, Xiaoyan; Li, Xueping; Yao, Qingzhu; Chen, Yuerong

    2011-01-01

    This paper analyzed the uniqueness and challenges in designing the logistics system for dedicated biomass-to-bioenergy industry, which differs from the other industries, due to the unique features of dedicated biomass (e.g., switchgrass) including its low bulk density, restrictions on harvesting season and frequency, content variation with time and circumambient conditions, weather effects, scattered distribution over a wide geographical area, and so on. To design it, this paper proposed a mixed integer linear programming model. It covered from planting and harvesting switchgrass to delivering to a biorefinery and included the residue handling, concentrating on integrating strategic decisions on the supply chain design and tactical decisions on the annual operation schedules. The present numerical examples verified the model and demonstrated its use in practice. This paper showed that the operations of the logistics system were significantly different for harvesting and non-harvesting seasons, and that under the well-designed biomass logistics system, the mass production with a steady and sufficient supply of biomass can increase the unit profit of bioenergy. The analytical model and practical methodology proposed in this paper will help realize the commercial production in biomass-to-bioenergy industry. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Incentive Compatible Online Scheduling of Malleable Parallel Jobs with Individual Deadlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, Thomas E.; Grosu, Daniel

    2010-09-13

    We consider the online scheduling of malleable jobs on parallel systems, such as clusters, symmetric multiprocessing computers, and multi-core processor computers. Malleable jobs are a model of parallel processing in which jobs adapt to the number of processors assigned to them. This model permits the scheduler and resource manager to make more efficient use of the available resources. Each malleable job is characterized by arrival time, deadline, and value. If the job completes by its deadline, the user earns the payoff indicated by the value; otherwise, she earns a payoff of zero. The scheduling objective is to maximize the sum of the values of the jobs that complete by their associated deadlines. Complicating the matter is that users in the real world are rational and will attempt to manipulate the scheduler by misreporting their jobs’ parameters if it benefits them to do so. To mitigate this behavior, we design an incentive compatible online scheduling mechanism. Incentive compatibility assures us that the users will obtain the maximum payoff only if they truthfully report their jobs’ parameters to the scheduler. Finally, we simulate and study the mechanism to show the effects of misreports on the cheaters and on the system.

  7. Construction schedule simulation of a diversion tunnel based on the optimized ventilation time.

    PubMed

    Wang, Xiaoling; Liu, Xuepeng; Sun, Yuefeng; An, Juan; Zhang, Jing; Chen, Hongchao

    2009-06-15

    In former studies, the methods used to estimate the ventilation time in construction schedule simulation have all been empirical. In real construction projects, however, many factors affect the ventilation time. Therefore, in this paper 3D unsteady quasi-single-phase models are proposed to optimize the ventilation time for different tunneling lengths. The effect of buoyancy is considered in the momentum equation of the CO transport model, while the effects of inter-phase drag, lift force, and virtual mass force are taken into account in the momentum source of the dust transport model. The prediction by the present model for airflow in a diversion tunnel is confirmed by the experimental values reported by Nakayama [Nakayama, In-situ measurement and simulation by CFD of methane gas distribution at a heading faces, Shigen-to-Sozai 114 (11) (1998) 769-775]. The construction ventilation of the diversion tunnel of XinTangfang power station in China is used as a case study. The distributions of airflow, CO and dust in the diversion tunnel are analyzed. A theoretical method for GIS-based dynamic visual simulation of the construction processes of underground structure groups is presented, combining cyclic operation network simulation, system simulation, network plan optimization, and GIS-based 3D visualization of the construction processes. Based on the optimized ventilation time, the construction schedule of the diversion tunnel is then simulated with this method.

  8. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model sequences jobs for processing on a set of machines, with every job visiting the machines in the same order. The objective is to achieve a feasible schedule that optimizes a given criterion. Permutation is a special setting of the model in which the job sequence is identical on every machine. This article addresses the permutation flow-shop scheduling problem of minimizing the total weighted quadratic completion time. Under a probabilistic hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
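
    As a hedged illustration of the WSPT idea (not the paper's WSPT-CC rule, whose consistency condition and tie-breaking are not reproduced here), the sketch below orders made-up jobs by total processing time per unit weight and evaluates the total weighted quadratic completion time of the resulting permutation schedule on a small flow shop.

        # Illustrative sketch: generic WSPT-style ordering for a permutation flow
        # shop, evaluated on the total weighted quadratic completion time criterion.
        # Processing times and weights are invented.

        def flow_shop_completions(order, proc):
            """Last-machine completion time of each job in a permutation flow shop.
            proc[j][i] = processing time of job j on machine i."""
            num_machines = len(next(iter(proc.values())))
            finish = [0.0] * num_machines          # running completion time per machine
            completions = {}
            for j in order:
                for i in range(num_machines):
                    earliest = finish[i] if i == 0 else max(finish[i], finish[i - 1])
                    finish[i] = earliest + proc[j][i]
                completions[j] = finish[-1]
            return completions

        def weighted_quadratic_cost(completions, weights):
            return sum(weights[j] * completions[j] ** 2 for j in completions)

        # Hypothetical 3-machine instance.
        proc = {"J1": [3, 2, 4], "J2": [1, 5, 2], "J3": [4, 1, 3]}
        weights = {"J1": 2.0, "J2": 1.0, "J3": 3.0}

        # WSPT-style ordering: smallest total processing time per unit weight first.
        wspt_order = sorted(proc, key=lambda j: sum(proc[j]) / weights[j])
        cost = weighted_quadratic_cost(flow_shop_completions(wspt_order, proc), weights)
        print(wspt_order, cost)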

  9. Vehicle coordinated transportation dispatching model based on multiple crisis locations

    NASA Astrophysics Data System (ADS)

    Tian, Ran; Li, Shanwei; Yang, Guoying

    2018-05-01

    Unconventional emergencies often trigger multiple disastrous events, and the requirements at the different disaster sites often differ; it is difficult for a single emergency resource center to satisfy all such requirements at the same time. Coordinating the emergency resources stored at multiple emergency resource centers and delivering them to the various disaster sites therefore requires the coordinated transportation of emergency vehicles. Addressing this emergency logistics coordination scheduling problem, and based on the relevant constraints of emergency logistics transportation, this paper establishes an emergency resource scheduling model for multiple disaster locations.

  10. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    PubMed

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC.

  11. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    PubMed Central

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics service, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or brought forward, but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality changes not only the CODP but also the scheduling performance of the LSSC. PMID:24715818

  12. Health Optimizing Physical Education (HOPE): A New Curriculum for School Programs--Part 1: Establishing the Need and Describing the Model

    ERIC Educational Resources Information Center

    Metzler, Michael W.; McKenzie, Thomas L.; van der Mars, Hans; Barrett-Williams, Shannon L.; Ellis, Rebecca

    2013-01-01

    Comprehensive School Physical Activity Programs (CSPAP) are designed to provide expanded opportunities for physical activity beyond regularly scheduled physical education time, including before, during, and after school, as well as at home and in the community. While CSPAPs are gaining support, currently there are no models for designing,…

  13. Scheduling IT Staff at a Bank: A Mathematical Programming Approach

    PubMed Central

    Labidi, M.; Mrad, M.; Gharbi, A.; Louly, M. A.

    2014-01-01

    We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff member, considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high-quality solutions can be obtained within a few seconds, compared with the manually prepared schedules. PMID:24772032

  14. Comparison of 2-Dose and 3-Dose 9-Valent Human Papillomavirus Vaccine Schedules in the United States: A Cost-effectiveness Analysis.

    PubMed

    Laprise, Jean-François; Markowitz, Lauri E; Chesson, Harrell W; Drolet, Mélanie; Brisson, Marc

    2016-09-01

    A recent clinical trial of the 9-valent human papillomavirus (HPV) vaccine has shown that antibody responses after 2 doses are noninferior to those after 3 doses, suggesting that 2 and 3 doses may have comparable vaccine efficacy. We used an individual-based transmission-dynamic model to compare the population-level effectiveness and cost-effectiveness of 2- and 3-dose schedules of the 9-valent HPV vaccine in the United States. Our model predicts that if 2 doses of the 9-valent vaccine protect for ≥20 years, the additional benefits of a 3-dose schedule are small compared with those of a 2-dose schedule, and 2-dose schedules are likely much more cost-effective than 3-dose schedules. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  15. Scheduling IT staff at a bank: a mathematical programming approach.

    PubMed

    Labidi, M; Mrad, M; Gharbi, A; Louly, M A

    2014-01-01

    We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff member, considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high-quality solutions can be obtained within a few seconds, compared with the manually prepared schedules.

  16. Radiology scheduling: preferences of users of radiologic services and impact on referral base and extension.

    PubMed

    Mozumdar, Biswita C; Hornsby, Douglas Neal; Gogate, Adheet S; Intriere, Lisa A; Hanson, Richard; McGreal, Karen; Kelly, Pauline; Ros, Pablo

    2003-08-01

    To study end-user attitudes and preferences with respect to radiology scheduling systems and to assess implications for retention and extension of the referral base. A study of the institution's historical data indicated reduced satisfaction with the process of patient scheduling in recent years. Sixty physicians who referred patients to a single, large academic radiology department received the survey. The survey was designed to identify (A) the preferred vehicle for patient scheduling (online versus telephone scheduling) and (B) whether ease of scheduling was a factor in physicians referring patients to other providers. Referring physicians were asked to forward the survey to any appropriate office staff member in case the latter scheduled appointments for patients. Users were asked to provide comments and suggestions for improvement. The statistical method used was the analysis of proportions. Thirty-three responses were received, corresponding to a return rate of 55%. Twenty-six of the 33 respondents (78.8%, P < .01) stated they were willing to try an online scheduling system; 16 of these tried the system. Twelve of the 16 (75%, P < .05) preferred the online application to the telephone system, citing logistical simplification as the primary reason for the preference. Three (18.75%) did not consider online scheduling to be more convenient than traditional telephone scheduling. One respondent did not indicate any preference. Eleven of 33 users (33.33%, P < .001) stated that they would change radiology service providers if expectations of scheduling ease were not met. Online scheduling applications are becoming the preferred scheduling vehicle. Augmenting their capabilities and availability can simplify the scheduling process, improve referring physician satisfaction, and provide a competitive advantage. Referrers are willing to change providers if scheduling expectations are not met.

  17. Multi-objective group scheduling optimization integrated with preventive maintenance

    NASA Astrophysics Data System (ADS)

    Liao, Wenzhu; Zhang, Xiufang; Jiang, Min

    2017-11-01

    This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production of identical/similar and different jobs, the integrated model considers learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in the integrated model. The dual objectives of minimizing total completion time and maintenance cost are adopted to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve the optimization model, and the computational results demonstrate that the integrated model is effective and reliable.

  18. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method executed under resource constraints for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with a proper tradeoff between the training time and the network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each of which may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples of training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between the training time and the network cost. A design process that proposes alternative configurations under different cost-performance tradeoffs is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and find the best network when the time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using the comparator network paradigm. The comparator network has been trained to compare training times for a large variety of traces of TSSE-versus-time collected during back-propagation learning of various applications.

  19. Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm

    PubMed Central

    Vigneswari, T.; Mohamed, M. A. Maluk

    2014-01-01

    Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled the design of low-power wireless sensor nodes capable of sensing different vital signs in the body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected at the base station can be used to monitor health in real time. The patient wearing the sensors may be mobile, leading to aggregation of data from different BSs for processing. Processing real-time data is compute-intensive, and telemedicine facilities may not have appropriate hardware to process the real-time data effectively. To overcome this, the sensor grid has been proposed in the literature, wherein sensor data are integrated into the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the popular artificial bee colony (ABC) swarm intelligence algorithm for scheduling to overcome the NP-complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557

  20. An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.

    PubMed

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2012-12-27

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, a high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks, based on their serialization by worst-case execution time, and the power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup radio and wakeup timer for implantable medical devices. The scheduling system is validated by experimental results on its performance when used to extend the lifetime of ICD devices.
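
    For readers unfamiliar with dual-priority scheduling, the following toy loop sketches the general idea under stated assumptions: each periodic task is released at a low priority and promoted to a high priority after a fixed delay, jobs run non-preemptively for their worst-case execution time, and idle gaps are accumulated as candidate sleep intervals. The task set and all parameters are invented; this is not the SoC scheduler described above.

        # Illustrative sketch only (not the SoC firmware): non-preemptive dual-priority
        # scheduling of hypothetical periodic tasks, with idle time tallied as a proxy
        # for sleep-mode opportunity.
        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            period: int        # release period (ms)
            wcet: int          # worst-case execution time (ms)
            promotion: int     # delay after release before priority is promoted (ms)
            next_release: int = 0

        def run(tasks, horizon):
            t, sleep_time = 0, 0
            while t < horizon:
                ready = [task for task in tasks if task.next_release <= t]
                if not ready:
                    nxt = min(task.next_release for task in tasks)
                    sleep_time += nxt - t          # candidate interval for low-power mode
                    t = nxt
                    continue
                # Dual priority: promoted tasks (release + promotion <= t) run first,
                # ties broken by earlier release; a job is never preempted once started.
                task = min(ready, key=lambda k: (t < k.next_release + k.promotion,
                                                 k.next_release))
                t += task.wcet                     # run to completion (serialized by WCET)
                task.next_release += task.period
            return sleep_time

        tasks = [Task("sense", period=100, wcet=5, promotion=40),
                 Task("telemetry", period=400, wcet=20, promotion=200)]
        print("idle/sleep ms over 2 s:", run(tasks, horizon=2000))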

  1. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    PubMed Central

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2013-01-01

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, a high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks, based on their serialization by worst-case execution time, and the power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup radio and wakeup timer for implantable medical devices. The scheduling system is validated by experimental results on its performance when used to extend the lifetime of ICD devices. PMID:23271602

  2. Southwestern Cooperative Educational Laboratory Interaction Observation Schedule (SCIOS): A System for Analyzing Teacher-Pupil Interaction in the Affective Domain.

    ERIC Educational Resources Information Center

    Bemis, Katherine A.; Liberty, Paul G.

    The Southwestern Cooperative Interaction Observation Schedule (SCIOS) is a classroom observation instrument designed to record pupil-teacher interaction. The classification of pupil behavior is based on Krathwohl's (1964) theory of the three lowest levels of the affective domain. The levels are (1) receiving: the learner should be sensitized to…

  3. Research, Development and Validation of the Daily Demand Computer Schedule 360/50. Final Report.

    ERIC Educational Resources Information Center

    Ovard, Glen F.; Rowley, Vernon C.

    A study was designed to further the research, development and validation of the Daily Demand Computer Schedule (DDCS), a system by which students can be rescheduled daily for facilitating their individual continuous progress through the curriculum. It will allow teachers to regroup students as needed based upon that progress, and will make time a…

  4. Analysis Testing of Sociocultural Factors Influence on Human Reliability within Sociotechnical Systems: The Algerian Oil Companies.

    PubMed

    Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi

    2016-09-01

    The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study was based on a survey involving the observation of working cases, the processing of incident/accident statistics, and semistructured interviews in the qualitative part. To consolidate the study approach, we used a schedule for standard statistical measurements. We tried to be unbiased by covering an exhaustive list of worker categories including age, sex, educational level, prescribed task, accountability level, etc. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies. This schedule comprises 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviors both negatively and positively. The explored sociocultural factors influence human reliability in both qualitative and quantitative terms. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information, together with continuous system improvements that improve the sociocultural context and reduce negative behaviors.

  5. An effective hybrid immune algorithm for solving the distributed permutation flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min

    2014-09-01

    In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
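
    A common decoding rule in the DPFSP literature (the article's exact decoding method may differ) dispatches each job of the permutation to the factory whose flow shop would complete it earliest. The sketch below illustrates that greedy dispatching on invented processing times.

        # Hedged sketch of one common DPFSP decoding idea: walk the job permutation
        # and append each job to whichever factory's identical flow shop would finish
        # it earliest. All processing times are made up.

        def makespan(sequence, proc):
            """Makespan of a permutation sequence on one factory's flow-shop machines."""
            if not sequence:
                return 0.0
            machines = len(proc[sequence[0]])
            finish = [0.0] * machines
            for job in sequence:
                for i in range(machines):
                    earliest = finish[i] if i == 0 else max(finish[i], finish[i - 1])
                    finish[i] = earliest + proc[job][i]
            return finish[-1]

        def decode(permutation, proc, factories):
            """Greedy factory dispatching for a distributed permutation flow shop."""
            assignment = {f: [] for f in range(factories)}
            for job in permutation:
                best = min(range(factories),
                           key=lambda f: makespan(assignment[f] + [job], proc))
                assignment[best].append(job)
            return assignment, max(makespan(seq, proc) for seq in assignment.values())

        proc = {"J1": [3, 2], "J2": [4, 1], "J3": [2, 5], "J4": [1, 3]}
        print(decode(["J3", "J1", "J4", "J2"], proc, factories=2))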

  6. Leveraging Hypoxia-Activated Prodrugs to Prevent Drug Resistance in Solid Tumors.

    PubMed

    Lindsay, Danika; Garvey, Colleen M; Mumenthaler, Shannon M; Foo, Jasmine

    2016-08-01

    Experimental studies have shown that one key factor in driving the emergence of drug resistance in solid tumors is tumor hypoxia, which leads to the formation of localized environmental niches where drug-resistant cell populations can evolve and survive. Hypoxia-activated prodrugs (HAPs) are compounds designed to penetrate to hypoxic regions of a tumor and release cytotoxic or cytostatic agents; several of these HAPs are currently in clinical trial. However, preliminary results have not shown a survival benefit in several of these trials. We hypothesize that the efficacy of treatments involving these prodrugs depends heavily on identifying the correct treatment schedule, and that mathematical modeling can be used to help design potential therapeutic strategies combining HAPs with standard therapies to achieve long-term tumor control or eradication. We develop this framework in the specific context of EGFR-driven non-small cell lung cancer, which is commonly treated with the tyrosine kinase inhibitor erlotinib. We develop a stochastic mathematical model, parametrized using clinical and experimental data, to explore a spectrum of treatment regimens combining a HAP, evofosfamide, with erlotinib. We design combination toxicity constraint models and optimize treatment strategies over the space of tolerated schedules to identify specific combination schedules that lead to optimal tumor control. We find that (i) combining these therapies delays resistance longer than any monotherapy schedule with either evofosfamide or erlotinib alone, (ii) sequentially alternating single doses of each drug leads to minimal tumor burden and maximal reduction in probability of developing resistance, and (iii) strategies minimizing the length of time after an evofosfamide dose and before erlotinib confer further benefits in reduction of tumor burden. These results provide insights into how hypoxia-activated prodrugs may be used to enhance therapeutic effectiveness in the clinic.

  7. Experimental demonstration of bandwidth on demand (BoD) provisioning based on time scheduling in software-defined multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie

    2016-09-01

    A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the Open Daylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth on demand (BoD) provisioning solution is proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Groups (SRLG)-disjoint routing schemes are adopted to separate each tenant for reliability. The SD-MDON testbed is built based on the proposed hierarchical control architecture. Then the proposed time scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on the testbed. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.

  8. Towards Optimization of ACRT Schedules Applied to the Gradient Freeze Growth of Cadmium Zinc Telluride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divecha, Mia S.; Derby, Jeffrey J.

    Historically, the melt growth of II-VI crystals has benefitted by the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The “flow maximizing” ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. Furthermore, these counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  9. Towards Optimization of ACRT Schedules Applied to the Gradient Freeze Growth of Cadmium Zinc Telluride

    DOE PAGES

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-10-03

    Historically, the melt growth of II-VI crystals has benefitted by the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The “flow maximizing” ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. Furthermore, these counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  10. Finite-Horizon H∞ Consensus Control of Time-Varying Multiagent Systems With Stochastic Communication Protocol.

    PubMed

    Zou, Lei; Wang, Zidong; Gao, Huijun; Alsaadi, Fuad E

    2017-03-31

    This paper is concerned with the distributed H∞ consensus control problem for a discrete time-varying multiagent system with the stochastic communication protocol (SCP). A directed graph is used to characterize the communication topology of the multiagent network. The data transmission between each agent and the neighboring ones is implemented via a constrained communication channel where only one neighboring agent is allowed to transmit data at each time instant. The SCP is applied to schedule the signal transmission of the multiagent system. A sequence of random variables is utilized to capture the scheduling behavior of the SCP. By using the mapping technology combined with the Hadamard product, the closed-loop multiagent system is modeled as a time-varying system with a stochastic parameter matrix. The purpose of the addressed problem is to design a cooperative controller for each agent such that, for all probabilistic scheduling behaviors, the H∞ consensus performance is achieved over a given finite horizon for the closed-loop multiagent system. A necessary and sufficient condition is derived to ensure the H∞ consensus performance based on the completing squares approach and the stochastic analysis technique. Then, the controller parameters are obtained by solving two coupled backward recursive Riccati difference equations. Finally, a numerical example is given to illustrate the effectiveness of the proposed controller design scheme.

  11. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  12. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
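
    As a generic illustration of the ant colony machinery only (not the article's tailored heuristic or its weighted delivery-time-plus-distribution-cost objective), the skeleton below has ants build job sequences with pheromone-biased selection and reinforces the trail of the best sequence found. The cost function and instance are placeholders.

        # Generic ACO skeleton with a toy sequencing objective; everything here is
        # hypothetical and only illustrates pheromone-biased construction plus
        # evaporation/reinforcement.
        import random

        jobs = ["J1", "J2", "J3", "J4"]
        cost_of = {"J1": 4, "J2": 2, "J3": 7, "J4": 5}          # invented per-job cost

        def sequence_cost(seq):
            # Toy objective: sum of completion "times" in sequence order.
            total, clock = 0.0, 0.0
            for job in seq:
                clock += cost_of[job]
                total += clock
            return total

        def aco(iterations=50, ants=10, rho=0.1, seed=1):
            random.seed(seed)
            tau = {(pos, j): 1.0 for pos in range(len(jobs)) for j in jobs}
            best_seq, best_cost = None, float("inf")
            for _ in range(iterations):
                for _ in range(ants):
                    remaining, seq = list(jobs), []
                    for pos in range(len(jobs)):
                        weights = [tau[(pos, j)] for j in remaining]
                        choice = random.choices(remaining, weights=weights)[0]
                        remaining.remove(choice)
                        seq.append(choice)
                    c = sequence_cost(seq)
                    if c < best_cost:
                        best_seq, best_cost = seq, c
                for key in tau:                                  # evaporation
                    tau[key] *= (1.0 - rho)
                for pos, job in enumerate(best_seq):             # reinforce best trail
                    tau[(pos, job)] += 1.0 / best_cost
            return best_seq, best_cost

        print(aco())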

  13. The application of connectionism to query planning/scheduling in intelligent user interfaces

    NASA Technical Reports Server (NTRS)

    Short, Nicholas, Jr.; Shastri, Lokendra

    1990-01-01

    In the mid nineties, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Setting aside the broader problems of distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in improving run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading-activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.

  14. Scheduling lessons learned from the Autonomous Power System

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.

    1992-01-01

    The Autonomous Power System (APS) project at NASA LeRC is designed to demonstrate the application of integrated intelligent diagnosis, control, and scheduling techniques to space power distribution systems. The project consists of three elements: the Autonomous Power Expert System (APEX) for Fault Diagnosis, Isolation, and Recovery (FDIR); the Autonomous Intelligent Power Scheduler (AIPS) to efficiently assign activity start times and resources; and power hardware (Brassboard) to emulate a space-based power system. The AIPS scheduler was tested within the APS system. This scheduler is able to efficiently assign available power to the requesting activities and share this information with other software agents within the APS system in order to implement the generated schedule. The AIPS scheduler is also able to cooperatively recover from fault situations by rescheduling the affected loads on the Brassboard in conjunction with the APEX FDIR system. AIPS served as a learning tool and an initial scheduling testbed for the integration of FDIR and automated scheduling systems. Many lessons were learned from the AIPS scheduler and are now being integrated into a new scheduler called SCRAP (Scheduler for Continuous Resource Allocation and Planning). This paper serves three purposes: an overview of the AIPS implementation, lessons learned from the AIPS scheduler, and a brief section on how these lessons are being applied to the new SCRAP scheduler.

  15. Investigations into Generalization of Constraint-Based Scheduling Theories with Applications to Space Telescope Observation Scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Steven S.

    1996-01-01

    This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.

  16. Research in Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1997-01-01

    This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results in the scalability issues of distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.

  17. Traffic Patrol Service Platform Scheduling and Containment Optimization Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Tiane; Niu, Taiyang; Wan, Baocheng; Li, Jian

    This article addresses the siting and scheduling of traffic and patrol police service platforms, with the main purpose of achieving rapid containment of a suspect after an emergency event. A new boundary definition based on graph theory is proposed, and a containment model is established using 0-1 programming, Dijkstra's algorithm, the shortest path tree (SPT), and related techniques. Finally, the model is combined with data for a specific city to obtain the best containment plan.
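
    The shortest-path ingredient of such a containment model can be sketched as follows: Dijkstra travel times are computed from the incident node and from each platform, and a platform is judged able to seal a boundary intersection if it can reach that intersection no later than the suspect. The road network, platform locations, and boundary set below are invented, and the 0-1 programming layer that selects which platforms to dispatch is omitted.

        # Sketch of the Dijkstra/SPT ingredient only; graph, platforms, and boundary
        # nodes are hypothetical.
        import heapq

        def dijkstra(graph, source):
            """Shortest travel time from source to every node. graph[u] = [(v, t), ...]."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, t in graph.get(u, []):
                    nd = d + t
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        # Toy undirected road network; edge weights are travel times in minutes.
        edges = [("A", "B", 2), ("B", "C", 3), ("A", "D", 4), ("D", "C", 2), ("C", "E", 5)]
        graph = {}
        for u, v, t in edges:
            graph.setdefault(u, []).append((v, t))
            graph.setdefault(v, []).append((u, t))

        incident = "A"
        platforms = {"P1": "D", "P2": "E"}        # platform name -> station node
        boundary = ["C", "E"]                     # intersections to be sealed

        suspect = dijkstra(graph, incident)
        for name, node in platforms.items():
            reach = dijkstra(graph, node)
            sealable = [b for b in boundary if reach.get(b, float("inf")) <= suspect[b]]
            print(name, "can seal:", sealable)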

  18. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  19. Magnetohydrodynamics MHD Engineering Test Facility ETF 200 MWe power plant. Conceptual Design Engineering Report CDER. Volume 3: Costs and schedules

    NASA Astrophysics Data System (ADS)

    1981-09-01

    The estimated plant capital cost for a coal-fired 200 MWe electric generating plant with open-cycle magnetohydrodynamics is divided into principal accounts based on the Federal Energy Regulatory Commission account structure. Each principal account is defined and its estimated cost subdivided into identifiable and major equipment systems. The cost data sources used for compiling the estimates, cost parameters, allotments, assumptions, and contingencies are discussed. Uncertainties associated with developing the costs are quantified to show the confidence level achieved. Guidelines established in preparing the estimated costs are included. Based on an overall milestone schedule related to conventional power plant scheduling experience, and starting procurement of MHD components during the preliminary design phase, there is a 6 1/2-year construction period. The duration of the project from start to commercial operation is 79 months. The engineering phase of the project is 4 1/2 years; the construction duration following the start of the main power block is 37 months.

  20. Magnetohydrodynamics MHD Engineering Test Facility ETF 200 MWe power plant. Conceptual Design Engineering Report CDER. Volume 3: Costs and schedules

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The estimated plant capital cost for a coal-fired 200 MWe electric generating plant with open-cycle magnetohydrodynamics is divided into principal accounts based on the Federal Energy Regulatory Commission account structure. Each principal account is defined and its estimated cost subdivided into identifiable and major equipment systems. The cost data sources used for compiling the estimates, cost parameters, allotments, assumptions, and contingencies are discussed. Uncertainties associated with developing the costs are quantified to show the confidence level achieved. Guidelines established in preparing the estimated costs are included. Based on an overall milestone schedule related to conventional power plant scheduling experience, and starting procurement of MHD components during the preliminary design phase, there is a 6 1/2-year construction period. The duration of the project from start to commercial operation is 79 months. The engineering phase of the project is 4 1/2 years; the construction duration following the start of the main power block is 37 months.

  1. Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis

    NASA Astrophysics Data System (ADS)

    Chou, Hui-Yu; Yang, Jyh-Bin

    2017-10-01

    The problem of schedule delay commonly occurs in construction projects. The quality of delay analysis depends on the availability of schedule-related information and delay evidence. More information used in delay analysis usually produces more accurate and fair analytical results. How to use innovative techniques to improve the quality of schedule delay analysis results has received much attention recently. As Building Information Modeling (BIM) techniques have developed rapidly, the use of BIM and 4D simulation techniques has been proposed and implemented. Obvious benefits have been achieved, especially in identifying and solving construction consequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study believes that most of the identified problems can be dealt with by BIM techniques. The research results could provide a foundation for developing new approaches to resolving schedule delay disputes.

  2. Deep Space Habitat Configurations Based On International Space Station Systems

    NASA Technical Reports Server (NTRS)

    Smitherman, David; Russell, Tiffany; Baysinger, Mike; Capizzo, Pete; Fabisinski, Leo; Griffin, Brand; Hornsby, Linda; Maples, Dauphne; Miernik, Janie

    2012-01-01

    A Deep Space Habitat (DSH) is the crew habitation module designed for long duration missions. Although humans have lived in space for many years, there has never been a habitat beyond low-Earth-orbit. As part of the Advanced Exploration Systems (AES) Habitation Project, a study was conducted to develop weightless habitat configurations using systems based on International Space Station (ISS) designs. Two mission sizes are described for a 4-crew 60-day mission, and a 4-crew 500-day mission using standard Node, Lab, and Multi-Purpose Logistics Module (MPLM) sized elements, and ISS derived habitation systems. These durations were selected to explore the lower and upper bound for the exploration missions under consideration including a range of excursions within the Earth-Moon vicinity, near earth asteroids, and Mars orbit. Current methods for sizing the mass and volume for habitats are based on mathematical models that assume the construction of a new single volume habitat. In contrast to that approach, this study explored the use of ISS designs based on existing hardware where available and construction of new hardware based on ISS designs where appropriate. Findings included a very robust design that could be reused if the DSH were assembled and based at the ISS and a transportation system were provided for its return after each mission. Mass estimates were found to be higher than mathematical models due primarily to the use of multiple ISS modules instead of one new large module, but the maturity of the designs using flight qualified systems have potential for improved cost, schedule, and risk benefits.

  3. Deep Space Habitat Configurations Based on International Space Station Systems

    NASA Technical Reports Server (NTRS)

    Smitherman, David; Russell, Tiffany; Baysinger, Mike; Capizzo, Pete; Fabisinski, Leo; Griffin, Brand; Hornsby, Linda; Maples, Dauphne; Miernik, Janie

    2012-01-01

    A Deep Space Habitat (DSH) is the crew habitation module designed for long duration missions. Although humans have lived in space for many years, there has never been a habitat beyond low-Earth-orbit. As part of the Advanced Exploration Systems (AES) Habitation Project, a study was conducted to develop weightless habitat configurations using systems based on International Space Station (ISS) designs. Two mission sizes are described for a 4-crew 60-day mission, and a 4-crew 500-day mission using standard Node, Lab, and Multi-Purpose Logistics Module (MPLM) sized elements, and ISS derived habitation systems. These durations were selected to explore the lower and upper bound for the exploration missions under consideration including a range of excursions within the Earth-Moon vicinity, near earth asteroids, and Mars orbit. Current methods for sizing the mass and volume for habitats are based on mathematical models that assume the construction of a new single volume habitat. In contrast to that approach, this study explored the use of ISS designs based on existing hardware where available and construction of new hardware based on ISS designs where appropriate. Findings included a very robust design that could be reused if the DSH were assembled and based at the ISS and a transportation system were provided for its return after each mission. Mass estimates were found to be higher than mathematical models due primarily to the use of multiple ISS modules instead of one new large module, but the maturity of the designs using flight qualified systems have potential for improved cost, schedule, and risk benefits.

  4. The design of a turboshaft speed governor using modern control techniques

    NASA Technical Reports Server (NTRS)

    Delosreyes, G.; Gouchoe, D. R.

    1986-01-01

    The objectives of this program were: to verify the model of off schedule compressor variable geometry in the T700 turboshaft engine nonlinear model; to evaluate the use of the pseudo-random binary noise (PRBN) technique for obtaining engine frequency response data; and to design a high performance power turbine speed governor using modern control methods. Reduction of T700 engine test data generated at NASA-Lewis indicated that the off schedule variable geometry effects were accurate as modeled. Analysis also showed that the PRBN technique combined with the maximum likelihood model identification method produced a Bode frequency response that was as accurate as the response obtained from standard sinewave testing methods. The frequency response verified the accuracy of linear models consisting of engine partial derivatives and used for design. A power turbine governor was designed using the Linear Quadratic Regulator (LQR) method of full state feedback control. A Kalman filter observer was used to estimate helicopter main rotor blade velocity. Compared to the baseline T700 power turbine speed governor, the LQR governor reduced droop up to 25 percent for a 490 shaft horsepower transient in 0.1 sec simulating a wind gust, and up to 85 percent for a 700 shaft horsepower transient in 0.5 sec simulating a large collective pitch angle transient.
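
    As a minimal illustration of the LQR step only (with invented matrices, not the T700 linear model or its partial derivatives), the sketch below computes a full-state feedback gain from the continuous algebraic Riccati equation with SciPy and checks the closed-loop poles.

        # Minimal LQR sketch: hypothetical two-state linearization (power-turbine
        # speed, rotor speed) with a single fuel-flow input. Numbers are illustrative.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[-2.0, 1.0],
                      [0.5, -0.8]])          # assumed engine/rotor partial derivatives
        B = np.array([[1.5],
                      [0.0]])                # fuel-flow input distribution
        Q = np.diag([10.0, 1.0])             # penalize speed droop most heavily
        R = np.array([[0.1]])                # penalize fuel-flow activity

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)      # state-feedback law u = -K x
        closed_loop_poles = np.linalg.eigvals(A - B @ K)
        print("LQR gain:", K)
        print("closed-loop poles:", closed_loop_poles)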

  5. Transit scheduling: Basic and advanced manuals. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pine, R.; Niemeyer, J.; Chisholm, R.

    1998-12-01

    This manual will be of interest to new transit schedulers, experienced schedulers, transit planners, operating staff, and others who need to be conversant with the scheduling process. The materials clearly describe all steps in the bus and light rail scheduling process. Under TCRP Project A-11, Transit Scheduling: A Manual with Materials, research was undertaken by Transportation Management and Design of Solana Beach, California, to prepare a transit scheduling manual that incorporates modern training techniques for bus and light rail transit scheduling. The manual consists of two sections: a basic treatment and an advanced section. The basic-level section is in an instructional format designed primarily for novice schedulers and other transit staff. The advanced section covers more complex scheduling requirements. Each section may be used sequentially or independently and is designed to integrate with agency apprenticeship and on-the-job training.

  6. Conceptual Design of an In-Space Cryogenic Fluid Management Facility

    NASA Technical Reports Server (NTRS)

    Willen, G. S.; Riemer, D. H.; Hustvedt, D. C.

    1981-01-01

    The conceptual design of a Spacelab experiment to develop the technology associated with low-gravity propellant management is presented. The proposed facility, consisting of a supply tank, receiver tank, pressurization system, instrumentation, and supporting hardware, is described. The experimental objectives, the receiver tank to be modeled, and the constraints imposed on the design by the Space Shuttle, Spacelab, and scaling requirements are described. The conceptual design, including the general configurations, flow schematics, insulation systems, instrumentation requirements, and internal tank configurations for the supply tank and the receiver tank, is described. Thermal, structural, fluid, and safety and reliability aspects of the facility are analyzed. The facility development plan, including schedule and cost estimates for the facility, is presented. A program work breakdown structure and master program schedule for a seven-year program are included.

  7. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the poor scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over a 20-fold reduction in run time for simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  8. CRI planning and scheduling for space

    NASA Technical Reports Server (NTRS)

    Aarup, Mads

    1994-01-01

    Computer Resources International (CRI) has many years of experience in developing space planning and scheduling systems for the European Space Agency. Activities range from AIT/AIV planning, through mission planning, to research in on-board autonomy using advanced planning and scheduling technologies in conjunction with model-based diagnostics. This article presents four projects carried out for ESA by CRI with various subcontractors: (1) DI, Distributed Intelligence for Ground/Space Systems, an on-going research project; (2) GMPT, Generic Mission Planning Toolset, a feasibility study concluded in 1993; (3) OPTIMUM-AIV, Open Planning Tool for AIV, development of a knowledge-based AIV planning and scheduling tool ended in 1992; and (4) PlanERS-1, development of an AI and knowledge-based mission planning prototype for the ERS-1 earth observation spacecraft ended in 1991.

  9. Research on crude oil storage and transportation based on optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods are widely used in the scheduling and optimal operation of complex production systems. Based on the C++Builder 6 development platform, the theoretical research results are implemented on a computer, and a simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes. It can optimize the scheduling scheme of the crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating roof oil storage tank is also developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling cost of the storage and transportation system can be reduced by 9.1%. Therefore, this method can realize safe and stable operation of the crude oil storage and transportation system.

  10. Fault-tolerant dynamic task graph scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt, Mehmet C.; Krishnamoorthy, Sriram; Agrawal, Kunal

    2014-11-16

    In this paper, we present an approach to fault tolerant execution of dynamic task graphs scheduled using work stealing. In particular, we focus on selective and localized recovery of tasks in the presence of soft faults. We elicit from the user the basic task graph structure in terms of successor and predecessor relationships. The work stealing-based algorithm to schedule such a task graph is augmented to enable recovery when the data and meta-data associated with a task get corrupted. We use this redundancy, and the knowledge of the task graph structure, to selectively recover from faults with low space and time overheads. We show that the fault tolerant design retains the essential properties of the underlying work stealing-based task scheduling algorithm, and that the fault tolerant execution is asymptotically optimal when task re-execution is taken into account. Experimental evaluation demonstrates the low cost of recovery under various fault scenarios.

  11. Robust optimisation-based microgrid scheduling with islanding constraints

    DOE PAGES

    Liu, Guodong; Starke, Michael; Xiao, Bailu; ...

    2017-02-17

    This paper proposes a robust optimization-based scheduling model for microgrid operation considering constraints of islanding capability. Our objective is to minimize the total operation cost, including the generation cost and spinning reserve cost of local resources as well as the cost of purchasing energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling model.
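
    The islanding requirement described above can be illustrated with a minimal Python sketch: the committed local units must hold enough up- and down-spinning reserve to absorb a sudden loss of the main-grid tie. The function and numbers below are illustrative assumptions, not the paper's robust optimization model.

        # Minimal sketch (hypothetical data): check that scheduled local units keep
        # enough up/down spinning reserve to survive a sudden loss of the main-grid tie.

        def islanding_feasible(units, grid_import):
            """units: list of dicts with scheduled output 'p' and limits 'pmin', 'pmax' (MW).
            grid_import: power currently drawn from the main grid (MW, negative = export)."""
            up_reserve = sum(u["pmax"] - u["p"] for u in units)
            down_reserve = sum(u["p"] - u["pmin"] for u in units)
            # If the tie line trips, local units must pick up the lost import
            # (or absorb the lost export) without violating their limits.
            if grid_import >= 0:
                return up_reserve >= grid_import
            return down_reserve >= -grid_import

        units = [
            {"p": 0.8, "pmin": 0.2, "pmax": 1.5},   # micro-turbine
            {"p": 0.5, "pmin": 0.0, "pmax": 1.0},   # diesel generator
        ]
        print(islanding_feasible(units, grid_import=0.9))   # True: 1.2 MW of up-reserve >= 0.9 MW import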

  12. Bi-Axial Solar Array Drive Mechanism: Design, Build and Environmental Testing

    NASA Technical Reports Server (NTRS)

    Scheidegger, Noemy; Ferris, Mark; Phillips, Nigel

    2014-01-01

    The development of the Bi-Axial Solar Array Drive Mechanism (BSADM) presented in this paper is a demonstration of SSTL's unique space manufacturing approach, which enables rapid development cycles for cost-effective products that meet ever-challenging mission requirements. The BSADM is designed to orient a solar array wing towards the sun, using its first rotation axis to track the sun, and its second rotation axis to compensate for the satellite orbit and attitude changes needed for successful payload operation. The tight development schedule, with manufacture of 7 Flight Models within 1.5 years after kick-off, is offset by the risk reduction of using qualified key component families from other proven SSTL mechanisms. This allowed the BSADM design activities to focus on the mechanism features that are unique to the BSADM, and an Engineering Qualification Model (EQM) to be built 8 months after kick-off. The EQM is currently undergoing a full environmental qualification test campaign. This paper presents the BSADM design approach that enabled meeting such a challenging schedule, its design particularities, and the ongoing verification activities.

  13. Generating Test Templates via Automated Theorem Proving

    NASA Technical Reports Server (NTRS)

    Kancherla, Mani Prasad

    1997-01-01

    Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. This approach allows for the use of incomplete specifications to aid in the generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use in the analysis of a spacecraft scheduling system.

  14. Schedule-Aware Workflow Management Systems

    NASA Astrophysics Data System (ADS)

    Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.

    Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.

  15. Design and implementation of workflow engine for service-oriented architecture

    NASA Astrophysics Data System (ADS)

    Peng, Shuqing; Duan, Huining; Chen, Deyun

    2009-04-01

    As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, little reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer workflow engine architecture based on SOA is put forward according to the XPDL standard of the Workflow Management Coalition, the route control mechanism in the control model is accomplished, the scheduling strategy for cyclic and acyclic routing is designed, and the workflow engine is implemented using technologies such as XML, JSP and EJB.

  16. Interval Analysis Approach to Prototype the Robust Control of the Laboratory Overhead Crane

    NASA Astrophysics Data System (ADS)

    Smoczek, J.; Szpytko, J.; Hyla, P.

    2014-07-01

    The paper describes the software-hardware equipment and control-measurement solutions elaborated to prototype a laboratory-scale overhead crane control system. A novel approach to crane dynamic system modelling and fuzzy robust control scheme design is presented. An iterative procedure for designing a fuzzy scheduling control scheme is developed. It is based on interval analysis of the discrete-time closed-loop characteristic polynomial coefficients under variation of the rope length and payload mass, and selects the minimum set of operating points, corresponding to the midpoints of the membership functions, at which linear controllers are determined through desired pole assignment. The experimental results obtained on the laboratory stand are presented.

  17. Cost and schedule analytical techniques development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This contract provided technical services and products to the Marshall Space Flight Center's Engineering Cost Office (PP03) and the Program Plans and Requirements Office (PP02) for the period of 3 Aug. 1991 - 30 Nov. 1994. Accomplishments summarized cover the REDSTAR data base, NASCOM hard copy data base, NASCOM automated data base, NASCOM cost model, complexity generators, program planning, schedules, NASA computer connectivity, other analytical techniques, and special project support.

  18. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications from environment monitoring to event detection. The key challenge is to provide energy efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is to design an adaptive low latency and energy efficient MAC for data gathering to reduce the sleep latency. We propose staggered schedule, duty cycle adaptation, data prediction and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contentions. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst case end-to-end latency (referred to as delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for the tree and ring topologies and heuristics for arbitrary topologies. The third study addresses the problem of minimum latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved by M node-disjoint paths algorithm under multiple channel model. We further extended the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control and present both exponential and polynomial complexity solutions. Then we investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.
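
    The benefit of staggered wake-up schedules such as those used in DMAC can be illustrated with a toy latency model (a sketch only: it assumes each node wakes once per frame at a fixed slot and a packet advances one hop per wake slot; the function and numbers are hypothetical, not the dissertation's formulation).

        # Toy sketch of sleep-latency accounting: each hop wakes once per frame at
        # slot wake[i]; a packet is forwarded only when the next hop wakes, and each
        # forwarding occupies a slot, so the following hop must wake strictly later.

        def path_latency(wake, frame, start_slot):
            """Latency (in slots) for a packet generated at start_slot to traverse the path."""
            t = start_slot
            for slot in wake:
                wait = (slot - t) % frame
                t += wait if wait > 0 else frame   # zero wait means a full frame of sleep
            return t - start_slot

        print(path_latency([1, 2, 3, 4], frame=10, start_slot=0))   # 4: staggered wake-up slots
        print(path_latency([1, 1, 1, 1], frame=10, start_slot=0))   # 31: identical wake-up slots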

  19. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
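
    A greatly simplified sketch of the FCFS idea (not the TMA scheduler itself, which also handles runway allocation and delay distribution between airspaces): aircraft are taken in order of estimated arrival time and assigned the earliest landing time respecting a fixed runway separation. The separation value and ETAs below are hypothetical.

        # Simplified FCFS sketch: take aircraft in ETA order and give each the earliest
        # landing time that respects a fixed separation behind the previous arrival.

        def fcfs_schedule(etas, separation):
            """etas: estimated arrival times (minutes); returns scheduled landing times."""
            schedule = []
            last = None
            for eta in sorted(etas):
                slot = eta if last is None else max(eta, last + separation)
                schedule.append(slot)
                last = slot
            return schedule

        etas = [10.0, 10.5, 11.0, 14.0]
        print(fcfs_schedule(etas, separation=2.0))   # [10.0, 12.0, 14.0, 16.0]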

  20. Modeling Tumor Clonal Evolution for Drug Combinations Design.

    PubMed

    Zhao, Boyang; Hemann, Michael T; Lauffenburger, Douglas A

    2016-03-01

    Cancer is a clonal evolutionary process. This presents challenges for effective therapeutic intervention, given the constant selective pressure towards drug resistance. Mathematical modeling from population genetics, evolutionary dynamics, and engineering perspectives is increasingly being employed to study tumor progression, intratumoral heterogeneity, drug resistance, and rational drug scheduling and combinations design. In this review, we discuss promising opportunities these inter-disciplinary approaches hold for advances in cancer biology and treatment. We propose that quantitative modeling perspectives can complement emerging experimental technologies to facilitate enhanced understanding of disease progression and improved capabilities for therapeutic drug regimen designs.

  1. Choice in situations of time-based diminishing returns: immediate versus delayed consequences of action.

    PubMed Central

    Hackenberg, T D; Hineline, P N

    1992-01-01

    Pigeons chose between two schedules of food presentation, a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased by 20 s with each food delivery provided by that schedule. Choosing one schedule disabled the alternate schedule and stimuli until the requirements of the chosen schedule were satisfied, at which point both schedules were again made available. Fixed-interval duration remained constant within individual sessions but varied across conditions. Under reset conditions, completing the fixed-interval schedule not only produced food but also reset the progressive interval to its minimum. Blocks of sessions under the reset procedure were interspersed with sessions under a no-reset procedure, in which the progressive schedule value increased independent of fixed-interval choices. Median points of switching from the progressive to the fixed schedule varied systematically with fixed-interval value, and were consistently lower during reset than during no-reset conditions. Under the latter, each subject's choices of the progressive-interval schedule persisted beyond the point at which its requirements equaled those of the fixed-interval schedule at all but the highest fixed-interval value. Under the reset procedure, switching occurred at or prior to that equality point. These results qualitatively confirm molar analyses of schedule preference and some versions of optimality theory, but they are more adequately characterized by a model of schedule preference based on the cumulated values of multiple reinforcers, weighted in inverse proportion to the delay between the choice and each successive reinforcer. PMID:1548449
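
    The preference model mentioned in the last sentence can be sketched as follows (an illustration only: the 1/delay weighting, the three-reinforcer horizon and the delay values are assumptions, not the fitted model from the paper).

        # Illustrative sketch of a cumulated, delay-weighted reinforcer value.

        def cumulated_value(delays):
            """Sum of upcoming reinforcers, each weighted in inverse proportion to its delay (s)."""
            return sum(1.0 / d for d in delays)

        option_a = [110, 180, 250]   # hypothetical delays to the next three food deliveries
        option_b = [100, 300, 500]

        # Option A is preferred despite its longer first delay, because its later
        # reinforcers arrive sooner and still carry weight in the cumulated value.
        print(cumulated_value(option_a) > cumulated_value(option_b))   # True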

  2. Automated array assembly, phase 2

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1979-01-01

    Tasks of scaling up the tandem junction cell (TJC) from 2 cm x 2 cm to 6.2 cm and the assembly of several modules using these large-area TJCs are described. The scale-up of the TJC was based on using the existing process and doing the necessary design activities to increase the cell area to an acceptably large area. The design was carried out using available device models. The design was verified and sample large-area TJCs were fabricated. Mechanical and process problems occurred, causing a schedule slippage that resulted in contract expiration before enough large-area TJCs were fabricated to populate the sample tandem junction modules (TJM). A TJM design was carried out in which the module interconnects served to augment the current-collecting buses on the cell. No sample TJMs were assembled due to a shortage of large-area TJCs.

  3. Supporting Space Systems Design via Systems Dependency Analysis Methodology

    NASA Astrophysics Data System (ADS)

    Guariniello, Cesare

    The increasing size and complexity of space systems and space missions pose severe challenges to space systems engineers. When complex systems and Systems-of-Systems are involved, the behavior of the whole entity is not only due to that of the individual systems involved but also to the interactions and dependencies between the systems. Dependencies can be varied and complex, and designers usually do not perform analysis of the impact of dependencies at the level of complex systems, or this analysis involves excessive computational cost, or occurs at a later stage of the design process, after designers have already set detailed requirements, following a bottom-up approach. While classical systems engineering attempts to integrate the perspectives involved across the variety of engineering disciplines and the objectives of multiple stakeholders, there is still a need for more effective tools and methods capable of identifying, analyzing and quantifying properties of the complex system as a whole and of explicitly modeling the effect of some of the features that characterize complex systems. This research describes the development and usage of Systems Operational Dependency Analysis and Systems Developmental Dependency Analysis, two methods based on parametric models of the behavior of complex systems, one in the operational domain and one in the developmental domain. The parameters of the developed models have intuitive meaning, are usable with subjective and quantitative data alike, and give direct insight into the causes of observed, and possibly emergent, behavior. The approach proposed in this dissertation combines models of one-to-one dependencies among systems and between systems and capabilities, to analyze and evaluate the impact of failures or delays on the outcome of the whole complex system. The analysis accounts for cascading effects, partial operational failures, multiple failures or delays, and partial developmental dependencies. The user of these methods can assess the behavior of each system based on its internal status and on the topology of its dependencies on systems connected to it. Designers and decision makers can therefore quickly analyze and explore the behavior of complex systems and evaluate different architectures under various working conditions. The methods support educated decision making both in the design and in the update process of systems architecture, reducing the need to execute extensive simulations. In particular, in the phase of concept generation and selection, the information given by the methods can be used to identify promising architectures to be further tested and improved, while discarding architectures that do not show the required level of global features. The methods, when used in conjunction with appropriate metrics, also allow for improved reliability and risk analysis, as well as for automatic scheduling and re-scheduling based on the features of the dependencies and on the accepted level of risk. This dissertation illustrates the use of the two methods in sample aerospace applications, both in the operational and in the developmental domain. The applications show how to use the developed methodology to evaluate the impact of failures, assess the criticality of systems, quantify metrics of interest, quantify the impact of delays, support informed decision making when scheduling the development of systems and evaluate the achievement of partial capabilities.
A larger, well-framed case study illustrates how the Systems Operational Dependency Analysis method and the Systems Developmental Dependency Analysis method can support analysis and decision making, at the mid and high level, in the design process of architectures for the exploration of Mars. The case study also shows how the methods do not replace the classical systems engineering methodologies, but support and improve them.

  4. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
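
    A toy illustration of the chance-constraint idea in the linear-quadratic model (the BED formula is standard; the sparing-factor distribution, alpha/beta ratio and dose limit below are hypothetical, and the paper's actual formulation optimizes the schedule rather than merely checking one).

        # Estimate the probability that a candidate fractionation keeps the organ-at-risk
        # biologically effective dose (BED) below a threshold when the sparing factor is
        # uncertain. All parameter values are hypothetical.
        import random

        def bed(n, d, alpha_beta):
            """Standard LQ biologically effective dose for n fractions of size d (Gy)."""
            return n * d * (1.0 + d / alpha_beta)

        def oar_constraint_probability(n, d, bed_max, trials=10000):
            ok = 0
            for _ in range(trials):
                sparing = random.gauss(0.7, 0.05)        # uncertain OAR sparing factor
                if bed(n, sparing * d, alpha_beta=3.0) <= bed_max:
                    ok += 1
            return ok / trials

        print(oar_constraint_probability(n=35, d=2.0, bed_max=75.0))   # roughly 0.68 under these assumed numbers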

  5. Projecting Future Scheduled Airline Demand, Schedules and NGATS Benefits Using TSAM

    NASA Technical Reports Server (NTRS)

    Dollyhigh, Samuel; Smith, Jeremy; Viken, Jeff; Trani, Antonio; Baik, Hojong; Hinze, Nickolas; Ashiabor, Senanu

    2006-01-01

    The Transportation Systems Analysis Model (TSAM) developed by Virginia Tech's Air Transportation Systems Lab and NASA Langley can provide detailed analysis of the effects on the demand for air travel of a full range of NASA and FAA aviation projects. TSAM has been used to project the passenger demand for very light jet (VLJ) air taxi service, scheduled airline demand growth and future schedules, Next Generation Air Transportation System (NGATS) benefits, and future passenger revenues for the Airport and Airway Trust Fund. TSAM can project the resulting demand when new vehicles and/or technology is inserted into the long distance (100 or more miles one-way) transportation system, as well as changes in demand as a result of fare yield increases or decreases, airport transit times, scheduled flight times, ticket taxes, reductions or increases in flight delays, and so on. TSAM models all long distance travel in the contiguous U.S. and determines the mode choice of the traveler based on detailed trip costs, travel time, schedule frequency, purpose of the trip (business or non-business), and household income level of the traveler. Demand is modeled at the county level, with an airport choice module providing up to three airports as part of the mode choice. Future enplanements at airports can be projected for different scenarios. A Fratar algorithm and a schedule generator are applied to generate future flight schedules. This paper presents the application of TSAM to modeling future scheduled air passenger demand and resulting airline schedules, the impact of NGATS goals and objectives on passenger demand, along with projections for passenger fee receipts for several scenarios for the FAA Airport and Airway Trust Fund.
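
    The Fratar step mentioned above is an iterative proportional (growth-factor) balancing of an origin-destination matrix toward forecast totals. A minimal sketch with hypothetical flight counts and targets:

        # Minimal Fratar (iterative proportional) balancing: scale a base origin-destination
        # matrix until its row/column sums match forecast totals. Data are hypothetical.

        def fratar(matrix, row_targets, col_targets, iters=50):
            m = [row[:] for row in matrix]
            for _ in range(iters):
                for i, target in enumerate(row_targets):            # balance row sums
                    s = sum(m[i])
                    if s > 0:
                        m[i] = [v * target / s for v in m[i]]
                for j, target in enumerate(col_targets):            # balance column sums
                    s = sum(row[j] for row in m)
                    if s > 0:
                        for row in m:
                            row[j] *= target / s
            return m

        base = [[0, 100, 50], [80, 0, 40], [60, 30, 0]]             # today's daily flights
        grown = fratar(base, row_targets=[180, 150, 110], col_targets=[170, 160, 110])
        print([[round(v, 1) for v in row] for row in grown])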

  6. A two-stage stochastic optimization model for scheduling electric vehicle charging loads to relieve distribution-system constraints

    DOE PAGES

    Wu, Fei; Sioshansi, Ramteen

    2017-05-25

    Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
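
    The flexibility being exploited can be illustrated with a toy deterministic stand-in for the two-stage model (not the paper's formulation): charge each EV during the cheapest expected-price hours inside its parking window, with prices averaged over sampled scenarios. All numbers are hypothetical.

        # Toy sketch of exploiting EV charging flexibility: charge each EV in the
        # cheapest expected-price hours within its parking window.

        price_scenarios = [
            [30, 28, 25, 40, 55, 60, 45, 35],    # $/MWh, one row per sampled scenario
            [32, 27, 24, 42, 50, 65, 48, 33],
        ]
        expected_price = [sum(s[h] for s in price_scenarios) / len(price_scenarios)
                          for h in range(len(price_scenarios[0]))]

        def charge_plan(arrival, departure, hours_needed):
            """Pick the cheapest `hours_needed` hours in [arrival, departure)."""
            window = list(range(arrival, departure))
            return sorted(sorted(window, key=lambda h: expected_price[h])[:hours_needed])

        print(charge_plan(arrival=0, departure=6, hours_needed=3))   # [0, 1, 2] under these prices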

  7. The VTIE telescope resource management system

    NASA Astrophysics Data System (ADS)

    Busschots, B.; Keating, J. G.

    2005-06-01

    The VTIE Telescope Resource Management System (TRMS) provides a framework for managing a distributed group of internet telescopes as a single "Virtual Observatory". The TRMS provides hooks that allow it to be connected to any Java-based web portal and for a Java-based scheduler to be added to it. The TRMS represents each telescope and observatory in the system with a software agent and then allows the scheduler and web portal to communicate with these distributed resources in a simple, transparent way, hence allowing the scheduler and portal designers to concentrate only on what they wish to do with these resources rather than how to communicate with them. This paper outlines the structure and implementation of this framework.

  8. Autonomous planning and scheduling on the TechSat 21 mission

    NASA Technical Reports Server (NTRS)

    Sherwood, R.; Chien, S.; Castano, R.; Rabideau, G.

    2002-01-01

    The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting.

  9. Gain-Scheduled Complementary Filter Design for a MEMS Based Attitude and Heading Reference System

    PubMed Central

    Yoo, Tae Suk; Hong, Sung Kyung; Yoon, Hyok Min; Park, Sungsu

    2011-01-01

    This paper describes a robust and simple algorithm for an attitude and heading reference system (AHRS) based on low-cost MEMS inertial and magnetic sensors. The proposed approach relies on a gain-scheduled complementary filter, augmented by an acceleration-based switching architecture to yield robust performance, even when the vehicle is subject to strong accelerations. Experimental results are provided for a road captive test during which the vehicle dynamics are in high-acceleration mode and the performance of the proposed filter is evaluated against the output from a conventional linear complementary filter. PMID:22163824
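
    A minimal one-axis sketch of the filter concept (the gains, thresholds and the 1-g switching rule below are illustrative assumptions, not the paper's tuned design): gyro integration is corrected toward the accelerometer tilt estimate, and the correction gain is reduced whenever the measured specific force departs from 1 g.

        # One-axis gain-scheduled complementary filter sketch (assumed gains): the
        # accelerometer correction weight k is lowered during high acceleration.
        import math

        def update(angle, gyro_rate, ax, az, dt, g=9.81):
            accel_angle = math.atan2(ax, az)              # tilt implied by the accelerometer
            accel_norm = math.sqrt(ax * ax + az * az)
            if abs(accel_norm - g) < 0.5:                 # near-static: trust the accelerometer more
                k = 0.02
            else:                                         # strong acceleration: rely on the gyro
                k = 0.001
            predicted = angle + gyro_rate * dt
            return (1.0 - k) * predicted + k * accel_angle

        angle = 0.0
        for _ in range(100):                              # stationary vehicle, true tilt = 0.1 rad
            angle = update(angle, gyro_rate=0.001,        # small gyro bias
                           ax=9.81 * math.sin(0.1), az=9.81 * math.cos(0.1), dt=0.01)
        print(round(angle, 3))                            # approaches the 0.1 rad accelerometer tilt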

  10. O/S analysis of conceptual space vehicles. Part 1

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles E.

    1995-01-01

    The application of recently developed computer models in determining operational capabilities and support requirements during the conceptual design of proposed space systems is discussed. The models used are the reliability and maintainability (R&M) model, the maintenance simulation model, and the operations and support (O&S) cost model. In the process of applying these models, the R&M and O&S cost models were updated. The more significant enhancements include (1) improved R&M equations for the tank subsystems, (2) the ability to allocate schedule maintenance by subsystem, (3) redefined spares calculations, (4) computing a weighted average of the working days and mission days per month, (5) the use of a position manning factor, and (6) the incorporation into the O&S model of new formulas for computing depot and organizational recurring and nonrecurring training costs and documentation costs, and depot support equipment costs. The case study used is based upon a winged, single-stage, vertical-takeoff vehicle (SSV) designed to deliver to the Space Station Freedom (SSF) a 25,000 lb payload including passengers without a crew.

  11. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows each operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems, which are the assignment and the scheduling problems. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.

  12. Case study: Lockheed-Georgia Company integrated design process

    NASA Technical Reports Server (NTRS)

    Waldrop, C. T.

    1980-01-01

    A case study of the development of an Integrated Design Process is presented. The approach taken in preparing for the development of an integrated design process includes some of the IPAD approaches such as developing a Design Process Model, cataloging Technical Program Elements (TPE's), and examining data characteristics and interfaces between contiguous TPE's. The implementation plan is based on an incremental development of capabilities over a period of time with each step directed toward, and consistent with, the final architecture of a total integrated system. Because of time schedules and different computer hardware, this system will not be the same as the final IPAD release; however, many IPAD concepts will no doubt prove applicable as the best approach. Full advantage will be taken of the IPAD development experience. A scenario that could be typical for many companies, even outside the aerospace industry, in developing an integrated design process for an IPAD-type environment is represented.

  13. Automatic programming via iterated local search for dynamic job shop scheduling.

    PubMed

    Nguyen, Su; Zhang, Mengjie; Johnston, Mark; Tan, Kay Chen

    2015-01-01

    Dispatching rules have been commonly used in practice for making sequencing and scheduling decisions. Due to specific characteristics of each manufacturing system, there is no universal dispatching rule that can dominate in all situations. Therefore, it is important to design specialized dispatching rules to enhance the scheduling performance for each manufacturing environment. Evolutionary computation approaches such as tree-based genetic programming (TGP) and gene expression programming (GEP) have been proposed to facilitate the design task through automatic design of dispatching rules. However, these methods are still limited by their high computational cost and low exploitation ability. To overcome this problem, we develop a new approach to automatic programming via iterated local search (APRILS) for dynamic job shop scheduling. The key idea of APRILS is to perform multiple local searches started with programs modified from the best obtained programs so far. The experiments show that APRILS outperforms TGP and GEP in most simulation scenarios in terms of effectiveness and efficiency. The analysis also shows that programs generated by APRILS are more compact than those obtained by genetic programming. An investigation of the behavior of APRILS suggests that the good performance of APRILS comes from the balance between exploration and exploitation in its search mechanism.
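
    Dispatching rules of this kind are priority functions evaluated whenever a machine becomes free. The composite rule below is a hand-written illustration of the representation being evolved, not a rule produced by APRILS, TGP or GEP; the weights and job data are hypothetical.

        # Minimal dispatching-rule sketch for a dynamic job shop: when a machine frees
        # up, pick the waiting operation with the best (smallest) priority value.

        def priority(op, now):
            slack = op["due"] - now - op["proc_time"]      # time to spare before the due date
            return op["proc_time"] + 0.5 * slack           # negative slack (nearly late) boosts urgency

        def dispatch(queue, now):
            return min(queue, key=lambda op: priority(op, now))

        queue = [
            {"job": "A", "proc_time": 4.0, "due": 20.0},
            {"job": "B", "proc_time": 7.0, "due": 12.0},
            {"job": "C", "proc_time": 2.0, "due": 40.0},
        ]
        print(dispatch(queue, now=10.0)["job"])   # "B": long but nearly overdue, so it goes first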

  14. Rolling scheduling of electric power system with wind power based on improved NNIA algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.

    2017-11-01

    This paper puts forth a rolling modification strategy for day-ahead scheduling of a power system with wind power, which takes the unit operation cost increment and the curtailed wind power of the grid as dual modification objectives. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for its solution. The proposed rolling scheduling model further reduces the system operation cost in the intra-day generation process, enhances the system's capacity to accommodate wind power, and modifies the power flow on key transmission sections in a rolling manner to satisfy the security constraints of the grid. The improved NNIA algorithm defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints and the maximum and minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, significantly speed up convergence of the algorithm to the final solution, and enhance the local search capability.

  15. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    PubMed

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
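
    The sample-based planning idea can be sketched as follows (the toy dynamics, costs and capacity are assumptions, not the paper's admissions model): from the current occupancy state, each candidate admission action is scored by averaging random rollouts of a small simulator, so no explicit enumeration of the state space is needed.

        # Sketch of sample-based (on-demand) planning for elective admissions: score each
        # action by Monte Carlo rollouts from the current state. Dynamics/costs assumed.
        import random

        CAPACITY = 20

        def step(occupied, admit):
            discharges = sum(random.random() < 0.15 for _ in range(occupied))
            emergencies = random.randint(0, 3)
            occupied = min(CAPACITY, occupied - discharges + admit + emergencies)
            overflow_cost = 5.0 * max(0, occupied - CAPACITY + 2)   # penalty when close to capacity
            idle_cost = 1.0 * max(0, CAPACITY - occupied - 8)       # penalty for many empty beds
            return occupied, overflow_cost + idle_cost

        def rollout_value(occupied, first_admit, horizon=10):
            occupied, total = step(occupied, first_admit)[0], step(occupied, first_admit)[1]
            for _ in range(horizon - 1):
                occupied, cost = step(occupied, random.randint(0, 2))   # random default policy
                total += cost
            return total

        def best_action(occupied, actions=(0, 1, 2, 3), samples=500):
            return min(actions,
                       key=lambda a: sum(rollout_value(occupied, a) for _ in range(samples)) / samples)

        print(best_action(occupied=15))   # elective admissions to schedule today (toy numbers)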

  16. Traffic shaping and scheduling for OBS-based IP/WDM backbones

    NASA Astrophysics Data System (ADS)

    Elhaddad, Mahmoud S.; Melhem, Rami G.; Znati, Taieb; Basak, Debashis

    2003-10-01

    We introduce Proactive Reservation-based Switching (PRS) -- a switching architecture for IP/WDM networks based on Labeled Optical Burst Switching. PRS achieves packet delay and loss performance comparable to that of packet-switched networks, without requiring large buffering capacity, or burst scheduling across a large number of wavelengths at the core routers. PRS combines proactive channel reservation with periodic shaping of ingress-egress traffic aggregates to hide the offset latency and approximate the utilization/buffering characteristics of discrete-time queues with periodic arrival streams. A channel scheduling algorithm imposes constraints on burst departure times to ensure efficient utilization of wavelength channels and to maintain the distance between consecutive bursts through the network. Results obtained from simulation using TCP traffic over carefully designed topologies indicate that PRS consistently achieves channel utilization above 90% with modest buffering requirements.

  17. A Method for Forecasting the Commercial Air Traffic Schedule in the Future

    NASA Technical Reports Server (NTRS)

    Long, Dou; Lee, David; Gaier, Eric; Johnson, Jesse; Kostiuk, Peter

    1999-01-01

    This report presents an integrated set of models that forecasts air carriers' future operations when delays due to limited terminal-area capacity are considered. This report models the industry as a whole, avoiding unnecessary details of competition among the carriers. To develop the schedule outputs, we first present a model to forecast the unconstrained flight schedules in the future, based on the assumption of rational behavior of the carriers. Then we develop a method to modify the unconstrained schedules, accounting for effects of congestion due to limited NAS capacities. Our underlying assumption is that carriers will modify their operations to keep mean delays within certain limits. We estimate values for those limits from changes in planned block times reflected in the OAG. Our method for modifying schedules takes many means of reducing the delays into consideration, albeit some of them indirectly. The direct actions include depeaking, operating in off-hours, and reducing hub airports' operations. Indirect actions include using secondary airports, using larger aircraft, and selecting new hub airports, which, we assume, have already been modeled in the FAA's TAF. Users of our suite of models can substitute an alternative forecast for the TAF.

  18. Shock Position Control for Mode Transition in a Turbine Based Combined Cycle Engine Inlet Model

    NASA Technical Reports Server (NTRS)

    Csank, Jeffrey T.; Stueber, Thomas J.

    2013-01-01

    A dual flow-path inlet for a turbine based combined cycle (TBCC) propulsion system is to be tested in order to evaluate methodologies for performing a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms which are designed to maintain shock position during inlet disturbances. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the development of a mode transition schedule for the HiTECC simulation that is analogous to the development of inlet performance maps. Inlet performance maps, derived through experimental means, describe the performance and operability of the inlet as the splitter closes, switching power production from the turbine engine to the Dual Mode Scram Jet. With knowledge of the operability and performance tradeoffs, a closed loop system can be designed to optimize the performance of the inlet. This paper demonstrates the design of the closed loop control system and benefit with the implementation of a Proportional-Integral controller, an H-Infinity based controller, and a disturbance observer based controller; all of which avoid inlet unstart during a mode transition with a simulated disturbance that would lead to inlet unstart without closed loop control.
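
    A minimal sketch of the Proportional-Integral loop mentioned above, holding a toy first-order shock-position model at its setpoint through a disturbance (the plant, gains and disturbance values are assumptions, not the HiTECC inlet model):

        # Minimal PI sketch: keep a toy first-order shock-position model at its setpoint
        # while a disturbance (e.g. splitter motion during mode transition) acts on it.

        def simulate(kp=2.0, ki=1.5, dt=0.01, t_end=5.0):
            x, setpoint, integral = 0.5, 0.5, 0.0                # normalized shock position
            history, t = [], 0.0
            while t < t_end:
                disturbance = 0.2 if 1.0 <= t <= 3.0 else 0.0
                error = setpoint - x
                integral += error * dt
                u = kp * error + ki * integral                   # bypass-door command
                x += dt * (-1.0 * x + 0.8 * u + disturbance + 0.5)   # toy first-order plant
                history.append(x)
                t += dt
            return history

        traj = simulate()
        print(round(max(abs(x - 0.5) for x in traj), 3))   # worst-case excursion from the setpoint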

  19. Modelling of project cash flow on construction projects in Malang city

    NASA Astrophysics Data System (ADS)

    Djatmiko, Bambang

    2017-09-01

    Contractors usually prepare a project cash flow (PCF) on construction projects. The flow of cash in and cash out within a construction project may vary depending on the owner, contract documents, and construction service providers who have their own authority. Other factors affecting the PCF are the down payment, termyn, progress schedule, material schedule, equipment schedule, manpower schedules, and wages of workers and subcontractors. This study aims to describe the cash inflow and cash outflow based on the empirical data obtained from contractors, develop a PCF model based on Halpen & Woodhead's PCF model, and investigate whether or not there is a significant difference between Halpen & Woodhead's PCF model and the empirical PCF model. Based on the researcher's observation, PCF management has never been implemented by the contractors in Malang in serving their clients (owners). The research setting is Malang City because physical development is taking place in all fields and there are many new construction service providers. The findings of this study are summarised as follows: 1) Cash in included current assets (20%), the owner's down payment (20%), termyn I (5%-25%), termyn II (20%), termyn III (25%), termyn IV (25%) and retention (5%); cash out included direct cost (65%), indirect cost (20%), and profit + informal cost (15%). 2) The construction work involving the empirical PCF model in this study was started with the funds obtained from the down payment or current assets. 3) The two models bear several similarities in the upward trends of direct cost, indirect cost, Pro Ic, progress billing, and S-curve. The difference between the two models is that an overdraft occurs only in Halpen & Woodhead's PCF model.

  20. Developing an efficient scheduling template of a chemotherapy treatment unit: A case study.

    PubMed

    Ahmed, Z; Elmekkawy, Ty; Bates, S

    2011-01-01

    This study was undertaken to improve the performance of a Chemotherapy Treatment Unit by increasing the throughput and reducing the average patient's waiting time. In order to achieve this objective, a scheduling template has been built. The scheduling template is a simple tool that can be used to schedule patients' arrival to the clinic. A simulation model of this system was built and several scenarios, that target match the arrival pattern of the patients and resources availability, were designed and evaluated. After performing detailed analysis, one scenario provide the best system's performance. A scheduling template has been developed based on this scenario. After implementing the new scheduling template, 22.5% more patients can be served. 1. CancerCare Manitoba is a provincially mandated cancer care agency. It is dedicated to provide quality care to those who have been diagnosed and are living with cancer. MacCharles Chemotherapy unit is specially built to provide chemotherapy treatment to the cancer patients of Winnipeg. In order to maintain an excellent service, it tries to ensure that patients get their treatment in a timely manner. It is challenging to maintain that goal because of the lack of a proper roster, the workload distribution and inefficient resource allotment. In order to maintain the satisfaction of the patients and the healthcare providers, by serving the maximum number of patients in a timely manner, it is necessary to develop an efficient scheduling template that matches the required demand with the availability of resources. This goal can be reached using simulation modelling. Simulation has proven to be an excellent modelling tool. It can be defined as building computer models that represent real world or hypothetical systems, and hence experimenting with these models to study system behaviour under different scenarios.1, 2 A study was undertaken at the Children's Hospital of Eastern Ontario to identify the issues behind the long waiting time of a emergency room.3 A 20---day field observation revealed that the availability of the staff physician and interaction affects the patient wait time. Jyväskylä et al.4 used simulation to test different process scenarios, allocate resources and perform activity---based cost analysis in the Emergency Department (ED) at the Central Hospital. The simulation also supported the study of a new operational method, named "triage-team" method without interrupting the main system. The proposed triage team method categorises the entire patient according to the urgency to see the doctor and allows the patient to complete the necessary test before being seen by the doctor for the first time. The simulation study showed that it will decrease the throughput time of the patient and reduce the utilisation of the specialist and enable the ordering all the tests the patient needs right after arrival, thus quickening the referral to treatment. Santibáñez et al.5 developed a discrete event simulation model of British Columbia Cancer Agency"s ambulatory care unit which was used to study the impact of scenarios considering different operational factors (delay in starting clinic), appointment schedule (appointment order, appointment adjustment, add---ons to the schedule) and resource allocation. It was found that the best outcomes were obtained when not one but multiple changes were implemented simultaneously. Sepúlveda et al.6 studied the M. D. 
Anderson Cancer Centre Orlando, which is a cancer treatment facility and built a simulation model to analyse and improve flow process and increase capacity in the main facility. Different scenarios were considered like, transferring laboratory and pharmacy areas, adding an extra blood draw room and applying different scheduling techniques of patients. The study shows that by increasing the number of short---term (four hours or less) patients in the morning could increase chair utilisation. Discrete event simulation also helps improve a service where staff are ignorant about the behaviour of the system as a whole; which can also be described as a real professional system. Niranjon et al.7 used simulation successfully where they had to face such constraints and lack of accessible data. Carlos et al. 8 used Total quality management and simulation - animation to improve the quality of the emergency room. Simulation was used to cover the key point of the emergency room and animation was used to indicate the areas of opportunity required. This study revealed that a long waiting time, overload personnel and increasing withdrawal rate of patients are caused by the lack of capacity in the emergency room. Baesler et al.9 developed a methodology for a cancer treatment facility to find stochastically a global optimum point for the control variables. A simulation model generated the output using a goal programming framework for all the objectives involved in the analysis. Later a genetic algorithm was responsible for performing the search for an improved solution. The control variables that were considered in this research are number of treatment chairs, number of drawing blood nurses, laboratory personnel, and pharmacy personnel. Guo et al. 10 presented a simulation framework considering demand for appointment, patient flow logic, distribution of resources, scheduling rules followed by the scheduler. The objective of the study was to develop a scheduling rule which will ensure that 95% of all the appointment requests should be seen within one week after the request is made to increase the level of patient satisfaction and balance the schedule of each doctor to maintain a fine harmony between "busy clinic" and "quiet clinic". Huschka et al.11 studied a healthcare system which was about to change their facility layout. In this case a simulation model study helped them to design a new healthcare practice by evaluating the change in layout before implementation. Historical data like the arrival rate of the patients, number of patients visited each day, patient flow logic, was used to build the current system model. Later, different scenarios were designed which measured the changes in the current layout and performance. Wijewickrama et al.12 developed a simulation model to evaluate appointment schedule (AS) for second time consultations and patient appointment sequence (PSEQ) in a multi---facility system. Five different appointment rule (ARULE) were considered: i) Baily; ii) 3Baily; iii) Individual (Ind); iv) two patients at a time (2AtaTime); v) Variable Interval and (V---I) rule. PSEQ is based on type of patients: Appointment patients (APs) and new patients (NPs). 
The different PSEQ that were studied in this study were: i) first--- come first---serve; ii) appointment patient at the beginning of the clinic (APBEG); iii) new patient at the beginning of the clinic (NPBEG); iv) assigning appointed and new patients in an alternating manner (ALTER); v) assigning a new patient after every five---appointment patients. Also patient no show (0% and 5%) and patient punctuality (PUNCT) (on---time and 10 minutes early) were also considered. The study found that ALTER---Ind. and ALTER5---Ind. performed best on 0% NOSHOW, on---time PUNCT and 5% NOSHOW, on---time PUNCT situation to reduce WT and IT per patient. As NOSHOW created slack time for waiting patients, their WT tends to reduce while IT increases due to unexpected cancellation. Earliness increases congestion whichin turn increases waiting time. Ramis et al.13 conducted a study of a Medical Imaging Center (MIC) to build a simulation model which was used to improve the patient journey through an imaging centre by reducing the wait time and making better use of the resources. The simulation model also used a Graphic User Interface (GUI) to provide the parameters of the centre, such as arrival rates, distances, processing times, resources and schedule. The simulation was used to measure the waiting time of the patients in different case scenarios. The study found that assigning a common function to the resource personnel could improve the waiting time of the patients. The objective of this study is to develop an efficient scheduling template that maximises the number of served patients and minimises the average patient's waiting time at the given resources availability. To accomplish this objective, we will build a simulation model which mimics the working conditions of the clinic. Then we will suggest different scenarios of matching the arrival pattern of the patients with the availability of the resources. Full experiments will be performed to evaluate these scenarios. Hence, a simple and practical scheduling template will be built based on the indentified best scenario. The developed simulation model is described in section 2, which consists of a description of the treatment room, and a description of the types of patients and treatment durations. In section 3, different improvement scenarios are described and their analysis is presented in section 4. Section 5 illustrates a scheduling template based on one of the improvement scenarios. Finally, the conclusion and future direction of our work is exhibited in section 6. 2. A simulation model represents the actual system and assists in visualising and evaluating the performance of the system under different scenarios without interrupting the actual system. Building a proper simulation model of a system consists of the following steps. Observing the system to understand the flow of the entities, key players, availability of resources and overall generic framework.Collecting the data on the number and type of entities, time consumed by the entities at each step of their journey, and availability of resources.After building the simulation model it is necessary to confirm that the model is valid. (ABSTRACT TRUNCATED)
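
    A minimal discrete-event sketch in the spirit of the model described above (it requires the simpy package; the arrival template, chair count and infusion times are hypothetical, not CancerCare Manitoba data): patients arrive according to a schedule template, wait for a treatment chair, and occupy it for their infusion time, after which the average wait and the number of patients seated are reported.

        # Minimal discrete-event sketch of a chemotherapy unit (hypothetical data).
        import random
        import simpy

        WAITS = []                                   # minutes each patient waited for a chair

        def patient(env, chairs, infusion_time):
            arrived = env.now
            with chairs.request() as req:
                yield req                            # wait for a free treatment chair
                WAITS.append(env.now - arrived)
                yield env.timeout(infusion_time)     # occupy the chair for the infusion

        def arrivals(env, chairs, template):
            for arrival_time, infusion_time in template:
                yield env.timeout(max(0, arrival_time - env.now))
                env.process(patient(env, chairs, infusion_time))

        random.seed(1)
        template = [(15 * i, random.choice([60, 120, 240])) for i in range(20)]   # one arrival every 15 min
        env = simpy.Environment()
        chairs = simpy.Resource(env, capacity=5)
        env.process(arrivals(env, chairs, template))
        env.run(until=12 * 60)                       # one 12-hour clinic day
        print(len(WAITS), round(sum(WAITS) / len(WAITS), 1))   # patients seated, average wait (min)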

  1. Automating the self-scheduling process of nurses in Swedish healthcare: a pilot study.

    PubMed

    Rönnberg, Elina; Larsson, Torbjörn

    2010-03-01

    Hospital wards need to be staffed by nurses round the clock, resulting in irregular working hours for many nurses. Over the years, the nurses' influence on the scheduling has been increased in order to improve their working conditions. In Sweden it is common to apply a kind of self-scheduling where each nurse individually proposes a schedule, and then the final schedule is determined through informal negotiations between the nurses. This kind of self-scheduling is very time-consuming and often leads to conflicts. We present a pilot study which aims at determining if it is possible to create an optimisation tool that automatically delivers a usable schedule based on the schedules proposed by the nurses. The study is performed at a typical Swedish nursing ward, for which we have developed a mathematical model and delivered schedules. The results of this study are very promising and suggest continued work along these lines.

  2. Simultaneous personnel and vehicle shift scheduling in the waste management sector.

    PubMed

    Ghiani, Gianpaolo; Guerriero, Emanuela; Manni, Andrea; Manni, Emanuele; Potenza, Agostino

    2013-07-01

Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources, and having a major environmental impact. The design of a waste management system consists of various activities, and one of these is related to the definition of shift schedules for both personnel and vehicles. This activity has a major impact on the tactical and operational costs for companies. In this paper, we propose an integer programming model to find an optimal solution to the integrated problem. The aim is to determine optimal schedules at minimum cost. Moreover, we design a fast and effective heuristic to face large-size problems. Both approaches are tested on data from a real-world case in Southern Italy and compared to the current practice utilized by the company managing the service, showing that simultaneously solving these problems can lead to significant monetary savings. Copyright © 2013 Elsevier Ltd. All rights reserved.
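
    The general shape of such an integer programme can be sketched with the open-source PuLP modelling package (an assumed tooling choice; the crews, shifts, costs and constraints below are toy placeholders, not the authors' formulation): binary variables assign crews to shifts at minimum cost while every shift is covered.

        import pulp

        crews = ["c1", "c2", "c3"]
        shifts = ["morning", "evening"]
        cost = {("c1", "morning"): 4, ("c1", "evening"): 6,
                ("c2", "morning"): 5, ("c2", "evening"): 4,
                ("c3", "morning"): 7, ("c3", "evening"): 5}

        prob = pulp.LpProblem("shift_assignment", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", (crews, shifts), cat="Binary")

        # objective: total assignment cost
        prob += pulp.lpSum(cost[c, s] * x[c][s] for c in crews for s in shifts)
        # each shift must be covered by exactly one crew
        for s in shifts:
            prob += pulp.lpSum(x[c][s] for c in crews) == 1
        # each crew works at most one shift
        for c in crews:
            prob += pulp.lpSum(x[c][s] for s in shifts) <= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([(c, s) for c in crews for s in shifts if x[c][s].value() == 1])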

  3. Systems cost/performance analysis; study 2.3. Volume 3: Programmer's manual and user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The implementation of the entire systems cost/performance model as a digital computer program was studied. A discussion of the operating environment in which the program was written and checked, the program specifications such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines involved in the nondesign area such as costing and scheduling of the design were covered. Preliminary results for the DSCS-2 design are also included.

  4. System cost/performance analysis (study 2.3). Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Kazangey, T.

    1973-01-01

    The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.

  5. Single-machine group scheduling problems with deteriorating and learning effect

    NASA Astrophysics Data System (ADS)

    Xingong, Zhang; Yong, Wang; Shikun, Bai

    2016-07-01

The concepts of deteriorating jobs and learning effects have been individually studied in many scheduling problems. However, most studies considering deteriorating and learning effects ignore the fact that production efficiency can be increased by grouping various parts and products with similar designs and/or production processes. This phenomenon is known as 'group technology' in the literature. In this paper, a new group scheduling model with deteriorating and learning effects is proposed, where the learning effect depends not only on the job position, but also on the position of the corresponding job group, and the deteriorating effect depends on the starting time of the job. This paper shows that the makespan and total completion time problems remain optimally solvable in polynomial time under the proposed model. In addition, a polynomial-time optimal solution is also presented for minimising the maximum lateness under a certain agreeable restriction.
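
    As a rough illustration of position- and time-dependent processing times of the kind described above, the sketch below computes completion times when the actual processing time shrinks with a learning exponent on the group and job positions and grows linearly with the job's start time. The functional form and parameter values are assumptions for illustration, not the paper's exact model.

        def completion_times(groups, learning_exp=-0.2, deterioration=0.05):
            """groups: list of lists of base processing times, processed group by group.
            Assumed form: actual = base * (group pos ** a) * (job pos ** a) * (1 + b * start)."""
            t, schedule = 0.0, []
            for g, jobs in enumerate(groups, start=1):
                for r, p in enumerate(jobs, start=1):
                    actual = p * (g ** learning_exp) * (r ** learning_exp) * (1 + deterioration * t)
                    t += actual
                    schedule.append((g, r, round(t, 2)))
            return schedule

        # two groups of jobs with base processing times (assumed data)
        print(completion_times([[5, 3, 4], [2, 6]]))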

  6. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

Job Shop Scheduling, viewed as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to compare and evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.

  7. Design and specification of a centralized manufacturing data management and scheduling system

    NASA Technical Reports Server (NTRS)

    Farrington, Phillip A.

    1993-01-01

As was revealed in a previous study, the Materials and Processes Laboratory's Productivity Enhancement Complex (PEC) has a number of automated production areas/cells that are not effectively integrated, limiting the ability of users to readily share data. The recent decision to utilize the PEC for the fabrication of flight hardware has focused new attention on the problem and brought to light the need for an integrated data management and scheduling system. This report addresses this need by developing preliminary design specifications for a centralized manufacturing data management and scheduling system for managing flight hardware fabrication in the PEC. This prototype system will be developed under the auspices of the Integrated Engineering Environment (IEE) Oversight team and the IEE Committee. At their recommendation the system specifications were based on the fabrication requirements of the AXAF-S Optical Bench.

  8. Robust Gain-Scheduled Fault Tolerant Control for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Gregory, Irene

    2007-01-01

    This paper presents an application of robust gain-scheduled control concepts using a linear parameter-varying (LPV) control synthesis method to design fault tolerant controllers for a civil transport aircraft. To apply the robust LPV control synthesis method, the nonlinear dynamics must be represented by an LPV model, which is developed using the function substitution method over the entire flight envelope. The developed LPV model associated with the aerodynamic coefficient uncertainties represents nonlinear dynamics including those outside the equilibrium manifold. Passive and active fault tolerant controllers (FTC) are designed for the longitudinal dynamics of the Boeing 747-100/200 aircraft in the presence of elevator failure. Both FTC laws are evaluated in the full nonlinear aircraft simulation in the presence of the elevator fault and the results are compared to show pros and cons of each control law.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM-Parallel, Distributed computation Graph Model-a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG-Distributed Computing Precedence Graph-model, and the PAM-Process Architecture Model-model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA-VISual Assistant, a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations which are procedures that can be executed by VISA.

  10. New Integrated Modeling Capabilities: MIDAS' Recent Behavioral Enhancements

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.; Jarvis, Peter A.

    2005-01-01

    The Man-machine Integration Design and Analysis System (MIDAS) is an integrated human performance modeling software tool that is based on mechanisms that underlie and cause human behavior. A PC-Windows version of MIDAS has been created that integrates the anthropometric character "Jack (TM)" with MIDAS' validated perceptual and attention mechanisms. MIDAS now models multiple simulated humans engaging in goal-related behaviors. New capabilities include the ability to predict situations in which errors and/or performance decrements are likely due to a variety of factors including concurrent workload and performance influencing factors (PIFs). This paper describes a new model that predicts the effects of microgravity on a mission specialist's performance, and its first application to simulating the task of conducting a Life Sciences experiment in space according to a sequential or parallel schedule of performance.

  11. Title I preliminary engineering for: A. S. E. F. solid waste to methane gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1976-01-01

An assignment to provide preliminary engineering of an Advanced System Experimental Facility for production of methane gas from urban solid waste by anaerobic digestion is documented. The experimental facility will be constructed on a now-existing solid waste shredding and landfill facility in Pompano Beach, Florida. Information is included on: general description of the project; justification of basic need; process design; preliminary drawings; outline specifications; preliminary estimate of cost; and time schedules for accomplishment of design and construction. The preliminary cost estimate for the design and construction phases of the experimental program is $2,960,000, based on Dec. 1975 and Jan. 1976 costs. A time schedule of eight months to complete the Detailed Design, Equipment Procurement and the Award of Subcontracts is given.

  12. Automation Improves Schedule Quality and Increases Scheduling Efficiency for Residents.

    PubMed

    Perelstein, Elizabeth; Rose, Ariella; Hong, Young-Chae; Cohn, Amy; Long, Micah T

    2016-02-01

Medical resident scheduling is difficult due to multiple rules, competing educational goals, and ever-evolving graduate medical education requirements. Despite this, schedules are typically created manually, consuming hours of work, producing schedules of varying quality, and yielding negative consequences for resident morale and learning. To determine whether computerized decision support can improve the construction of residency schedules, saving time and improving schedule quality. The Optimized Residency Scheduling Assistant was designed by a team from the University of Michigan Department of Industrial and Operations Engineering. It was implemented in the C.S. Mott Children's Hospital Pediatric Emergency Department in the 2012-2013 academic year. The 4 metrics of schedule quality that were compared between the 2010-2011 and 2012-2013 academic years were the incidence of challenging shift transitions, the incidence of shifts following continuity clinics, the total shift inequity, and the night shift inequity. All scheduling rules were successfully incorporated. Average schedule creation time fell from 22-28 hours to 4-6 hours per month, and 3 of 4 metrics of schedule quality significantly improved. For the implementation year, the incidence of challenging shift transitions decreased from 83 to 14 (P < .01); the incidence of postclinic shifts decreased from 72 to 32 (P < .01); and the SD of night shifts dropped by 55.6% (P < .01). This automated shift scheduling system improves the current manual scheduling process, reducing time spent and improving schedule quality. Embracing such automated tools can benefit residency programs with shift-based scheduling needs.

  13. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    NASA Technical Reports Server (NTRS)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.

  14. Car painting process scheduling with harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

Automotive painting programs use robots to paint the car body, which improves the efficiency of the production system. The production system becomes more efficient if attention is paid to the scheduling of car orders, which is done by considering the body shape of the cars to be painted. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on a machine and it must be specified which job is to be done first and how jobs are allocated to the machine to obtain a scheduled production process. The Harmony Search Algorithm is a metaheuristic optimization algorithm based on music; it is inspired by the way musicians search for perfect harmony, and this search for musical harmony is analogous to searching for the optimum in an optimization process. Based on the tests that have been performed, the optimal car sequence with the minimum makespan value was obtained.
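
    A minimal harmony-search sketch for a permutation flow-shop makespan objective is given below; the harmony memory size, HMCR/PAR parameters and processing times are illustrative assumptions, not the implementation used in the paper.

        import random

        def makespan(seq, proc):
            # proc[job][stage]: processing time of a job at each painting stage (flow shop)
            finish = [0.0] * len(proc[0])
            for j in seq:
                for k in range(len(finish)):
                    ready = finish[k - 1] if k else 0.0
                    finish[k] = max(finish[k], ready) + proc[j][k]
            return finish[-1]

        def harmony_search(proc, memory_size=10, iters=2000, hmcr=0.9, par=0.3, seed=1):
            rng = random.Random(seed)
            n = len(proc)
            memory = [rng.sample(range(n), n) for _ in range(memory_size)]
            for _ in range(iters):
                if rng.random() < hmcr:                # improvise from an existing harmony
                    new = memory[rng.randrange(memory_size)][:]
                    if rng.random() < par:             # pitch adjustment: swap two jobs
                        i, j = rng.sample(range(n), 2)
                        new[i], new[j] = new[j], new[i]
                else:                                  # otherwise use a random permutation
                    new = rng.sample(range(n), n)
                worst = max(range(memory_size), key=lambda m: makespan(memory[m], proc))
                if makespan(new, proc) < makespan(memory[worst], proc):
                    memory[worst] = new                # replace the worst harmony in memory
            best = min(memory, key=lambda s: makespan(s, proc))
            return best, makespan(best, proc)

        proc = [[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]]   # 4 cars, 3 painting stages (assumed)
        print(harmony_search(proc))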

  15. Multiresource allocation and scheduling for periodic soft real-time applications

    NASA Astrophysics Data System (ADS)

    Gopalan, Kartik; Chiueh, Tzi-cker

    2001-12-01

    Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time application. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of the soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of overall timing guarantees is ultimately determined by the properties of individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.

  16. Conceptual design for the National Water Information System

    USGS Publications Warehouse

    Edwards, Melvin D.; Putnam, Arthur L.; Hutchison, Norman E.

    1986-01-01

    The Water Resources Division of the U.S. Geological Survey began the design and development of a National Water Information System (NWIS) in 1983. The NWIS will replace and integrate the existing data systems of the National Water Data Storage and Retrieval System, National Water Data Exchange, National Water-Use Information Program, and Water Resources Scientific Information Center. The NWIS has been designed as an interactive, distributed data system. The software system has been designed in a modular manner which integrates existing software functions and allows multiple use of software modules. The data base has been designed as a relational data model that allows integrated storage of the existing water data, water-use data, and water-data indexing information by using a common relational data base management system. The NWIS will be operated on microcomputers located in each of the Water Resources Division's District offices and many of its State, subdistrict, and field offices. The microcomputers will be linked together through a national telecommunication network maintained by the U. S. Geological Survey. The NWIS is scheduled to be placed in operation in 1990.

  17. A bi-objective integer programming model for partly-restricted flight departure scheduling

    PubMed Central

    Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

The usual studies on the air traffic departure scheduling problem (DSP) mainly deal with an independent airport in which the departure traffic is not affected by surrounding airports; this, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During the peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the current problem. It is demonstrated from the case study of Tianjin Binhai International Airport in China that the proposed method can noticeably improve the operational efficiency, while still realizing superior equity and regularity among restricted flows. PMID:29715299

  18. A bi-objective integer programming model for partly-restricted flight departure scheduling.

    PubMed

    Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

The usual studies on the air traffic departure scheduling problem (DSP) mainly deal with an independent airport in which the departure traffic is not affected by surrounding airports; this, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During the peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the current problem. It is demonstrated from the case study of Tianjin Binhai International Airport in China that the proposed method can noticeably improve the operational efficiency, while still realizing superior equity and regularity among restricted flows.
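
    The generic shape of an adapted tabu search for a departure-sequencing problem can be sketched as follows; the single-runway delay objective, swap neighbourhood and tabu tenure are placeholders for illustration, not the authors' bi-objective model.

        import itertools

        def total_delay(seq, release, sep=2):
            # toy objective: total delay when flights depart in 'seq' with a fixed runway separation
            t, delay = -sep, 0
            for f in seq:
                t = max(t + sep, release[f])
                delay += t - release[f]
            return delay

        def tabu_search(release, iters=200, tenure=7):
            seq = sorted(release, key=release.get)          # start from earliest-release-first
            best, best_cost = seq[:], total_delay(seq, release)
            tabu = {}                                       # swap move -> iteration until which it is forbidden
            for it in range(iters):
                candidates = []
                for i, j in itertools.combinations(range(len(seq)), 2):
                    if tabu.get((i, j), -1) >= it:
                        continue
                    cand = seq[:]
                    cand[i], cand[j] = cand[j], cand[i]
                    candidates.append(((i, j), cand, total_delay(cand, release)))
                if not candidates:
                    break
                move, seq, cost = min(candidates, key=lambda c: c[2])
                tabu[move] = it + tenure                    # forbid repeating this swap for a while
                if cost < best_cost:
                    best, best_cost = seq[:], cost
            return best, best_cost

        release = {"F1": 0, "F2": 1, "F3": 1, "F4": 6, "F5": 7}   # earliest departure times (assumed)
        print(tabu_search(release))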

  19. ATD-2 Surface Scheduling and Metering Concept

    NASA Technical Reports Server (NTRS)

    Coppenbarger, Richard A.; Jung, Yoon Chul; Capps, Richard Alan; Engelland, Shawn A.

    2017-01-01

This presentation describes the concept of ATD-2 tactical surface scheduling and metering. The concept is composed of several elements, including data exchange and integration; surface modeling; surface scheduling; and surface metering. The presentation explains each of these elements. Surface metering is implemented to balance demand and capacity. When surface metering is on, target times from the surface scheduler are converted to advisories for throttling demand. Through the scheduling process, flights with CTOTs will not get added metering delay (avoiding the potential for 'double delay'), and carriers can designate certain flights as exempt from metering holds. Demand throttling in Phase 1 at CLT is performed through advisories sent to ramp controllers for pushback instructions to the flight deck: push now, or hold for an advised period of time (in minutes). Principles of surface metering can be more generally applied to other airports in the NAS to throttle demand via spot-release times (TMATs). The concept places a strong focus on optimal use of airport resources; its flexibility enables stakeholders to vary the amount of delay they would like transferred to the gate, and it addresses practical aspects of executing surface metering in a turbulent real-world environment. The algorithms are designed for both short-term demand/capacity imbalances (banks) and sustained metering situations, and they leverage automation to enable a surface metering capability without requiring additional positions. The concept represents a first step in Tactical/Strategic fusion and provides longer look-ahead calculations to enable analysis of potential strategic surface metering usage.

  20. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems; and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.

  1. High-throughput bioinformatics with the Cyrille2 pipeline system

    PubMed Central

    Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ

    2008-01-01

    Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web based, graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution, and; 3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines. PMID:18269742

  2. 76 FR 48049 - Airworthiness Directives; Lockheed Martin Corporation/Lockheed Martin Aeronautics Company Model L...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-08

    ... neither imminently approaching nor had exceeded the manufacturer's original fatigue design life goal. In... scheduling time. (k) For all airplanes: Where Lockheed Document Number LG92ER0060, ``L-1011-385 Series... Corporation/Lockheed Martin Aeronautics Company Model L-1011 Series Airplanes AGENCY: Federal Aviation...

  3. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2016-01-01

    Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.

  4. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler was used to transform programs in a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.

  5. Modeling Tumor Clonal Evolution for Drug Combinations Design

    PubMed Central

    Zhao, Boyang; Hemann, Michael T.; Lauffenburger, Douglas A.

    2016-01-01

    Cancer is a clonal evolutionary process. This presents challenges for effective therapeutic intervention, given the constant selective pressure towards drug resistance. Mathematical modeling from population genetics, evolutionary dynamics, and engineering perspectives are being increasingly employed to study tumor progression, intratumoral heterogeneity, drug resistance, and rational drug scheduling and combinations design. In this review, we discuss promising opportunities these inter-disciplinary approaches hold for advances in cancer biology and treatment. We propose that quantitative modeling perspectives can complement emerging experimental technologies to facilitate enhanced understanding of disease progression and improved capabilities for therapeutic drug regimen designs. PMID:28435907

  6. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
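
    For intuition about how the Fisher information matrix drives the choice of sampling times, the sketch below evaluates the D-criterion (log-determinant of the FIM) for a simple exponential response with additive noise and picks the best four-point schedule by enumeration. This didactic single-subject setup is an assumption and does not reproduce the FO/FOCE population FIM implementations discussed in the paper.

        import itertools
        import numpy as np

        def fim(times, theta, sigma2=0.04):
            # FIM of y = A * exp(-k * t) with independent additive noise (illustrative model)
            A, k = theta
            J = np.array([[np.exp(-k * t), -A * t * np.exp(-k * t)] for t in times])  # dy/dA, dy/dk
            return J.T @ J / sigma2

        def d_criterion(times, theta):
            sign, logdet = np.linalg.slogdet(fim(times, theta))
            return logdet if sign > 0 else -np.inf

        theta = (10.0, 0.3)                                   # assumed "true" parameter values
        candidates = np.linspace(0.5, 24.0, 16)               # candidate sampling times
        best = max(itertools.combinations(candidates, 4), key=lambda ts: d_criterion(ts, theta))
        print("best 4-point sampling schedule:", [round(float(t), 1) for t in best])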

  7. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown on a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future actions for application of these methods for system control are addressed.

  8. Design and development of cell queuing, processing, and scheduling modules for the iPOINT input-buffered ATM testbed

    NASA Astrophysics Data System (ADS)

    Duan, Haoran

    1997-12-01

    This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to build high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less- loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices, SRAM modules, and integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to the capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed are given in this dissertation.

  9. Design and Evaluation of the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.; Thipphavong, Jane; Sadovsky, Alex; Chen, Liang; Sullivan, Chris; Martin, Lynne

    2011-01-01

    This paper describes the design, development and results from a high fidelity human-in-the-loop simulation of an integrated set of trajectory-based automation tools providing precision scheduling, sequencing and controller merging and spacing functions. These integrated functions are combined into a system called the Terminal Area Precision Scheduling and Spacing (TAPSS) system. It is a strategic and tactical planning tool that provides Traffic Management Coordinators, En Route and Terminal Radar Approach Control air traffic controllers the ability to efficiently optimize the arrival capacity of a demand-impacted airport while simultaneously enabling fuel-efficient descent procedures. The TAPSS system consists of four-dimensional trajectory prediction, arrival runway balancing, aircraft separation constraint-based scheduling, traffic flow visualization and trajectory-based advisories to assist controllers in efficient metering, sequencing and spacing. The TAPSS system was evaluated and compared to today's ATC operation through extensive series of human-in-the-loop simulations for arrival flows into the Los Angeles International Airport. The test conditions included the variation of aircraft demand from a baseline of today's capacity constrained periods through 5%, 10% and 20% increases. Performance data were collected for engineering and human factor analysis and compared with similar operations both with and without the TAPSS system. The engineering data indicate operations with the TAPSS show up to a 10% increase in airport throughput during capacity constrained periods while maintaining fuel-efficient aircraft descent profiles from cruise to landing.

  10. Job Design and Ethnic Differences in Working Women’s Physical Activity

    PubMed Central

    Grzywacz, Joseph G.; Crain, A. Lauren; Martinson, Brian C.; Quandt, Sara A.

    2014-01-01

    Objective To document the role job control and schedule control play in shaping women’s physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Methods Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Results Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Conclusions Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time “created” by schedule flexibility for personal health enhancement. PMID:24034681

  11. Job design and ethnic differences in working women's physical activity.

    PubMed

    Grzywacz, Joseph G; Crain, A Lauren; Martinson, Brian C; Quandt, Sara A

    2014-01-01

    To document the role job control and schedule control play in shaping women's physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time "created" by schedule flexibility for personal health enhancement.

  12. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic that was inspired from the annealing process of recrystallization of metals. Therefore, the efficiency of SA is highly affected by the annealing schedule. As a result, in this paper, we presented an empirical work to provide a comparable annealing schedule to solve symmetric traveling salesman problems (TSP). Randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA and thus, we propose the best found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmarked problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside with benchmark solutions and simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides a good quality of solution.
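
    A compact simulated-annealing sketch for the symmetric TSP with a geometric cooling (annealing) schedule is shown below; the initial temperature, cooling factor and iteration count are illustrative assumptions rather than the schedule recommended by the study.

        import math
        import random

        def tour_length(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def anneal(dist, t0=100.0, cooling=0.995, iters=20000, seed=0):
            rng = random.Random(seed)
            n = len(dist)
            tour = list(range(n))
            rng.shuffle(tour)
            cost, temp = tour_length(tour, dist), t0
            best, best_cost = tour[:], cost
            for _ in range(iters):
                i, j = sorted(rng.sample(range(n), 2))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style segment reversal
                cand_cost = tour_length(cand, dist)
                # accept improvements always, worse moves with Boltzmann probability
                if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / temp):
                    tour, cost = cand, cand_cost
                    if cost < best_cost:
                        best, best_cost = tour[:], cost
                temp *= cooling                                         # geometric annealing schedule
            return best, best_cost

        pts = [(random.random(), random.random()) for _ in range(15)]   # random instance (assumed)
        dist = [[math.dist(p, q) for q in pts] for p in pts]
        print(anneal(dist)[1])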

  13. Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron.

    DTIC Science & Technology

    1987-06-01

Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron. Personal author(s): Thomas J. Kopf ... Because of the great number of possible scheduling alternatives, it is difficult to find an optimal solution to the scheduling problem. Additionally ... changes to the original schedule make it even more difficult to find an optimal solution. The emergence of capable microcomputers, decision support

  14. Visually Exploring Transportation Schedules.

    PubMed

    Palomo, Cesar; Guo, Zhan; Silva, Cláudio T; Freire, Juliana

    2016-01-01

    Public transportation schedules are designed by agencies to optimize service quality under multiple constraints. However, real service usually deviates from the plan. Therefore, transportation analysts need to identify, compare and explain both eventual and systemic performance issues that must be addressed so that better timetables can be created. The purely statistical tools commonly used by analysts pose many difficulties due to the large number of attributes at trip- and station-level for planned and real service. Also challenging is the need for models at multiple scales to search for patterns at different times and stations, since analysts do not know exactly where or when relevant patterns might emerge and need to compute statistical summaries for multiple attributes at different granularities. To aid in this analysis, we worked in close collaboration with a transportation expert to design TR-EX, a visual exploration tool developed to identify, inspect and compare spatio-temporal patterns for planned and real transportation service. TR-EX combines two new visual encodings inspired by Marey's Train Schedule: Trips Explorer for trip-level analysis of frequency, deviation and speed; and Stops Explorer for station-level study of delay, wait time, reliability and performance deficiencies such as bunching. To tackle overplotting and to provide a robust representation for a large numbers of trips and stops at multiple scales, the system supports variable kernel bandwidths to achieve the level of detail required by users for different tasks. We justify our design decisions based on specific analysis needs of transportation analysts. We provide anecdotal evidence of the efficacy of TR-EX through a series of case studies that explore NYC subway service, which illustrate how TR-EX can be used to confirm hypotheses and derive new insights through visual exploration.

  15. A Flexible System for Simulating Aeronautical Telecommunication Network

    NASA Technical Reports Server (NTRS)

    Maly, Kurt; Overstreet, C. M.; Andey, R.

    1998-01-01

At Old Dominion University, we have built an Aeronautical Telecommunication Network (ATN) Simulator, with NASA as the funding provider. It provides a means to evaluate the impact of modified router scheduling algorithms on network efficiency, to perform capacity studies on various network topologies, and to monitor and study various aspects of the ATN through a graphical user interface (GUI). In this paper we briefly describe the proposed ATN model and our abstraction of this model. Later we describe our simulator architecture, highlighting some of the design specifications, scheduling algorithms and user interface. At the end, we provide the results of performance studies on this simulator.

  16. A Mechanized Decision Support System for Academic Scheduling.

    DTIC Science & Technology

    1986-03-01

an operational system called software. The first step in the development phase is Design. Designers distribute software control by factoring the Data ... SUBJECT TERMS: Scheduling, Decision Support System, Software Design ... scheduling system. It will also examine software-design techniques to identify the most appropriate methodology for this problem. Chapter 3 will

  17. The SSM/PMAD automated test bed project

    NASA Technical Reports Server (NTRS)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).

  18. Conceptual Design of Simulation Models in an Early Development Phase of Lunar Spacecraft Simulator Using SMP2 Standard

    NASA Astrophysics Data System (ADS)

    Lee, Hoon Hee; Koo, Cheol Hea; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok

    2013-08-01

The conceptual study for a Korean lunar orbiter/lander prototype has been performed at the Korea Aerospace Research Institute (KARI). Across diverse space programs in European countries, a variety of simulation applications have been developed using the SMP2 (Simulation Modelling Platform) standard, which addresses portability and reuse of simulation models by various model users. KARI has not only first-hand experience of developing an SMP-compatible simulation environment but also an ongoing study to apply the SMP2 development process for simulation models to a simulator development project for lunar missions. KARI has tried to extend the coverage of the development domain based on the SMP2 standard across the whole simulation model life-cycle, from software design to its validation, through a lunar exploration project. Figure 1 shows a snapshot from a visualization tool for the simulation of lunar lander motion. In reality, a demonstrator prototype on the right-hand side of the image was made and tested in 2012. In an early phase of simulator development, prior to a kick-off start in the near future, the targeted hardware to be modelled was investigated and identified at the end of 2012. The architectural breakdown of the lunar simulator at system level was performed and the architecture with a hierarchical tree of models from the system down to parts at lower levels has been established. Finally, SMP documents such as Catalogue, Assembly, Schedule and so on were converted using an XML (eXtensible Markup Language) converter. To obtain the benefits of the suggested approaches and design mechanisms in the SMP2 standard as far as possible, object-oriented and component-based design concepts were strictly followed throughout the whole model development process.

  19. Design and Analysis of Scheduling Policies for Real-Time Computer Systems

    DTIC Science & Technology

    1992-01-01

C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads," to appear in Micro. Rel. [17] J.F. Kurose, "Performance ... Processor Triads," to appear in Micro. Rel. J.F. Kurose, "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems," to ... exponentially distributed service times and deadlines. A similar model was developed for the ED policy for a single-processor system under identical

  20. Electric power scheduling: A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.

  1. Rail Mounted Gantry Crane Scheduling Optimization in Railway Container Terminal Based on Hybrid Handling Mode

    PubMed Central

    Zhu, Xiaoning

    2014-01-01

Rail mounted gantry crane (RMGC) scheduling is important in reducing the makespan of handling operations and improving container handling efficiency. In this paper, we present an RMGC scheduling optimization model, whose objective is to determine an optimized handling sequence that minimizes RMGC idle load time over the handling tasks. An ant colony optimization algorithm is proposed to obtain near-optimal solutions. Computational experiments on a specific railway container terminal are conducted to illustrate the proposed model and solution algorithm. The results show that the proposed method is effective in reducing the idle load time of the RMGC. PMID:25538768
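
    The construction-and-update loop of a basic ant colony optimization for minimising the total idle-load (empty-travel) time between consecutive handling tasks can be sketched as follows; the travel-time matrix and parameters are assumptions for illustration, not data from the paper.

        import random

        def aco_sequence(move_time, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.1, seed=0):
            """move_time[i][j]: idle (empty-crane) travel time from task i to task j."""
            rng = random.Random(seed)
            n = len(move_time)
            tau = [[1.0] * n for _ in range(n)]                 # pheromone trails
            best, best_cost = None, float("inf")
            for _ in range(iters):
                for _ in range(n_ants):
                    seq = [rng.randrange(n)]
                    while len(seq) < n:
                        i = seq[-1]
                        rest = [j for j in range(n) if j not in seq]
                        w = [tau[i][j] ** alpha * (1.0 / (move_time[i][j] + 1e-9)) ** beta for j in rest]
                        seq.append(rng.choices(rest, weights=w)[0])
                    cost = sum(move_time[a][b] for a, b in zip(seq, seq[1:]))
                    if cost < best_cost:
                        best, best_cost = seq, cost
                # evaporate pheromone, then reinforce the best-so-far sequence
                tau = [[(1 - rho) * t for t in row] for row in tau]
                for a, b in zip(best, best[1:]):
                    tau[a][b] += 1.0 / best_cost
            return best, best_cost

        move_time = [[0, 4, 6, 3], [4, 0, 2, 5], [6, 2, 0, 4], [3, 5, 4, 0]]   # assumed travel times
        print(aco_sequence(move_time))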

  2. A Fast-Time Simulation Tool for Analysis of Airport Arrival Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Meyn, Larry A.; Neuman, Frank

    2004-01-01

    The basic objective of arrival sequencing in air traffic control automation is to match traffic demand and airport capacity while minimizing delays. The performance of an automated arrival scheduling system, such as the Traffic Management Advisor developed by NASA for the FAA, can be studied by a fast-time simulation that does not involve running expensive and time-consuming real-time simulations. The fast-time simulation models runway configurations, the characteristics of arrival traffic, deviations from predicted arrival times, as well as the arrival sequencing and scheduling algorithm. This report reviews the development of the fast-time simulation method used originally by NASA in the design of the sequencing and scheduling algorithm for the Traffic Management Advisor. The utility of this method of simulation is demonstrated by examining the effect on delays of altering arrival schedules at a hub airport.

  3. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients, found optimal flap settings, and flap schedules. For validation, the tool was tested on a 55% scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict the coefficients of lift, drag, and pitching moment, and the lift-to-drag ratio (C(sub L), C(sub D), C(sub M), and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and for finding optimal flap schedules.

  4. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
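
    A tiny sketch of the stochastic greedy construction idea for observation scheduling under a single no-overlap time-window constraint is given below; the request list and priority scoring are invented for illustration and omit the resource and high-fidelity constraints that the paper models.

        import random

        # each request: (priority, start, end) in minutes; one instrument, no overlapping observations
        requests = [(5, 0, 10), (3, 5, 15), (8, 12, 20), (2, 18, 25), (6, 22, 30)]

        def build_schedule(reqs, rng, greediness=0.7):
            """Randomized greedy: usually take the highest-priority feasible request, sometimes a random one."""
            remaining, schedule, busy_until = list(reqs), [], -1
            while remaining:
                feasible = [r for r in remaining if r[1] >= busy_until]
                if not feasible:
                    break
                pick = max(feasible, key=lambda r: r[0]) if rng.random() < greediness else rng.choice(feasible)
                schedule.append(pick)
                busy_until = pick[2]
                remaining.remove(pick)
            return schedule, sum(r[0] for r in schedule)

        rng = random.Random(3)
        # restart the randomized construction many times and keep the highest-scoring schedule
        best = max((build_schedule(requests, rng) for _ in range(200)), key=lambda s: s[1])
        print(best)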

  5. Scheduled power tracking control of the wind-storage hybrid system based on the reinforcement learning theory

    NASA Astrophysics Data System (ADS)

    Li, Ze

    2017-09-01

To address the intermittency and uncertainty of wind power, an energy storage device and a wind generator are combined into a hybrid system to improve the controllability of the output power. A scheduled power tracking control method is proposed based on reinforcement learning theory and the Q-learning algorithm. In this method, the state space of the environment is formed from two key factors, i.e. the state of charge of the energy storage and the difference between the actual wind power and the scheduled power; the feasible action is the output power of the energy storage, and the corresponding immediate reward function is designed to reflect the rationality of the control action. By interacting with the environment and learning from the immediate reward, the optimal control strategy is gradually formed. After that, it can be applied to the scheduled power tracking control of the hybrid system. Finally, the rationality and validity of the method are verified through simulation examples.
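
    A tabular Q-learning loop in the spirit of the method described above can be sketched as follows, with a discretised state of (state of charge, power mismatch) and a small set of charge/discharge actions; the environment dynamics and reward are simplified assumptions, not the paper's model.

        import random

        ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]        # storage power (MW); positive = discharge

        def step(soc, mismatch, action, rng):
            # toy dynamics: storage output reduces the (wind - schedule) mismatch (assumed model)
            soc = min(max(soc - 0.1 * action, 0.0), 1.0)
            mismatch = mismatch - action + rng.uniform(-0.2, 0.2)
            reward = -abs(mismatch) - (0.5 if soc in (0.0, 1.0) else 0.0)   # penalise saturation
            return soc, mismatch, reward

        def discretise(soc, mismatch):
            return (round(soc, 1), max(-2.0, min(2.0, round(mismatch * 2) / 2)))

        def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
            rng, q = random.Random(seed), {}
            n = len(ACTIONS)
            for _ in range(episodes):
                soc, mismatch = 0.5, rng.uniform(-1.5, 1.5)
                for _ in range(24):                                          # hourly decisions for one day
                    s = discretise(soc, mismatch)
                    a = rng.randrange(n) if rng.random() < eps else max(range(n), key=lambda i: q.get((s, i), 0.0))
                    soc, mismatch, r = step(soc, mismatch, ACTIONS[a], rng)
                    s2 = discretise(soc, mismatch)
                    target = r + gamma * max(q.get((s2, i), 0.0) for i in range(n))
                    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            return q

        q = train()
        print(len(q), "state-action values learned")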

  6. A Model for the Stop Planning and Timetables of Customized Buses

    PubMed Central

    Yang, Yang

    2017-01-01

    Customized buses (CBs) are a new mode of public transportation and an important part of diversified public transportation, providing advanced, attractive and user-led service. The operational activity of a CB is planned by aggregating space–time demand and similar passenger travel demands. Based on an analysis of domestic and international research and the current development of CBs in China and considering passenger travel data, this paper studies the problems associated with the operation of CBs, such as stop selection, line planning and timetables, and establishes a model for the stop planning and timetables of CBs. The improved immune genetic algorithm (IIGA) is used to solve the model with regard to the following: 1) multiple population design and transport operator design, 2) memory library design, 3) mutation probability design and crossover probability design, and 4) the fitness calculation of the gene segment. Finally, a real-world example in Beijing is calculated, and the model and solution results are verified and analyzed. The results illustrate that the IIGA solves the model and is superior to the basic genetic algorithm in terms of the number of passengers, travel time, average passenger travel time, average passenger arrival time ahead of schedule and total line revenue. This study covers the key issues involving operational systems of CBs, combines theoretical research and empirical analysis, and provides a theoretical foundation for the planning and operation of CBs. PMID:28056041

  7. A Model for the Stop Planning and Timetables of Customized Buses.

    PubMed

    Ma, Jihui; Zhao, Yanqing; Yang, Yang; Liu, Tao; Guan, Wei; Wang, Jiao; Song, Cuiying

    2017-01-01

    Customized buses (CBs) are a new mode of public transportation and an important part of diversified public transportation, providing advanced, attractive and user-led service. The operational activity of a CB is planned by aggregating space-time demand and similar passenger travel demands. Based on an analysis of domestic and international research and the current development of CBs in China and considering passenger travel data, this paper studies the problems associated with the operation of CBs, such as stop selection, line planning and timetables, and establishes a model for the stop planning and timetables of CBs. The improved immune genetic algorithm (IIGA) is used to solve the model with regard to the following: 1) multiple population design and transport operator design, 2) memory library design, 3) mutation probability design and crossover probability design, and 4) the fitness calculation of the gene segment. Finally, a real-world example in Beijing is calculated, and the model and solution results are verified and analyzed. The results illustrate that the IIGA solves the model and is superior to the basic genetic algorithm in terms of the number of passengers, travel time, average passenger travel time, average passenger arrival time ahead of schedule and total line revenue. This study covers the key issues involving operational systems of CBs, combines theoretical research and empirical analysis, and provides a theoretical foundation for the planning and operation of CBs.
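    As a concrete, heavily simplified illustration of the immune-genetic machinery named in the two records above, the sketch below applies a GA with a small memory library of elite solutions and a diversity-driven (adaptive) mutation probability to a toy stop-selection problem: choose a fixed number of stops so that total passenger walking distance is minimized. The data, population settings, and adaptation rule are assumptions; the multiple-population and transport-operator elements of the IIGA are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: candidate stop locations and passenger origins along a corridor (assumed).
stops = rng.uniform(0, 10, size=20)
passengers = rng.uniform(0, 10, size=200)
N_SELECT, POP, GENS = 5, 40, 100

def fitness(chrom):
    """Negative total walking distance to the nearest selected stop (higher is better)."""
    chosen = stops[chrom.astype(bool)]
    return -np.abs(passengers[:, None] - chosen[None, :]).min(axis=1).sum()

def random_chrom():
    c = np.zeros(len(stops), dtype=int)
    c[rng.choice(len(stops), N_SELECT, replace=False)] = 1
    return c

def repair(c):
    """Keep exactly N_SELECT stops selected after crossover and mutation."""
    on = np.flatnonzero(c)
    if len(on) > N_SELECT:
        c[rng.choice(on, len(on) - N_SELECT, replace=False)] = 0
    while c.sum() < N_SELECT:
        c[rng.choice(np.flatnonzero(c == 0))] = 1
    return c

pop = [random_chrom() for _ in range(POP)]
for g in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    order = np.argsort(scores)[::-1]
    memory = [pop[i].copy() for i in order[:3]]           # memory library of elite antibodies
    diversity = np.mean([np.mean(a != b) for a in pop for b in pop])
    p_mut = 0.02 + 0.2 * (1 - diversity)                  # adaptive mutation probability
    nxt = [m.copy() for m in memory]                      # elitism via the memory library
    while len(nxt) < POP:
        a, b = (pop[i] for i in rng.choice(order[:POP // 2], 2, replace=False))
        cut = rng.integers(1, len(stops))
        child = np.concatenate([a[:cut], b[cut:]])        # single-point crossover
        mask = rng.random(len(stops)) < p_mut
        child[mask] = 1 - child[mask]                     # bit-flip mutation
        nxt.append(repair(child))
    pop = nxt

best = max(pop, key=fitness)
print("selected stops:", np.sort(stops[best.astype(bool)]).round(2))
```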

  8. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighing the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
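    To make the multi-model idea concrete, the hypothetical sketch below picks a single output-feedback gain by minimizing a weighted sum of quadratic costs evaluated at several linearized operating points, rejecting any gain that fails to stabilize one of them. The scalar models, weights, and cost are invented, and the sketch captures only the cost-weighting step, not the MCC embedding or the variation of the gain across the operating range.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed scalar linearizations x' = a*x + b*u, y = x, at three operating points.
operating_points = [(-1.0, 1.0), (-0.2, 0.8), (0.5, 1.2)]   # (a, b) pairs, hypothetical
weights = [1.0, 1.0, 2.0]                                    # performance weight per point

def multi_model_cost(k):
    """Weighted sum of analytic costs J_i = integral of (x^2 + u^2) dt under u = -k*y,
    with x(0) = 1; returns a large penalty if any operating point is left unstable."""
    total = 0.0
    for (a, b), w in zip(operating_points, weights):
        a_cl = a - b * k
        if a_cl >= 0:
            return 1e9                          # the gain must stabilize every design point
        total += w * (1 + k * k) / (-2 * a_cl)  # closed-form integral for scalar dynamics
    return total

res = minimize_scalar(multi_model_cost, bounds=(0.0, 20.0), method="bounded")
print(f"multi-model gain k = {res.x:.3f}, weighted cost = {res.fun:.3f}")
```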

  9. Applications of Evolutionary Technology to Manufacturing and Logistics Systems: State-of-the-Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems arising from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.

  10. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

    Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.

  11. Correction of contaminated yaw rate signal and estimation of sensor bias for an electric vehicle under normal driving conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Guoguang; Yu, Zitian; Wang, Junmin

    2017-03-01

    Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First of all, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and then a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties on the tire cornering stiffness coefficients is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
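    The core signal-correction idea can be sketched without the gain scheduling or robustness machinery: augment the 2DOF bicycle model with a constant bias state, measure the biased yaw rate, and let a Luenberger-style observer reconstruct both the true yaw rate and the bias. The vehicle parameters, observer gain, and steering input below are assumed values, and the gain is held fixed rather than scheduled against the cornering-stiffness uncertainty.

```python
import numpy as np

# Assumed 2DOF bicycle-model parameters (illustrative values, not from the paper).
m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4       # mass, yaw inertia, CG-to-axle distances
Cf, Cr, vx = 80000.0, 80000.0, 20.0          # cornering stiffnesses, forward speed

# Lateral/yaw dynamics, x = [vy, r]:  x' = A x + B delta.
A = np.array([[-(Cf + Cr) / (m * vx), -vx - (a * Cf - b * Cr) / (m * vx)],
              [-(a * Cf - b * Cr) / (Iz * vx), -(a * a * Cf + b * b * Cr) / (Iz * vx)]])
B = np.array([[Cf / m], [a * Cf / Iz]])

# Augment with a constant bias state: xa = [vy, r, bias], measured y = r + bias.
Aa = np.zeros((3, 3)); Aa[:2, :2] = A
Ba = np.vstack([B, [[0.0]]])
Ca = np.array([[0.0, 1.0, 1.0]])
L = np.array([[0.5], [2.0], [1.0]])          # fixed observer gain (the paper schedules it)

dt, T = 0.001, 10.0
x = np.array([[0.0], [0.0], [0.05]])         # true state with a 0.05 rad/s sensor bias
xh = np.zeros((3, 1))                        # observer estimate
for k in range(int(T / dt)):
    delta = 0.02 * np.sin(np.pi * k * dt)    # small sinusoidal steering input
    y = Ca @ x                               # contaminated yaw-rate measurement
    xh = xh + dt * (Aa @ xh + Ba * delta + L @ (y - Ca @ xh))
    x = x + dt * (Aa @ x + Ba * delta)       # the bias state itself stays constant

print(f"estimated sensor bias: {xh[2, 0]:+.4f} rad/s (true value +0.0500)")
```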

  12. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.

  13. Web Audio/Video Streaming Tool

    NASA Technical Reports Server (NTRS)

    Guruvadoo, Eranna K.

    2003-01-01

    In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.

  14. Resilient filtering for time-varying stochastic coupling networks under the event-triggering scheduling

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Liang, Jinling; Dobaie, Abdullah M.

    2018-07-01

    The resilient filtering problem is considered for a class of time-varying networks with stochastic coupling strengths. An event-triggered strategy is adopted to save the network resources by scheduling the signal transmission from the sensors to the filters based on certain prescribed rules. Moreover, the filter parameters to be designed are subject to gain perturbations. The primary aim of the addressed problem is to determine a resilient filter that ensures an acceptable filtering performance for the considered network with event-triggering scheduling. To handle such an issue, an upper bound on the estimation error variance is established for each node according to the stochastic analysis. Subsequently, the resilient filter is designed by locally minimizing the derived upper bound at each iteration. Moreover, rigorous analysis shows the monotonicity of the minimal upper bound regarding the triggering threshold. Finally, a simulation example is presented to show effectiveness of the established filter scheme.
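    The event-triggered scheduling rule at the heart of the scheme above can be sketched independently of the filter design: a sensor transmits to the filter only when its measurement has drifted from the last transmitted value by more than a threshold, and the filter holds the last received value in between. The threshold, signal model, and noise level below are assumed; consistent with the monotonicity result mentioned in the abstract, raising the threshold reduces traffic at the price of a coarser held signal.

```python
import numpy as np

rng = np.random.default_rng(2)

THRESH = 0.3           # triggering threshold (assumed); larger means fewer transmissions
n_steps = 500

x = 0.0
y_last_sent = None
transmissions = 0
held = np.zeros(n_steps)                     # what the filter side actually sees

for k in range(n_steps):
    x = 0.98 * x + 0.1 * rng.standard_normal()    # toy time-varying node state
    y = x + 0.05 * rng.standard_normal()          # noisy local measurement

    # Event-triggering rule: send only if the measurement moved enough since the last send.
    if y_last_sent is None or abs(y - y_last_sent) > THRESH:
        y_last_sent = y
        transmissions += 1
    held[k] = y_last_sent                          # the filter uses the held value

print(f"transmitted {transmissions}/{n_steps} samples "
      f"({100 * transmissions / n_steps:.1f}% of the possible network traffic)")
```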

  15. Development of An Intelligent Flight Propulsion Control System

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Rysdyk, R. T.; Leonhardt, B. K.

    1999-01-01

    The initial design and demonstration of an Intelligent Flight Propulsion and Control System (IFPCS) is documented. The design is based on the implementation of a nonlinear adaptive flight control architecture. This initial design of the IFPCS enhances flight safety by using propulsion sources to provide redundancy in flight control. The IFPCS enhances the conventional gain-scheduled approach in significant ways: (1) the IFPCS provides a backup flight control system that results in consistent responses over a wide range of unanticipated failures; (2) the IFPCS is applicable to a variety of aircraft models without redesign; and (3) it significantly reduces the laborious research and design necessary in a gain-scheduled approach. The control augmentation is detailed within an approximate Input-Output Linearization setting. Propulsion alone provides only two control inputs, symmetric and differential thrust. Earlier Propulsion Control Augmentation (PCA) work performed by NASA provided for a trajectory controller with pilot command input of glidepath and heading. This work is aimed at demonstrating the flexibility of the IFPCS in providing consistency in flying qualities under a variety of failure scenarios. This report documents the initial design phase, where propulsion only is used. Results confirm that the engine dynamics and associated hard nonlinearities result in poor handling qualities at best. However, as demonstrated in simulation, the IFPCS is capable of results similar to the gain-scheduled designs of the NASA PCA work. The IFPCS design uses crude estimates of aircraft behaviour. The adaptive control architecture demonstrates robust stability and provides robust performance. In this work, robust stability means that all states, errors, and adaptive parameters remain bounded under a wide class of uncertainties and input and output disturbances. Robust performance is measured by the quality of the tracking. The results demonstrate the flexibility of the IFPCS architecture and its ability to provide robust performance under a broad range of uncertainty. Robust stability is proved using Lyapunov-like analysis. Future development of the IFPCS will include integration of conventional control surfaces with the use of propulsion augmentation, and utilization of available lift and drag devices, to demonstrate adaptive control capability under a greater variety of failure scenarios. Further work will specifically address the effects of actuator saturation.

  16. Defense Acquisitions: Assessments of Selected Weapon Programs

    DTIC Science & Technology

    2012-03-01

    knowledge-based practices. As a result, most of these programs will carry technology, design, and production risks into subsequent phases of the ... acquisition process that could result in cost growth or schedule delays. GAO also assessed the implementation of selected acquisition reforms and found ...

  17. Design of automation tools for management of descent traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Nedell, William

    1988-01-01

    The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational evaluations at an en route center.

  18. Longitudinal flight control of a civil aircraft with satisfaction of handling qualities

    NASA Astrophysics Data System (ADS)

    Saussie, David Alexandre

    2010-03-01

    Fulfilling handling qualities remains a challenging problem during flight control design. These criteria, of differing natures, are derived from wide experience based upon flight tests and data analysis, and they have to be considered if one expects good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet aircraft, namely the Challenger 604. The first step of our work consists in compiling the most commonly used handling qualities in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlighted dominant criteria that, once satisfied, ensure that the other ones are satisfied too. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then considered to handle the problem. Initially developed for robustness studies, they are integrated into various algorithms for controller synthesis. Incidentally, this fixed-architecture problem is similar to the static output feedback stabilization problem and to reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem. The controller is then scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is put aside while conserving only the same output signals. The main idea is to use H-infinity synthesis to obtain an initial controller that satisfies the handling qualities, thanks to reference model pairing, and is robust to mass and center-of-gravity variations. Using robust modal control (Magni, 2002), we are able to substantially reduce the controller order and to structure it in order to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to the scheduling parameters. Two different paths are used to solve the same problem; each one exhibits its own advantages and disadvantages.

  19. Thermal Optimization of an On-Orbit Long Duration Cryogenic Propellant Depot

    NASA Technical Reports Server (NTRS)

    Honour, Ryan; Kwas, Robert; O'Neil, Gary; Kutter, Gary

    2012-01-01

    A Cryogenic Propellant Depot (CPD) operating in Low Earth Orbit (LEO) could provide many near term benefits to NASA's space exploration efforts. These benefits include elongation/extension of spacecraft missions and reduction of launch vehicle up-mass requirements. Some of the challenges include controlling cryogenic propellant evaporation and managing the high costs and long schedules associated with the development of new spacecraft hardware. This paper describes a conceptual CPD design that is thermally optimized to achieve extremely low propellant boil-off rates. The CPD design is based on existing launch vehicle architecture, and its thermal optimization is achieved using current passive thermal control technology. Results from an integrated thermal model are presented showing that this conceptual CPD design can achieve propellant boil-off rates well under 0.05% per day, even when subjected to the LEO thermal environment.

  20. Thermal Optimization and Assessment of a Long Duration Cryogenic Propellant Depot

    NASA Technical Reports Server (NTRS)

    Honour, Ryan; Kwas, Robert; O'Neil, Gary; Kutter, Bernard

    2012-01-01

    A Cryogenic Propellant Depot (CPD) operating in Low Earth Orbit (LEO) could provide many near term benefits to NASA space exploration efforts. These benefits include elongation/extension of spacecraft missions and reduction of launch vehicle up-mass requirements. Some of the challenges include controlling cryogenic propellant evaporation and managing the high costs and long schedules associated with new spacecraft hardware development. This paper describes a conceptual CPD design that is thermally optimized to achieve extremely low propellant boil-off rates. The CPD design is based on existing launch vehicle architecture, and its thermal optimization is achieved using current passive thermal control technology. Results from an integrated thermal model are presented showing that this conceptual CPD design can achieve propellant boil-off rates well under 0.05% per day, even when subjected to the LEO thermal environment.

  1. Conception of Self-Construction Production Scheduling System

    NASA Astrophysics Data System (ADS)

    Xue, Hai; Zhang, Xuerui; Shimizu, Yasuhiro; Fujimura, Shigeru

    With the high-speed innovation of information technology, many production scheduling systems have been developed. However, a lot of customization according to the individual production environment is required, and a large investment for development and maintenance is then indispensable. Therefore, the direction in which scheduling systems are constructed should now be changed. The final objective of this research is to develop a system that automatically extracts the scheduling technique through daily production scheduling work, so that the investment is reduced. This extraction mechanism should be applicable to various production processes for interoperability. Using the master information extracted by the system, production scheduling operators can be supported in carrying out the production scheduling work easily and accurately, without any restriction on scheduling operations. By installing this extraction mechanism, it is easy to introduce a scheduling system without a lot of expense for customization. In this paper, a model for expressing a scheduling problem is first proposed. Then the guideline for extracting the scheduling information and using the extracted information is presented, and some applied functions based on it are also proposed.

  2. From Science To Design: Systems Engineering For The Lsst

    NASA Astrophysics Data System (ADS)

    Claver, Chuck F.; Axelrod, T.; Fouts, K.; Kantor, J.; Nordby, M.; Sebag, J.; LSST Collaboration

    2009-01-01

    The LSST is a universal-purpose survey telescope that will address scores of scientific missions. To help the technical teams converge on a specific engineering design, the LSST Science Requirements Document (SRD) selects four stressing principal scientific missions: 1) constraining dark matter and dark energy; 2) taking an inventory of the Solar System; 3) exploring the transient optical sky; and 4) mapping the Milky Way. From these four missions the SRD specifies the requirements needed for single images and for the full 10-year survey that enables a wide range of science beyond the four principal missions. Through optical design and analysis, operations simulation, and throughput modeling, the systems engineering effort in the LSST has largely focused on taking the SRD specifications and deriving system functional requirements that define the system design. A Model Based Systems Engineering approach with SysML is used to manage the flow-down of requirements from science to system function to sub-system. The rigor of requirements flow and management assists the LSST in keeping the overall scope, and hence budget and schedule, under control.

  3. Improving Hospital-wide Patient Scheduling Decisions by Clinical Pathway Mining.

    PubMed

    Gartner, Daniel; Arnolds, Ines V; Nickel, Stefan

    2015-01-01

    Recent research has highlighted the need for solving hospital-wide patient scheduling problems. In patient scheduling, patient activities have to be scheduled on scarce hospital resources such that temporal relations between activities (e.g. for recovery times) are ensured. Common objectives are, among others, the minimization of the length of stay (LOS). In this paper, we consider a hospital-wide patient scheduling problem with LOS minimization based on uncertain clinical pathways. We approach the problem in three stages: first, we learn the most likely clinical pathways using a sequential pattern mining approach; second, we provide a mathematical model for patient scheduling; and finally, we combine the two approaches. In an experimental study carried out using real-world data, we show that our approach outperforms baseline approaches on two metrics.

  4. Implications of Measurement Assay Type in Design of HIV Experiments.

    PubMed

    Cannon, LaMont; Jagarapu, Aditya; Vargas-Garcia, Cesar A; Piovoso, Michael J; Zurakowski, Ryan

    2017-12-01

    Time series measurements of circular viral episome (2-LTR) concentrations enable indirect quantification of persistent low-level Human Immunodeficiency Virus (HIV) replication in patients on Integrase-Inhibitor intensified Combined Antiretroviral Therapy (cART). In order to determine the magnitude of these low-level infection events, blood has to be drawn from patients at a frequency and volume that is strictly regulated by the Institutional Review Board (IRB). Once the blood is drawn, the 2-LTR concentration is determined by quantifying the amount of HIV DNA present in the sample via a PCR (Polymerase Chain Reaction) assay. Real-time quantitative Polymerase Chain Reaction (qPCR) is a widely used method of performing PCR; however, a newer droplet digital Polymerase Chain Reaction (ddPCR) method has been shown to provide more accurate quantification of DNA. Using a validated model of HIV viral replication, this paper demonstrates the importance of considering DNA quantification assay type when optimizing experiment design conditions. Experiments are optimized using a Genetic Algorithm (GA) to locate a family of suboptimal sample schedules which yield the highest fitness. Fitness is defined as the expected information gained in the experiment, measured by the Kullback-Leibler Divergence (KLD) between the prior and posterior distributions of the model parameters. We compare the information content of the optimized schedules to uniform schedules as well as two clinical schedules implemented by researchers at UCSF and the University of Melbourne. This work shows that there is a significantly greater information gain in experiments using a ddPCR assay vs. a qPCR assay, and that certain experiment design considerations should be taken into account when using either assay.
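    The fitness function named above, the expected KLD between prior and posterior, can be sketched with a heavily simplified stand-in for the 2-LTR dynamics: a single exponential-decay parameter with a Gaussian prior, measurements corrupted by assay-dependent noise, and a posterior variance obtained from the Fisher information of a candidate schedule. All model values, noise levels, and schedules below are assumptions chosen only to show how assay type and sample placement change the information gain.

```python
import numpy as np

# Simplified stand-in for the 2-LTR dynamics: y(t) = A * exp(-k * t), with additive
# Gaussian assay noise whose standard deviation depends on the assay type.
A_TRUE = 100.0
PRIOR_MEAN, PRIOR_VAR = 0.10, 0.02 ** 2      # Gaussian prior on the decay rate k [1/day]
SIGMA = {"qPCR": 15.0, "ddPCR": 5.0}         # assumed assay noise levels (copies/sample)

def kld_gaussian(m0, v0, m1, v1):
    """KL divergence between the posterior N(m1, v1) and the prior N(m0, v0)."""
    return 0.5 * (v1 / v0 + (m1 - m0) ** 2 / v0 - 1.0 + np.log(v0 / v1))

def expected_information_gain(times, sigma):
    """Approximate information gain of a schedule via the Fisher information of k."""
    times = np.asarray(times, dtype=float)
    dy_dk = -A_TRUE * times * np.exp(-PRIOR_MEAN * times)   # sensitivity of y(t) to k
    fisher = np.sum(dy_dk ** 2) / sigma ** 2
    post_var = 1.0 / (1.0 / PRIOR_VAR + fisher)
    # With the posterior mean taken equal to the prior mean in expectation,
    # the KLD reduces to the variance terms; this is the schedule-design fitness.
    return kld_gaussian(PRIOR_MEAN, PRIOR_VAR, PRIOR_MEAN, post_var)

uniform = np.linspace(1, 56, 8)                       # 8 draws spread evenly over 8 weeks
front_loaded = np.array([1, 2, 3, 5, 8, 13, 21, 34])  # 8 draws concentrated early

for assay in ("qPCR", "ddPCR"):
    for name, sched in (("uniform", uniform), ("front-loaded", front_loaded)):
        gain = expected_information_gain(sched, SIGMA[assay])
        print(f"{assay:5s} {name:12s} expected gain = {gain:.2f} nats")
```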

  5. Computer code for analyzing the performance of aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.

    1985-05-01

    A code called Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides the ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting the experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.

  6. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize the transmission time, minimize the delay, and/or maximize the network throughput. The proposed framework requires (1) a client-server architecture to support interactive, batch, web, and distributed analysis and planning applications for the relay network analysis scheme; (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, along with spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints; and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
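    Item (3) above, casting time-varying link capabilities into a standard constrained optimization problem, can be illustrated with a toy relay scenario solved as a linear program: maximize the data returned across a handful of passes subject to per-pass link capacity and the total volume stored on the spacecraft. The pass capacities and data volume below are invented numbers, and the sketch ignores the priorities, power, and thermal constraints mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Toy relay scenario: one lander returns data to Earth through orbiter passes.
# Decision variables x[i] = data volume (Mb) sent on pass i; capacities are assumed.
capacity = np.array([120.0, 80.0, 150.0, 60.0])   # per-pass link capacity (Mb), time-varying
onboard_data = 300.0                              # total data available on the lander (Mb)

# Maximize total returned data  <=>  minimize -sum(x).
c = -np.ones(4)
A_ub = np.vstack([np.eye(4), np.ones((1, 4))])    # per-pass capacity + onboard-volume limits
b_ub = np.concatenate([capacity, [onboard_data]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print("data scheduled per pass (Mb):", res.x.round(1), "total:", round(-res.fun, 1))
```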

  7. Separation Assurance and Scheduling Coordination in the Arrival Environment

    NASA Technical Reports Server (NTRS)

    Aweiss, Arwa S.; Cone, Andrew C.; Holladay, Joshua J.; Munoz, Epifanio; Lewis, Timothy A.

    2016-01-01

    Separation assurance (SA) automation has been proposed as either a ground-based or airborne paradigm. The arrival environment is complex because aircraft are being sequenced and spaced to the arrival fix. This paper examines the effect of the allocation of the SA and scheduling functions on the performance of the system. Two coordination configurations between an SA and an arrival management system are tested using both ground and airborne implementations. All configurations have a conflict detection and resolution (CD&R) system and either an integrated or separated scheduler. Performance metrics are presented for the ground and airborne systems based on arrival traffic headed to Dallas/ Fort Worth International airport. The total delay, time-spacing conformance, and schedule conformance are used to measure efficiency. The goal of the analysis is to use the metrics to identify performance differences between the configurations that are based on different function allocations. A surveillance range limitation of 100 nmi and a time delay for sharing updated trajectory intent of 30 seconds were implemented for the airborne system. Overall, these results indicate that the surveillance range and the sharing of trajectories and aircraft schedules are important factors in determining the efficiency of an airborne arrival management system. These parameters are not relevant to the ground-based system as modeled for this study because it has instantaneous access to all aircraft trajectories and intent. Creating a schedule external to the CD&R and the scheduling conformance system was seen to reduce total delays for the airborne system, and had a minor effect on the ground-based system. The effect of an external scheduler on other metrics was mixed.

  8. Three hybridization models based on local search scheme for job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general schema of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may have already been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Taboo Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by Taboo Search alone.

  9. Test Scheduling for Core-Based SOCs Using Genetic Algorithm Based Heuristic Approach

    NASA Astrophysics Data System (ADS)

    Giri, Chandan; Sarkar, Soumojit; Chattopadhyay, Santanu

    This paper presents a Genetic Algorithm (GA) based solution to co-optimize test scheduling and wrapper design for core-based SOCs. Core testing solutions are generated as a set of wrapper configurations, represented as rectangles with width equal to the number of TAM (Test Access Mechanism) channels and height equal to the corresponding testing time. A locally optimal best-fit heuristic based bin-packing algorithm has been used to determine the placement of rectangles, minimizing the overall test time, whereas the GA has been utilized to generate the sequence of rectangles to be considered for placement. Experimental results on the ITC'02 benchmark SOCs show that the proposed method provides better solutions compared to recent works reported in the literature.
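    The rectangle representation lends itself to a compact sketch: each core needs a given number of TAM channels for a given time, cores are placed greedily on the channels that free up earliest, and the placement order is what the GA optimizes. In the hypothetical sketch below, the GA is replaced by a plain random-permutation search, channel contiguity is ignored, and the core data are invented.

```python
import random

random.seed(0)

TAM_WIDTH = 16                     # total TAM channels (assumed)
# (channels required, test time) per core -- hypothetical wrapper configurations.
cores = [(4, 30), (8, 20), (2, 50), (6, 25), (4, 40), (8, 15), (2, 35), (6, 10)]

def makespan(order):
    """Greedy best-fit placement: give each core the channels that free up earliest."""
    finish = [0.0] * TAM_WIDTH                     # per-channel busy-until time
    for w, t in (cores[i] for i in order):
        chans = sorted(range(TAM_WIDTH), key=lambda c: finish[c])[:w]
        start = max(finish[c] for c in chans)      # core starts when all its channels are free
        for c in chans:
            finish[c] = start + t
    return max(finish)

# Stand-in for the GA: sample many random sequences and keep the best one.
best_order = min((random.sample(range(len(cores)), len(cores)) for _ in range(2000)),
                 key=makespan)
print("best sequence:", best_order, "overall test time:", makespan(best_order))
```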

  10. Develop a Model Component

    NASA Technical Reports Server (NTRS)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library component; implement library component references; implement subsystem components; develop a test script; run the test script; develop a user's guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly for a single purpose. The component I was assigned, specifically, was a fluid component, a discrete pressure switch. The switch takes a fluid pressure input, and if the pressure is greater than a designated cutoff pressure, the switch stops fluid flow.

  11. Study of entry and landing probes for exploration of Titan

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Saturn's largest moon, Titan, is a unique planetary body which is certain to yield exciting new phenomena. Current information lacks the detail needed to distinguish between a thin methane-rich atmosphere and a thick nitrogen-rich atmosphere; therefore, both the thin and thick atmospheric models were used for the study of the various Titan probe classes described in this report. The technical requirements, conceptual design, science return, schedule, cost, and mission implications of three probe classes that could be used for the exploration of Titan are defined. The three probe classes were based on a wide range of exploration mission possibilities.

  12. New Model Exhaust System Supports Testing in NASA Lewis' 10- by 10-Foot Supersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Roeder, James W., Jr.

    1998-01-01

    In early 1996, the ability to run NASA Lewis Research Center's Abe Silverstein 10- by 10- Foot Supersonic Wind Tunnel (10x10) at subsonic test section speeds was reestablished. Taking advantage of this new speed range, a subsonic research test program was scheduled for the 10x10 in the fall of 1996. However, many subsonic aircraft test models require an exhaust source to simulate main engine flow, engine bleed flows, and other phenomena. This was also true of the proposed test model, but at the time the 10x10 did not have a model exhaust capability. So, through an in-house effort over a period of only 5 months, a new model exhaust system was designed, installed, checked out, and made ready in time to support the scheduled test program.

  13. The schedule effect: can recurrent peak infections be reduced without vaccines, quarantines or school closings?

    PubMed

    Diedrichs, Danilo R; Isihara, Paul A; Buursma, Doeke D

    2014-02-01

    Using a basic seasonal SIR model with two transmission levels, we introduce mathematical evidence for the schedule effect, which asserts that major recurring peak infections can be significantly reduced by modification of the traditional school calendar. The schedule effect is observed first in simulated time histories of the infectious population. Schedules with a higher average transmission rate may nonetheless exhibit reduced peak infections. Splitting vacations changes the period of the oscillating transmission function and can confine limit cycles in the proportion-susceptible/proportion-infected phase plane. Numerical analysis of the phase plane shows the relationship between the transmission period and the maximum recurring infection peaks and the period of the response. For certain transmission periods, this response may exhibit period-doubling and chaos, leading to increased peaks. A non-monotonic infectious response is also observed in conjunction with a changing birth rate. We discuss how to take these effects into consideration to design an optimum school schedule, with particular reference to a hypothetical developing-world context. Copyright © 2013 Elsevier Inc. All rights reserved.
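    The model class described above is easy to sketch: a standard SIR system whose transmission rate switches between an in-school level and a vacation level according to a calendar function, so that moving vacation days around changes the forcing pattern without changing the total number of school days. The parameter values, calendars, and the small infection-import term below are illustrative assumptions, and the qualitative outcome depends strongly on them.

```python
def beta(day, vacation_days, b_school=0.4, b_vacation=0.25):
    """Two-level transmission rate driven by the school calendar."""
    return b_vacation if (day % 365) in vacation_days else b_school

def recurring_peak(vacation_days, years=10, gamma=1 / 7, dt=1.0):
    """Integrate a calendar-forced SIR model and report the peak infection level
    over the last two years, after the initial transients have died out."""
    s, i = 0.99, 0.01
    births = 1.0 / (70 * 365)          # crude birth/death turnover
    imports = 1e-6                     # small case-importation term keeps the disease endemic
    traj = []
    for day in range(int(years * 365)):
        b = beta(day, vacation_days)
        ds = births - b * s * i - births * s
        di = b * s * i + imports - gamma * i - births * i
        s, i = s + dt * ds, i + dt * di
        traj.append(i)
    return max(traj[-2 * 365:])

traditional = set(range(150, 240))                                        # one 90-day summer break
split = set(range(150, 195)) | set(range(330, 365)) | set(range(0, 10))   # same 90 days, split up

print("recurring peak, traditional calendar:", round(recurring_peak(traditional), 4))
print("recurring peak, split-vacation calendar:", round(recurring_peak(split), 4))
```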

  14. EUROPA2: Plan Database Services for Planning and Scheduling Applications

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Frank, Jeremy; Jonsson, Ari; McGann, Conor

    2004-01-01

    NASA missions require solving a wide variety of planning and scheduling problems with temporal constraints; simple resources such as robotic arms, communications antennae and cameras; complex replenishable resources such as memory, power and fuel; and complex constraints on geometry, heat and lighting angles. Planners and schedulers that solve these problems are used in ground tools as well as onboard systems. The diversity of planning problems and of applications of planners and schedulers precludes a one-size-fits-all solution. However, many of the underlying technologies are common across planning domains and applications. We describe CAPR, a formalism for planning that is general enough to cover a wide variety of planning and scheduling domains of interest to NASA. We then describe EUROPA(sub 2), a software framework implementing CAPR. EUROPA(sub 2) provides efficient, customizable Plan Database Services that enable the integration of CAPR into a wide variety of applications. We describe the design of EUROPA(sub 2) from the perspectives of modeling, customization, and application integration for different classes of NASA missions.

  15. Routing and Scheduling Optimization Model of Sea Transportation

    NASA Astrophysics Data System (ADS)

    Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of ships carrying crude oil (tankers) that is distributed to many islands. The consideration is the cost of transportation, which consists of travel costs and the cost of layover at the ports. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance traveled by the tanker and to minimize the cost of the ports. To make the model more realistic and the calculated cost more appropriate, a parameter is added that represents the multiplying factor by which cost increases as the crude oil load is filled.
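    The cost structure described above, a distance-based travel cost scaled by a load-dependent multiplier plus a layover cost at each port, can be sketched by brute-force enumeration over a tiny instance; the port positions, demands, and cost coefficients below are all hypothetical.

```python
import itertools
import math

# Hypothetical depot (index 0) and three island ports with (x, y) positions.
ports = {0: (0, 0), 1: (4, 1), 2: (2, 5), 3: (6, 3)}
demand = {1: 100.0, 2: 60.0, 3: 80.0}      # crude-oil demand per island (kilolitres)
port_cost = {1: 50.0, 2: 30.0, 3: 40.0}    # layover cost per port call
COST_PER_KM = 2.0
LOAD_FACTOR = 0.002                        # cost-multiplier growth per kilolitre on board

def dist(a, b):
    (x1, y1), (x2, y2) = ports[a], ports[b]
    return math.hypot(x1 - x2, y1 - y2)

def route_cost(order):
    load = sum(demand.values())            # the tanker departs fully loaded
    cost, here = 0.0, 0
    for nxt in order:
        # travel cost grows with the remaining load on board (the multiplier parameter)
        cost += dist(here, nxt) * COST_PER_KM * (1 + LOAD_FACTOR * load)
        cost += port_cost[nxt]
        load -= demand[nxt]
        here = nxt
    cost += dist(here, 0) * COST_PER_KM    # return to the depot empty
    return cost

best = min(itertools.permutations(demand), key=route_cost)
print("cheapest visiting order:", best, "cost:", round(route_cost(best), 2))
```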

  16. Enhanced Software for Scheduling Space-Shuttle Processing

    NASA Technical Reports Server (NTRS)

    Barretta, Joseph A.; Johnson, Earl P.; Bierman, Rocky R.; Blanco, Juan; Boaz, Kathleen; Stotz, Lisa A.; Clark, Michael; Lebovitz, George; Lotti, Kenneth J.; Moody, James M.

    2004-01-01

    The Ground Processing Scheduling System (GPSS) computer program is used to develop streamlined schedules for the inspection, repair, and refurbishment of space shuttles at Kennedy Space Center. A scheduling computer program is needed because space-shuttle processing is complex and it is frequently necessary to modify schedules to accommodate unanticipated events, unavailability of specialized personnel, unexpected delays, and the need to repair newly discovered defects. GPSS implements constraint-based scheduling algorithms and provides an interactive scheduling software environment. In response to inputs, GPSS can respond with schedules that are optimized in the sense that they contain minimal violations of constraints while supporting the most effective and efficient utilization of space-shuttle ground processing resources. The present version of GPSS is a product of re-engineering of a prototype version. While the prototype version proved to be valuable and versatile as a scheduling software tool during the first five years, it was characterized by design and algorithmic deficiencies that affected schedule revisions, query capability, task movement, report capability, and overall interface complexity. In addition, the lack of documentation gave rise to difficulties in maintenance and limited both enhanceability and portability. The goal of the GPSS re-engineering project was to upgrade the prototype into a flexible system that supports multiple- flow, multiple-site scheduling and that retains the strengths of the prototype while incorporating improvements in maintainability, enhanceability, and portability.

  17. Impact of referral letters on scheduling of hospital appointments: a randomised control trial

    PubMed Central

    Jiwa, Moyez; Meng, Xingqiong; O’Shea, Carolyn; Magin, Parker; Dadich, Ann; Pillai, Vinita

    2014-01-01

    Background Communication is essential for triage, but intervention trials to improve it are scarce. Referral Writer (RW), a referral letter software program, enables documentation of clinical data and extracts relevant patient details from clinical software. Aim To evaluate whether specialists are more confident about scheduling appointments when they receive more information in referral letters. Design and setting Single-blind, parallel-groups, controlled design with a 1:1 randomisation. Australian GPs watched video vignettes virtually. Method GPs wrote referral letters after watching vignettes of patients with cancer symptoms. Letter content was scored against a benchmark. The proportions of referral letters triagable by a specialist with confidence, and in which the specialist was confident the patient had potentially life-limiting pathology were determined. Categorical outcomes were tested with χ2 and continuous outcomes with t-tests. A random-effects logistic model assessed the influence of group randomisation (RW versus control), GP demographics, clinical specialty, and specialist referral assessor on specialist confidence in the information provided. Results The intervention (RW) group referred more patients and scored significantly higher on information relayed (mean difference 21.6 [95% confidence intervals {CI} = 20.1 to 23.2]). There was no difference in the proportion of letters for which specialists were confident they had sufficient information for appointment scheduling (RW 77.7% versus control 80.6%, P = 0.16). In the logistic model, limited agreement among specialists contributed substantially to the observed differences in appointment scheduling (P = 35% [95% CI 16% to 59%]). Conclusion In isolation, referral letter templates are unlikely to improve the scheduling of specialist appointments, even when more information is relayed. PMID:24982494

  18. 77 FR 18297 - Air Traffic Noise, Fuel Burn, and Emissions Modeling Using the Aviation Environmental Design Tool...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... Integrated Routing System-- NIRS].'' The FAA developed the AEDT 2a to model aircraft noise, fuel burn, and ... operations schedule. These data are used to compute aircraft noise, fuel burn and emissions simultaneously ...

  19. Optimization Models for Scheduling of Jobs

    PubMed Central

    Indika, S. H. Sathish; Shier, Douglas R.

    2006-01-01

    This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions. PMID:27274921
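    The heuristic flavor of the approach can be sketched as a best-fit insertion rule: existing jobs already occupy bays over fixed intervals, and each new job is assigned to the bay where it can finish earliest, dropping into the first idle gap that fits. The bays, jobs, and durations below are invented, and the paper's MIQP formulation and local-optimality refinements are not reproduced.

```python
# Existing jobs are fixed; each new job goes to the bay where it can finish earliest.
existing = {                       # bay -> list of (start, end) for already-scheduled jobs
    "bay1": [(0, 10), (15, 25)],
    "bay2": [(5, 20)],
    "bay3": [],
}
new_jobs = [("jobA", 8), ("jobB", 4), ("jobC", 12)]     # (name, duration), hypothetical

def earliest_start(busy, duration, release=0):
    """Earliest time a job of `duration` fits into a bay's idle gaps."""
    t = release
    for s, e in sorted(busy):
        if t + duration <= s:       # fits in the gap before this existing job
            return t
        t = max(t, e)
    return t                        # otherwise start after the bay's last job

schedule = []
for name, dur in new_jobs:
    # best fit = bay giving the earliest completion time for this job
    bay = min(existing, key=lambda b: earliest_start(existing[b], dur) + dur)
    start = earliest_start(existing[bay], dur)
    existing[bay].append((start, start + dur))
    schedule.append((name, bay, start, start + dur))

for name, bay, s, e in schedule:
    print(f"{name}: {bay}, start {s}, finish {e}")
```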

  20. Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model

    PubMed Central

    Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong

    2014-01-01

    Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We firstly introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue as three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve the above three problems respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005

  1. Real-time distributed scheduling algorithm for supporting QoS over WDM networks

    NASA Astrophysics Data System (ADS)

    Kam, Anthony C.; Siu, Kai-Yeung

    1998-10-01

    Most existing or proposed WDM networks employ circuit switching, typically with one session having exclusive use of one entire wavelength. Consequently they are not suitable for data applications involving bursty traffic patterns. The MIT AON Consortium has developed an all-optical LAN/MAN testbed which provides time-slotted WDM service and employs fast-tunable transceivers in each optical terminal. In this paper, we explore extensions of this service to achieve fine-grained statistical multiplexing with different virtual circuits time-sharing the wavelengths in a fair manner. In particular, we develop a real-time distributed protocol for best-effort traffic over this time-slotted WDM service with near-optimal fairness and throughput characteristics. As an additional design feature, our protocol supports the allocation of guaranteed bandwidths to selected connections. This feature acts as a first step towards supporting integrated services and quality-of-service guarantees over WDM networks. To achieve high throughput, our approach is based on scheduling transmissions, as opposed to collision-based schemes. Our distributed protocol involves one MAN scheduler and several LAN schedulers (one per LAN) in a master-slave arrangement. Because of propagation delays and limits on control channel capacities, all schedulers are designed to work with partial, delayed traffic information. Our distributed protocol is of the 'greedy' type to ensure fast execution in real time in response to dynamic traffic changes. It employs a hybrid form of rate and credit control for resource allocation. We have performed extensive simulations, which show that our protocol allocates resources (transmitters, receivers, wavelengths) fairly with high throughput, and supports bandwidth guarantees.

  2. Alternative scheduling models and their effect on science achievement at the high school level

    NASA Astrophysics Data System (ADS)

    Dostal, Jay Roland

    This study will evaluate alternative scheduling methods implemented in secondary-level schools. Students were selected based on parent selection of programs. Traditional scheduling involves numerous academic subjects with small increments of time in each class, whereas block scheduling focuses on fewer academic subjects and more instructional time. This study will compare office referral numbers, absence frequency, and Essential Learner Outcome (ELO) science strand scores in the 8th grade (pretest) to the same students' office referrals, absence frequency, and ELO science strand scores in the 11th grade (posttest) between Seven Period Traditional Scheduling (SPTS) and Four Period Block Scheduling (FPBS), in the hope that no matter which schedule students are a part of, the achievement results will be similar. (Study participants had completed both grade-level ELO assessments and were continuously enrolled in one high school through their junior year.)

  3. Have Your Computer Call My Computer.

    ERIC Educational Resources Information Center

    Carabi, Peter

    1992-01-01

    As more school systems adopt site-based management, local decision makers need greater access to all kinds of information. Microcomputer-based networks can help with classroom management, scheduling, student program design, counselor recommendations, and financial reporting operations. Administrators are provided with planning tips and a sample…

  4. Population dynamics of Varroa destructor (Acari: Varroidae) in commercial honey bee colonies and implications for control

    USDA-ARS?s Scientific Manuscript database

    Treatment schedules to maintain low levels of Varroa mites in honey bee colonies were tested in hives started from either package bees or splits of larger colonies. The schedules were developed based on predictions of Varroa population growth generated from a mathematical model of honey bee colony ...

  5. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.

    2009-08-01

    Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining these two (also called mixed-parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.

  6. Orion MPCV Service Module Avionics Ring Pallet Testing, Correlation, and Analysis

    NASA Technical Reports Server (NTRS)

    Staab, Lucas; Akers, James; Suarez, Vicente; Jones, Trevor

    2012-01-01

    The NASA Orion Multi-Purpose Crew Vehicle (MPCV) is being designed to replace the Space Shuttle as the main manned spacecraft for the agency. Based on the predicted environments in the Service Module avionics ring, an isolation system was deemed necessary to protect the avionics packages carried by the spacecraft. Impact, sinusoidal, and random vibration testing were conducted on a prototype Orion Service Module avionics pallet in March 2010 at the NASA Glenn Research Center Structural Dynamics Laboratory (SDL). The pallet design utilized wire rope isolators to reduce the vibration levels seen by the avionics packages. The current pallet design utilizes the same wire rope isolators (M6-120-10) that were tested in March 2010. In an effort to save cost and schedule, the Finite Element Models of the prototype pallet tested in March 2010 were correlated. Frequency Response Function (FRF) comparisons, mode shapes, and frequencies were all part of the correlation process. The non-linear behavior and the modeling of the wire rope isolators proved to be the most difficult parts of the correlation process. The correlated models of the wire rope isolators were taken from the prototype design and integrated into the current design for future frequency response analysis and component environment specification.

  7. Integrated coding-aware intra-ONU scheduling for passive optical networks with inter-ONU traffic

    NASA Astrophysics Data System (ADS)

    Li, Yan; Dai, Shifang; Wu, Weiwei

    2016-12-01

    Recently, with the soaring of traffic among optical network units (ONUs), network coding (NC) is becoming an appealing technique for improving the performance of passive optical networks (PONs) with such inter-ONU traffic. However, in existing NC-based PONs, NC can only be implemented by buffering inter-ONU traffic at the optical line terminal (OLT) to wait for the establishment of the coding condition, and such passive, uncertain waiting severely limits the effectiveness of the NC technique. In this paper, we study integrated coding-aware intra-ONU scheduling, in which the scheduling of inter-ONU traffic within each ONU is undertaken by the OLT to actively facilitate the formation of codable inter-ONU traffic based on the global inter-ONU traffic distribution, so that the performance of PONs with inter-ONU traffic can be significantly improved. We first design two report message patterns and an inter-ONU traffic transmission framework as the basis for integrated coding-aware intra-ONU scheduling. Three specific scheduling strategies are then proposed to adapt to diverse global inter-ONU traffic distributions. The effectiveness of the work is finally evaluated by both theoretical analysis and simulations.

  8. A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems

    DTIC Science & Technology

    1988-03-01

    The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are...for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.

  9. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and of natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to develop more reasoned metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.
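
    As a rough, hedged illustration of the limit-state idea mentioned above (not the authors' implementation), the sketch below propagates assumed stiffness and mass dispersions through a single-degree-of-freedom spring-mass model and estimates the probability that its natural frequency violates an assumed 5% requirement band.

        # Illustrative Monte Carlo limit-state check on a lumped spring-mass model.
        # The dispersions and the +/-5% frequency requirement are assumptions.
        import math, random

        random.seed(1)
        k_nom, m_nom = 1.0e6, 250.0                 # nominal stiffness (N/m) and mass (kg), assumed
        f_nom = math.sqrt(k_nom / m_nom) / (2 * math.pi)
        tol = 0.05 * f_nom                          # assumed requirement: within 5% of nominal

        failures, N = 0, 100_000
        for _ in range(N):
            k = random.gauss(k_nom, 0.03 * k_nom)   # 3% stiffness dispersion (assumed)
            m = random.gauss(m_nom, 0.02 * m_nom)   # 2% mass dispersion (assumed)
            f = math.sqrt(k / m) / (2 * math.pi)
            g = tol - abs(f - f_nom)                # limit state: g < 0 means requirement violated
            if g < 0:
                failures += 1

        print(f"estimated probability of violating the frequency requirement: {failures / N:.4f}")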

  10. LPV Modeling and Control for Active Flutter Suppression of a Smart Airfoil

    NASA Technical Reports Server (NTRS)

    Al-Hajjar, Ali M. H.; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming

    2018-01-01

    In this paper, a novel technique for linear parameter varying (LPV) modeling and control of a smart airfoil for active flutter suppression is proposed, where the smart airfoil has a groove along its chord and contains a moving mass that is used to control the airfoil pitching and plunging motions. The new LPV modeling technique uses the mass position as a scheduling parameter to describe the physical constraint of the moving mass; in addition, the hard constraint at the boundaries is realized by proper selection of the parameter-varying function. Therefore, the position of the moving mass and the free-stream airspeed are considered the scheduling parameters in the study. A state-feedback LPV gain-scheduling controller with guaranteed H-infinity performance is presented by utilizing the dynamics of the moving mass as a scheduling parameter at a given airspeed. Numerical simulations demonstrate the effectiveness of the proposed LPV control architecture, which significantly improves the performance while reducing the control effort.
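
    A bare-bones illustration of the gain-scheduling step (not the paper's H-infinity synthesis): two assumed vertex state-feedback gains are blended by a parameter-varying weight computed from the moving-mass position, with clipping standing in for the hard constraint at the travel limits. The gains, limits, and example state are placeholders.

        # Sketch: gain scheduling by convex blending of vertex state-feedback gains.
        # The vertex gains K0/K1 and the clipped weighting function are assumptions.
        import numpy as np

        K0 = np.array([[-2.0, -0.5, -1.0, -0.1]])   # gain designed for mass at one travel limit (assumed)
        K1 = np.array([[-4.0, -1.2, -2.5, -0.3]])   # gain designed for mass at the other limit (assumed)

        def weight(pos, lo=-0.5, hi=0.5):
            """Parameter-varying weight in [0, 1]; clipping enforces the travel limits."""
            return float(np.clip((pos - lo) / (hi - lo), 0.0, 1.0))

        def scheduled_gain(pos):
            w = weight(pos)
            return (1.0 - w) * K0 + w * K1           # convex combination of vertex controllers

        x = np.array([0.02, 0.0, 0.1, 0.0])          # plunge, plunge rate, pitch, pitch rate (example state)
        for pos in (-0.6, 0.0, 0.4):                 # moving-mass positions along the chord (m)
            u = scheduled_gain(pos) @ x
            print(f"mass position {pos:+.2f} m -> control command {u[0]:+.4f}")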

  11. The application of artificial intelligence to astronomical scheduling problems

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1992-01-01

    Efficient utilization of expensive space- and ground-based observatories is an important goal for the astronomical community; the cost of modern observing facilities is enormous, and the available observing time is much less than the demand from astronomers around the world. The complexity and variety of scheduling constraints and goals has led several groups to investigate how artificial intelligence (AI) techniques might help solve these kinds of problems. The earliest and most successful of these projects was started at the Space Telescope Science Institute in 1987 and has led to the development of the Spike scheduling system to support the scheduling of the Hubble Space Telescope (HST). The aim of Spike at STScI is to allocate observations to timescales of days to a week, observing all scheduling constraints and maximizing preferences that help ensure that observations are made at optimal times. Spike has been in use operationally for HST since shortly after the observatory was launched in April 1990. Although developed specifically for HST scheduling, Spike was carefully designed to provide a general framework for similar (activity-based) scheduling problems. In particular, the tasks to be scheduled are defined in the system in general terms, and no assumptions about the scheduling timescale are built in. The mechanisms for describing, combining, and propagating temporal and other constraints and preferences are quite general. The success of this approach has been demonstrated by the application of Spike to the scheduling of other satellite observatories: changes to the system are required only in the specific constraints that apply, and not in the framework itself. In particular, the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. This talk will discuss recent progress made in scheduling search techniques, the lessons learned from early HST operations, the application of Spike to other problem domains, and plans for the future evolution of the system.

  12. Fabricated torque shaft

    DOEpatents

    Mashey, Thomas Charles

    2002-01-01

    A fabricated torque shaft is provided that features a bolt-together design to allow vane schedule revisions with minimal hardware cost. The bolt-together design further facilitates on-site vane schedule revisions with parts that are comparatively small. The fabricated torque shaft also accommodates stage schedules that differ from one another in non-linear inter-relationships, as well as non-linear schedules for a particular stage of vanes.

  13. Effectiveness of Instruction Performed through Computer-Assisted Activity Schedules on On-Schedule and Role-Play Skills of Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Ulke-Kurkcuoglu, Burcu; Bozkurt, Funda; Cuhadar, Selmin

    2015-01-01

    This study investigates the effectiveness of instruction provided through computer-assisted activity schedules for teaching on-schedule and role-play skills to children with autism spectrum disorder. A multiple-probe design with probe conditions across participants, one of the single-subject designs, was used. Four…

  14. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    NASA Astrophysics Data System (ADS)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
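
    The sample average approximation step can be pictured with a much-simplified stand-in for the model above: the first-stage decision is the number of spare vehicles, and the second-stage recovery cost is replaced by a simple shortfall penalty averaged over sampled paint-defect scenarios. The costs, the defect rate, and the shortfall recourse are all assumptions, not the paper's formulation.

        # Simplified sample average approximation (SAA) for sizing a pre-assembly spare buffer.
        # Holding/shortfall costs and the binomial defect model are illustrative assumptions.
        import random

        random.seed(7)
        sequence_len, defect_rate = 60, 0.08       # vehicles per shift, paint defect probability (assumed)
        hold_cost, shortfall_cost = 1.0, 12.0      # per spare held vs. per unrecovered sequence slot (assumed)

        scenarios = [sum(random.random() < defect_rate for _ in range(sequence_len))
                     for _ in range(5000)]          # sampled number of defective vehicles per scenario

        def saa_cost(spares):
            # first-stage holding cost + sampled average of the second-stage shortfall penalty
            recourse = sum(max(d - spares, 0) * shortfall_cost for d in scenarios) / len(scenarios)
            return hold_cost * spares + recourse

        best = min(range(0, 21), key=saa_cost)
        print("SAA-optimal spare count:", best, "estimated cost:", round(saa_cost(best), 2))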

  15. Achieving reutilization of scheduling software through abstraction and generalization

    NASA Technical Reports Server (NTRS)

    Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael

    1995-01-01

    Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.

  16. Present and future hydropower scheduling in Statkraft

    NASA Astrophysics Data System (ADS)

    Bruland, O.

    2012-12-01

    Statkraft produces close to 40 TWh in an average year and is one of the largest hydropower producers in Europe. For hydropower producers, the scheduling of electricity generation is the key to success, and this depends on optimal use of the water resources. The hydrologist and his forecasts, on both short and long terms, are crucial to this success. The hydrological forecasts in Statkraft, and in most hydropower companies in Scandinavia, are based on lumped models and the HBV concept. Before the hydrological model, however, there is a complex system for collecting, controlling and correcting the data applied in the models and the production scheduling and, equally important, routines for surveillance of the processes and manual intervention. Prior to the forecasting, the states in the hydrological models are updated based on observations. When snow is present in the catchments, snow surveys are an important source for model updating. The meteorological forecast is another premise provider to the hydrological forecast, and to obtain as precise a meteorological forecast as possible, Statkraft hires resources from the governmental forecasting center. Their task is to interpret the meteorological situation, describe the uncertainties and, if necessary, use their knowledge and experience to manually correct the forecast in the hydropower production regions. This is one of several forecasts applied further in the scheduling process. Both to compare and evaluate different forecast providers and to ensure that the best available forecast is used, forecasts from different sources are applied. Some of these forecasts have undergone statistical corrections to reduce biases. The uncertainties related to the meteorological forecast have long been approached and described by ensemble forecasts. But the observations used for updating the model also have a related uncertainty, both in the observations themselves and in how well they represent the catchment. Though well known, these uncertainties have thus far been handled superficially. Statkraft has initiated a program called ENKI to approach these issues. One part of this program is to apply distributed models for hydrological forecasting; developing methodologies to handle uncertainties in the observations, the meteorological forecasts and the model itself, and to update the model with this information, are other parts of the program. Together with energy price expectations and information about the state of the energy production system, the hydrological forecast is input to the next step in the production scheduling, on both short and long terms. The long-term schedule for reservoir filling is a premise provider to the short-term optimization of water use. The long-term schedule is based on the actual reservoir levels, snow storages and a long history of meteorological observations, and gives an overall schedule at a regional level. Within the regions, a more detailed tool is used for short-term optimization of the hydropower production. Each reservoir is scheduled taking into account restrictions in the water courses and the cost of starting and stopping aggregates. The value of the water is calculated for each reservoir and reflects the risk of water spillage. This value, compared to the energy price, determines whether an aggregate will run or not. In a gradually more complex energy system with relatively lower regulated capacity, this is an increasingly challenging task.
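
    The dispatch rule described at the end of the abstract, running an aggregate only when the market price exceeds the calculated water value, can be sketched with a toy example. The linear water-value curve, the price series, and the start/stop penalty below are invented for illustration and are not Statkraft's actual models.

        # Toy illustration of water-value-based dispatch for one reservoir.
        # The linear water-value curve and the hourly prices are invented.

        reservoir_level = 0.55                            # fraction of full reservoir (assumed)
        water_value_full, water_value_empty = 8.0, 55.0   # EUR/MWh at full and empty reservoir (assumed)

        # Higher level -> lower water value (spillage risk makes the water "cheaper")
        water_value = water_value_empty + (water_value_full - water_value_empty) * reservoir_level

        prices = [31.0, 28.5, 44.0, 52.0, 38.0, 25.0]     # forecast spot prices, EUR/MWh (assumed)
        start_stop_cost = 3.0                             # crude penalty for changing unit state (assumed)

        running = False
        for hour, price in enumerate(prices):
            margin = price - water_value - (0.0 if running else start_stop_cost)
            running = margin > 0.0
            print(f"hour {hour}: price {price:5.1f}, water value {water_value:5.1f} -> "
                  f"{'RUN' if running else 'idle'}")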

  17. The MSFC UNIVAC 1108 EXEC 8 simulation model

    NASA Technical Reports Server (NTRS)

    Williams, T. G.; Richards, F. M.; Weatherbee, J. E.; Paul, L. K.

    1972-01-01

    A model is presented which simulates the MSFC Univac 1108 multiprocessor system. The hardware/operating system is described to enable a good statistical measurement of the system behavior. The performance of the 1108 is evaluated by performing twenty-four different experiments designed to locate system bottlenecks and also to test the sensitivity of system throughput with respect to perturbation of the various Exec 8 scheduling algorithms. The model is implemented in the general purpose system simulation language and the techniques described can be used to assist in the design, development, and evaluation of multiprocessor systems.

  18. Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network

    NASA Astrophysics Data System (ADS)

    Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) into optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, as a promising networking architecture to interconnect data centers, is among the most widely used scenarios for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of the optical service function (OSF) and a multi-layer SFC model. OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; an OSF can also be considered a special SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an Auxiliary Graph. Finally, we compare our algorithm with the Baseline algorithm in simulation; the results show that our algorithm outperforms the Baseline algorithm under low-traffic-load conditions.

  19. Predicting Exposure to Consumer-Products Using Agent-Based Models Embedded with Needs-Based Artificial Intelligence and Empirically -Based Scheduling Models

    EPA Science Inventory

    Information on human behavior and consumer product use is important for characterizing exposures to chemicals in consumer products and in indoor environments. Traditionally, exposure-assessors have relied on time-use surveys to obtain information on exposure-related behavior. In ...

  20. SPANR planning and scheduling

    NASA Astrophysics Data System (ADS)

    Freund, Richard F.; Braun, Tracy D.; Kussow, Matthew; Godfrey, Michael; Koyama, Terry

    2001-07-01

    SPANR (Schedule, Plan, Assess Networked Resources) is (i) a pre-run, off-line planning mechanism and (ii) a runtime, just-in-time scheduling mechanism. It is designed to support primarily commercial applications in that it optimizes throughput rather than individual jobs (unless they have highest priority); thus it is a tool for a commercial production manager to maximize total work. First, the SPANR Planner is presented, showing its ability to do predictive 'what-if' planning. It can answer such questions as (i) what is the overall effect of acquiring new hardware, or (ii) what would be the effect of a different scheduler. The ability of the SPANR Planner to formulate tree-trimming strategies in advance is useful in several commercial applications, such as electronic design or pharmaceutical simulations. The SPANR Planner is demonstrated using a variety of benchmarks. The SPANR Runtime Scheduler (RS) is briefly presented. The SPANR RS can provide benefit for several commercial applications, such as airframe design and financial applications. Finally, a design is shown whereby SPANR can provide scheduling advice to most resource management systems.

  1. Teaching emergency medical services management skills using a computer simulation exercise.

    PubMed

    Hubble, Michael W; Richards, Michael E; Wilfong, Denise

    2011-02-01

    Simulation exercises have long been used to teach management skills in business schools. However, this pedagogical approach has not been reported in emergency medical services (EMS) management education. We sought to develop, deploy, and evaluate a computerized simulation exercise for teaching EMS management skills. Using historical data, a computer simulation model of a regional EMS system was developed. After validation, the simulation was used in an EMS management course. Using historical operational and financial data of the EMS system under study, students designed an EMS system and prepared a budget based on their design. The design of each group was entered into the model that simulated the performance of the EMS system. Students were evaluated on operational and financial performance of their system design and budget accuracy and then surveyed about their experiences with the exercise. The model accurately simulated the performance of the real-world EMS system on which it was based. The exercise helped students identify operational inefficiencies in their system designs and highlighted budget inaccuracies. Most students rated the exercise as moderately or very realistic in ambulance deployment scheduling, budgeting, personnel cost calculations, demand forecasting, system design, and revenue projections. All students indicated the exercise was helpful in gaining a top management perspective, and 89% stated the exercise was helpful in bridging the gap between theory and reality. Preliminary experience with a computer simulator to teach EMS management skills was well received by students in a baccalaureate paramedic program and seems to be a valuable teaching tool. Copyright © 2011 Society for Simulation in Healthcare

  2. Completable scheduling: An integrated approach to planning and scheduling

    NASA Technical Reports Server (NTRS)

    Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

    The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach that integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by allowing runtime information to be taken into consideration when making planning and scheduling decisions. Furthermore, through the criteria of achievability placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.

  3. Design of LPV fault-tolerant controller for pitch system of wind turbine

    NASA Astrophysics Data System (ADS)

    Wu, Dinghui; Zhang, Xiaolin

    2017-07-01

    To address failures of wind turbine pitch-angle sensors, the traditional wind turbine linear parameter varying (LPV) model is transformed into a double-layer convex polyhedron LPV model. On the basis of this model, when several of the sensors fail and details of the failure are difficult to obtain, each sub-controller is designed using a distributed approach and the gain-scheduling method. The final controller is obtained from all of the sub-controllers by a convex combination. The design method corrects the errors of the linear model, improves the linearity of the system, and solves the problem of multiple pitch-angle faults to ensure stable operation of the wind turbine.

  4. 76 FR 24457 - Proposed Information Collection; Comment Request; Survey of Income and Program Participation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-02

    ... the SIPP, which is a household-based survey designed as a continuous series of national panels. New... the panel. The core is supplemented with questions designed to address specific needs, such as... be measured over time. The 2008 panel is currently scheduled for approximately 6 years and will...

  5. Online Appointment Scheduling for a Nuclear Medicine Department in a Chinese Hospital

    PubMed Central

    Feng, Ya-bing

    2018-01-01

    Nuclear medicine, a subspecialty of radiology, plays an important role in proper diagnosis and timely treatment. The multiple resources, especially the short-lived radiopharmaceuticals, involved in the process of nuclear medical examination constitute a unique problem in appointment scheduling. Aiming at achieving scientific and reasonable appointment scheduling in the West China Hospital (WCH), a typical class A tertiary hospital in China, we developed an online appointment scheduling algorithm based on an offline nonlinear integer programming model that considers multi-resource allocation, the time window constraints imposed by short-lived radiopharmaceuticals, and the stochastic nature of patient requests when scheduling patients. A series of experiments is conducted to show the effectiveness of the proposed strategy based on data provided by the WCH. The results show that the number of examinations increases by 29.76% compared with the current schedule, with a significant increase in resource utilization and the timely rate. The approach is also highly stable with respect to stochastic factors and has the advantage of convenient, economical operation. PMID:29849748
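
    A drastically simplified picture of the time-window constraint handled by the model above (not the hospital's actual nonlinear integer program): each patient must be imaged within an assumed usable window after the radiopharmaceutical dose is prepared, and a greedy pass assigns camera slots that respect that window. All times and the 60-minute window are assumptions.

        # Greedy toy scheduler: assign camera slots so imaging happens within the tracer's
        # usable window after dose preparation. Times and the 60-minute window are assumed.

        camera_slots = list(range(0, 240, 20))        # camera start times (min), one exam per slot
        usable_window = 60                            # minutes the prepared dose remains usable (assumed)

        # patient -> time the dose is prepared (min from start of day), assumed data
        dose_ready = {"pt1": 0, "pt2": 10, "pt3": 30, "pt4": 90, "pt5": 95}

        schedule, used = {}, set()
        for patient, ready in sorted(dose_ready.items(), key=lambda kv: kv[1]):
            for slot in camera_slots:
                if slot in used:
                    continue
                if ready <= slot <= ready + usable_window:   # within the decay-limited window
                    schedule[patient] = slot
                    used.add(slot)
                    break
            else:
                schedule[patient] = None                     # could not be placed; needs re-dosing

        for patient, slot in schedule.items():
            print(patient, "-> slot", slot)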

  6. A trade-off analysis design tool. Aircraft interior noise-motion/passenger satisfaction model

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1977-01-01

    A design tool was developed to enhance aircraft passenger satisfaction. The effect of aircraft interior motion and noise on passenger comfort and satisfaction was modelled. Effects of individual aircraft noise sources were accounted for, and the impact of noise on passenger activities and noise levels to safeguard passenger hearing were investigated. The motion noise effect models provide a means for tradeoff analyses between noise and motion variables, and also provide a framework for optimizing noise reduction among noise sources. Data for the models were collected onboard commercial aircraft flights and specially scheduled tests.

  7. Aerospace Toolbox--a flight vehicle design, analysis, simulation, and software development environment II: an in-depth overview

    NASA Astrophysics Data System (ADS)

    Christian, Paul M.

    2002-07-01

    This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics covered in Part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series covers a more in-depth look at the analysis and simulation capability and provides an update on the toolbox enhancements. It also addresses how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).

  8. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's levels of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (the McCauley Model, the Harvard Model, and the privately sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase, and we will calculate the sensitivity and specificity of each model's predictions under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.

  9. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    PubMed

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
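
    The overlap of computation with I/O that HTGS provides can be pictured with a minimal two-stage task-graph pipeline: worker threads connected by a queue, so a read stage keeps producing tiles while a compute stage works on earlier ones. The sketch uses only the Python standard library and is a conceptual stand-in, not the HTGS API.

        # Minimal task-graph pipeline: read -> compute stages connected by a queue,
        # so I/O and computation overlap. Conceptual stand-in for HTGS-style scheduling.
        import queue, threading, time

        read_q, results = queue.Queue(maxsize=4), []

        def reader(n_items):
            for i in range(n_items):
                time.sleep(0.01)           # simulate disk I/O producing a tile
                read_q.put(i)
            read_q.put(None)               # sentinel: no more work

        def computer():
            while True:
                item = read_q.get()
                if item is None:
                    break
                time.sleep(0.02)           # simulate computation on the tile
                results.append(item * item)

        t0 = time.time()
        threads = [threading.Thread(target=reader, args=(20,)), threading.Thread(target=computer)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(f"processed {len(results)} tiles in {time.time() - t0:.2f}s with I/O/compute overlap")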

  10. Integrating LMINET with TAAM and SIMMOD: A Feasibility Study

    NASA Technical Reports Server (NTRS)

    Long, Dou; Stouffer-Coston, Virginia; Kostiuk, Peter; Kula, Richard; Yackovetsky, Robert (Technical Monitor)

    2001-01-01

    LMINET is a queuing network air traffic simulation model implemented for 64 large airports and the entire National Airspace System in the United States. TAAM and SIMMOD are two widely used event-driven air traffic simulation models, mostly for airports. Based on our proposed Progressive Augmented Window approach, TAAM and SIMMOD are integrated with LMINET through flight schedules. In the integration, the flight schedules are modified using the flight delays reported by the other models. The benefit to the local simulation study is to let TAAM or SIMMOD take the modified schedule from LMINET, which takes into account the air traffic congestion and flight delays at the national network level. We demonstrate the value of the integrated models with case studies at Chicago O'Hare International Airport and Washington Dulles International Airport. Details of the integration are reported, and future work toward a full-blown integration is identified.

  11. Scheduling viability tests for seeds in long-term storage based on a Bayesian Multi-Level Model

    USDA-ARS?s Scientific Manuscript database

    Genebank managers conduct viability tests on stored seeds so they can replace lots that have viability near a critical threshold, such as 50 or 85% germination. Currently, these tests are typically scheduled at uniform intervals; testing every 5 years is common. A manager needs to balance the cost...

  12. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies.
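
    As a toy illustration of the kind of heuristic such an architecture could host (not one of the four algorithms proposed in the paper), the sketch below assigns each incoming request to the cheapest assumed VM type that still meets its deadline, falling back to the fastest type otherwise. The VM catalog and requests are invented.

        # Toy deadline-aware VM selection heuristic for incoming workflow requests.
        # VM types, prices, and speeds are assumptions, not the paper's experimental setup.

        vm_types = {                      # name: (relative speed, $ per hour)
            "small":  (1.0, 0.05),
            "medium": (2.0, 0.12),
            "large":  (4.0, 0.30),
        }

        def pick_vm(work_hours_on_small, deadline_hours):
            feasible = []
            for name, (speed, price) in vm_types.items():
                runtime = work_hours_on_small / speed
                if runtime <= deadline_hours:
                    feasible.append((runtime * price, runtime, name))
            if feasible:
                return min(feasible)[2]                      # cheapest feasible choice
            return max(vm_types, key=lambda n: vm_types[n][0])  # otherwise fastest VM

        requests = [(3.0, 2.0), (1.0, 1.5), (8.0, 1.5)]      # (work, deadline) in hours (assumed)
        for work, deadline in requests:
            print(f"work={work}h deadline={deadline}h -> schedule on '{pick_vm(work, deadline)}'")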

  13. DORCA computer program. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1971-01-01

    The Dynamic Operational Requirements and Cost Analysis Program (DORCA) was written to provide a top-level analysis tool for NASA. DORCA relies on man-machine interaction to optimize results based on external criteria. DORCA relies heavily on outside sources to provide cost information and vehicle performance parameters, as the program does not determine these quantities but rather uses them. Given data describing missions, vehicles, payloads, containers, space facilities, schedules, cost values and costing procedures, the program computes flight schedules, cargo manifests, vehicle fleet requirements, acquisition schedules and cost summaries. The program is designed to consider the Earth Orbit, Lunar, Interplanetary and Automated Satellite Programs. A general outline of the capabilities of the program is provided.

  14. gPKPDSim: a SimBiology®-based GUI application for PKPD modeling in drug development.

    PubMed

    Hosseini, Iraj; Gajjala, Anita; Bumbaca Yadav, Daniela; Sukumaran, Siddharth; Ramanujan, Saroja; Paxson, Ricardo; Gadkar, Kapil

    2018-04-01

    Modeling and simulation (M&S) is increasingly used in drug development to characterize pharmacokinetic-pharmacodynamic (PKPD) relationships and support various efforts such as target feasibility assessment, molecule selection, human PK projection, and preclinical and clinical dose and schedule determination. While model development typically requires mathematical modeling expertise, model exploration and simulations could in many cases be performed by scientists in various disciplines to support the design, analysis and interpretation of experimental studies. To this end, we have developed a versatile graphical user interface (GUI) application to enable easy use of any model constructed in SimBiology® to execute various common PKPD analyses. The MATLAB®-based GUI application, called gPKPDSim, has a single-screen interface and provides functionalities including simulation, data fitting (parameter estimation), population simulation (exploring the impact of parameter variability on the outputs of interest), and non-compartmental PK analysis. Further, gPKPDSim is a user-friendly tool with capabilities including interactive visualization, exporting of results and generation of presentation-ready figures. gPKPDSim was designed primarily for use in preclinical and translational drug development, although broader applications exist. gPKPDSim is a MATLAB®-based open-source application and is publicly available to download from MATLAB® Central™. We illustrate the use and features of gPKPDSim using multiple PKPD models to demonstrate the wide applications of this tool in pharmaceutical sciences. Overall, gPKPDSim provides an integrated, multi-purpose, user-friendly GUI application to enable efficient use of PKPD models by scientists from various disciplines, regardless of their modeling expertise.

  15. Knowledge-based systems for power management

    NASA Technical Reports Server (NTRS)

    Lollar, L. F.

    1992-01-01

    NASA-Marshall's Electrical Power Branch has undertaken the development of expert systems in support of further advancements in electrical power system automation. Attention is given to the features (1) of the Fault Recovery and Management Expert System, (2) a resource scheduler or Master of Automated Expert Scheduling Through Resource Orchestration, and (3) an adaptive load-priority manager, or Load Priority List Management System. The characteristics of an advisory battery manager for the Hubble Space Telescope, designated the 'nickel-hydrogen expert system', are also noted.

  16. Model of load distribution for earth observation satellite

    NASA Astrophysics Data System (ADS)

    Tu, Shumin; Du, Min; Li, Wei

    2017-03-01

    For a system of multiple types of EOS (Earth Observing Satellites), it is a vital issue in the astronautics field to ensure that each type of payload carried by the group of EOS can be used efficiently and reasonably. Currently, most research on the configuration of satellites and payloads focuses on scheduling for launched satellites; the assignment of payloads for un-launched satellites, which is just as crucial as the scheduling of tasks, has received little attention. Moreover, the current models of satellite resource scheduling lack generality. Drawing on the idea of role-based access control (RBAC) in information systems, this paper brings forward a model based on the role-mining of RBAC to improve the generality and foresight of the satellite-payload assignment method. In this way, the assignment of satellite-payload can be mapped onto the problem of role-mining. A novel method, based on the idea of biclique combination in graph theory and evolutionary algorithms in intelligent computing, is introduced to address the role-mining problem of satellite-payload assignments. Simulation experiments are performed to verify the novel method. Finally, the work of this paper is concluded.

  17. MIC-SVM: Designing A Highly Efficient Support Vector Machine For Advanced Modern Multi-Core and Many-Core Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Song, Shuaiwen; Fu, Haohuan

    2014-08-16

    Support Vector Machine (SVM) has been widely used in data-mining and Big Data applications as modern commercial databases start to attach an increasing importance to analytic capabilities. In recent years, SVM was adapted to the field of High Performance Computing for power/performance prediction, auto-tuning, and runtime scheduling. However, even at the risk of losing prediction accuracy due to insufficient runtime information, researchers can only afford to apply offline model training to avoid significant runtime training overhead. To address these challenges, we designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multi-core and many-core architectures, such as the Intel Ivy Bridge CPUs and the Intel Xeon Phi coprocessor (MIC).

  18. Preliminary design-lift/cruise fan research and technology airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.; Lewis, G. E.; Little, L. J.

    1976-01-01

    This report presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling qualities levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft interconnecting the three variable pitch fans and three power plants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.

  19. The preliminary design of a lift-cruise fan airplane flight control system

    NASA Technical Reports Server (NTRS)

    Gotlieb, P.

    1977-01-01

    This paper presents the preliminary design of a stability augmentation system for a NASA V/STOL research and technology airplane. This stability augmentation system is postulated as the simplest system that meets handling-quality levels for research and technology missions flown by NASA test pilots. The airplane studied in this report is a modified T-39 fitted with tilting lift/cruise fan nacelles and a nose fan. The propulsion system features a shaft that interconnects three variable-pitch fans and three powerplants. The mathematical modeling is based on pre-wind tunnel test estimated data. The selected stability augmentation system uses variable gains scheduled with airspeed. Failure analysis of the system illustrates the benign effect of engine failure. Airplane rate sensor failure must be solved with redundancy.

  20. Framework for Architecture Trade Study Using MBSE and Performance Simulation

    NASA Technical Reports Server (NTRS)

    Ryan, Jessica; Sarkani, Shahram; Mazzuchim, Thomas

    2012-01-01

    Increasing complexity in modern systems as well as cost and schedule constraints require a new paradigm of system engineering to fulfill stakeholder needs. Challenges facing efficient trade studies include poor tool interoperability, lack of simulation coordination (design parameters) and requirements flowdown. A recent trend toward Model Based System Engineering (MBSE) includes flexible architecture definition, program documentation, requirements traceability and system engineering reuse. As a new domain MBSE still lacks governing standards and commonly accepted frameworks. This paper proposes a framework for efficient architecture definition using MBSE in conjunction with Domain Specific simulation to evaluate trade studies. A general framework is provided followed with a specific example including a method for designing a trade study, defining candidate architectures, planning simulations to fulfill requirements and finally a weighted decision analysis to optimize system objectives.

  1. The cinema LED lighting system design based on SCM

    NASA Astrophysics Data System (ADS)

    En, De; Wang, Xiaobin

    2010-11-01

    An LED lighting system for the modern theater and the corresponding control program are introduced. Studies show that moderate, varying brightness in the space easily draws the audience's attention to the screen. The SCM controls the LEDs dynamically by outputting PWM pulses with different duty cycles, so the intensity of the cinema dome lights can vary as the plot changes, giving people a better viewing experience. This article describes the hardware architecture of the system and the control flow of the host, and also introduces the software design. Finally, a small-scale model reproducing the whole system was built; it shows that the system is energy-saving, reliable, provides a good visual effect, and has practical value, achieving the desired result.

  2. Convection-Enhanced Delivery (CED) in an Animal Model of Malignant Peripheral Nerve Sheath Tumors and Plexiform Neurofibromas

    DTIC Science & Technology

    2011-09-01

    with an accelerated schedule Convection-Enhanced Delivery (CED), Malignant Peripheral Nerve Sheath (MPNST), Plexiform Neurofibromas (PN)...the distribution of macromolecules delivered to intraneural PNs and MPNST via CED. Design: Orthotopic xenograft models of sciatic intraneural NF1...determine the efficacy of CED of the epidermal growth factor receptor (EGFR) inhibitor erlotinib in animal models of intraneural PNs and MPNST

  3. Leveraging model-based study designs and serial micro-sampling techniques to understand the oral pharmacokinetics of the potent LTB4 inhibitor, CP-105696, for mouse pharmacology studies.

    PubMed

    Spilker, Mary E; Chung, Heekyung; Visswanathan, Ravi; Bagrodia, Shubha; Gernhardt, Steven; Fantin, Valeria R; Ellies, Lesley G

    2017-07-01

    1. Leukotriene B4 (LTB4) is a proinflammatory mediator important in the progression of a number of inflammatory diseases. Preclinical models can explore the role of LTB4 in pathophysiology using tool compounds, such as CP-105696, that modulate its activity. To support preclinical pharmacology studies, micro-sampling techniques and mathematical modeling were used to determine the pharmacokinetics of CP-105696 in mice within the context of systemic inflammation induced by a high-fat diet (HFD). 2. Following oral administration of doses > 35 mg/kg, CP-105696 kinetics can be described by a one-compartment model with first order absorption. The compound's half-life is 44-62 h with an apparent volume of distribution of 0.51-0.72 L/kg. Exposures in animals fed an HFD are within 2-fold of those fed a normal chow diet. Daily dosing at 100 mg/kg was not tolerated and resulted in a >20% weight loss in the mice. 3. CP-105696's long half-life has the potential to support a twice weekly dosing schedule. Given that most chronic inflammatory diseases will require long-term therapies, these results are useful in determining the optimal dosing schedules for preclinical studies using CP-105696.
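
    The reported one-compartment, first-order-absorption kinetics can be written out directly with the standard single-dose oral PK equation. In the sketch below, the half-life and volume of distribution are taken from the ranges quoted in the abstract, while the dose, bioavailability, and absorption rate constant are placeholder assumptions.

        # One-compartment oral PK model with first-order absorption (illustrative sketch).
        # ka, F and the dose are placeholder assumptions; t_half and V come from the abstract.
        import math

        dose_mg_per_kg = 35.0      # oral dose (abstract: doses > 35 mg/kg)
        F = 0.5                    # assumed oral bioavailability
        ka = 0.5                   # assumed absorption rate constant, 1/h
        t_half = 50.0              # elimination half-life within the reported 44-62 h range
        V = 0.6                    # apparent volume of distribution, L/kg (reported 0.51-0.72)
        ke = math.log(2) / t_half  # elimination rate constant, 1/h

        def conc(t_hours):
            """Plasma concentration (mg/L) after a single oral dose at t = 0."""
            return (F * dose_mg_per_kg * ka) / (V * (ka - ke)) * (
                math.exp(-ke * t_hours) - math.exp(-ka * t_hours))

        for t in (2, 8, 24, 72, 168):
            print(f"t = {t:4d} h  C = {conc(t):6.2f} mg/L")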

  4. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  5. Reference H Piloted Assessment (LaRC.1) Pilot Briefing Guide

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Raney, David L.; Hahne, David E.; Derry, Stephen D.; Glaab, Louis J.

    1999-01-01

    This document describes the purpose of, and the method for, an assessment of the Boeing Reference H High-Speed Civil Transport design conducted in the NASA Langley Research Center's Visual/Motion Simulator in January 1997. Six pilots were invited to perform approximately 60 different Mission Task Elements that represent most normal and emergency flight operations of concern to the High Speed Research program. The Reference H design represents a candidate configuration for a High-Speed Civil Transport, a second-generation supersonic civilian transport aircraft. The High-Speed Civil Transport is intended to be economically sound and environmentally safe while carrying passengers and cargo at supersonic speeds with a trans-Pacific range. This simulation study was designated "LaRC.1" for the purposes of planning, scheduling and reporting within the Guidance and Flight Controls super-element of the High-Speed Research program. The study was based upon the Cycle 3 release of the Reference H simulation model.

  6. Designing Asynchronous, Text-Based Computer Conferences: Ten Research-Based Suggestions

    ERIC Educational Resources Information Center

    Choitz, Paul; Lee, Doris

    2006-01-01

    Asynchronous computer conferencing refers to the use of computer software and a network enabling participants to post messages that allow discourse to continue even though interactions may be extended over days and weeks. Asynchronous conferences are time-independent, adapting to multiple time zones and learner schedules. Such activities as…

  7. Low energy stage study. Volume 2: Requirements and candidate propulsion modes. [orbital launching of shuttle payloads

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A payload mission model covering 129 launches was examined and compared against the space transportation system Shuttle standard orbit inclinations and a Shuttle launch site implementation schedule. Based on this examination and comparison, a set of six reference missions was defined in terms of spacecraft weight and velocity requirements to deliver the payload from a 296 km circular Shuttle standard orbit to the spacecraft's planned orbit. Payload characteristics and requirements representative of the model payloads included in the regime bounded by each of the six reference missions were determined. A set of launch cost envelopes was developed and defined, based on the characteristics of existing/planned Shuttle upper stages and expendable launch systems, in terms of launch cost and velocity delivered. These six reference missions were used to define the requirements for the candidate propulsion modes, which were developed and screened to determine the propulsion approaches for conceptual design.

  8. Modeling of a production system using the multi-agent approach

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Sękala, A.; Banaś, W.

    2017-08-01

    A method that allows for the analysis of complex systems is multi-agent simulation. Multi-agent simulation (agent-based modeling and simulation, ABMS) is the modeling of complex systems consisting of independent agents. In the case of a production system model, the agents may be the manufactured pieces, set apart from other types of agents such as machine tools, conveyors or replacement stands; magazines and buffers are also agents. More generally, the agents in a model can be single individuals, but collective entities can also be defined as agents, and hierarchical structures are allowed, meaning that a single agent can belong to a certain class. Depending on the needs, an agent may also be a natural or physical resource. From a technical point of view, an agent is a bundle of data and rules describing its behavior in different situations. Agents can be autonomous or non-autonomous in making decisions about the types and classes of agents, class sizes and the types of connections between elements of the system. Multi-agent modeling is a very flexible modeling technique that can be adapted to any research problem analyzed from different points of view. One of the major problems associated with the organization of production is the spatial organization of the production process; secondly, it is important to include optimal scheduling. For this purpose a multi-agent approach can be used. In this regard, the model of the production process refers to the design and scheduling of the production space for four different elements. The program was developed in the NetLogo environment, and elements of artificial intelligence were also used. The main agents represent the manufactured pieces that, according to previously assumed rules, generate the technological route and allow the schedule of that line to be produced. Machine lines, reorientation stands, conveyors and transport devices represent the other types of agents utilized in the described simulation. The article presents the idea of an integrated program approach and shows the resulting production layout as a virtual model. This model was developed in the NetLogo multi-agent program environment.
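
    The NetLogo model itself is not reproduced here, but the core idea, manufactured-piece agents that carry their own technological route and compete for machine and stand resources, can be sketched with a small discrete-time loop. The routes and processing times are invented for illustration.

        # Tiny agent-style job-shop sketch: each piece agent carries its own route and
        # a discrete-time loop plays the schedule out. Data is invented for illustration.

        routes = {                              # piece -> ordered list of (machine, processing time)
            "P1": [("lathe", 3), ("mill", 2), ("stand", 1)],
            "P2": [("mill", 4), ("lathe", 2)],
            "P3": [("lathe", 2), ("stand", 2), ("mill", 3)],
        }

        busy_until = {m: 0 for m in ("lathe", "mill", "stand")}  # when each machine is free again
        progress = {p: 0 for p in routes}       # index of the next operation per piece
        ready_at = {p: 0 for p in routes}       # time the piece is ready for its next operation

        t, finished, log = 0, set(), []
        while len(finished) < len(routes):
            for p in routes:
                if p in finished or ready_at[p] > t:
                    continue
                machine, dur = routes[p][progress[p]]
                if busy_until[machine] <= t:            # machine free: start the operation
                    busy_until[machine] = t + dur
                    ready_at[p] = t + dur
                    log.append((t, p, machine, dur))
                    progress[p] += 1
                    if progress[p] == len(routes[p]):
                        finished.add(p)
            t += 1

        for start, piece, machine, dur in log:
            print(f"t={start:2d}: {piece} on {machine} for {dur}")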

  9. Throughput Optimization of Continuous Biopharmaceutical Manufacturing Facilities.

    PubMed

    Garcia, Fernando A; Vandiver, Michael W

    2017-01-01

    In order to operate profitably under different product demand scenarios, biopharmaceutical companies must design their facilities with mass output flexibility in mind. Traditional biologics manufacturing technologies pose operational challenges in this regard due to their high costs and slow equipment turnaround times, restricting the types of products and mass quantities that can be processed. Modern plant design, however, has facilitated the development of lean and efficient bioprocessing facilities through footprint reduction and adoption of disposable and continuous manufacturing technologies. These development efforts have proven to be crucial in seeking to drastically reduce the high costs typically associated with the manufacturing of recombinant proteins. In this work, mathematical modeling is used to optimize annual production schedules for a single-product commercial facility operating with a continuous upstream and discrete batch downstream platform. Utilizing cell culture duration and volumetric productivity as process variables in the model, and annual plant throughput as the optimization objective, 3-D surface plots are created to understand the effect of process and facility design on expected mass output. The model shows that once a plant has been fully debottlenecked it is capable of processing well over a metric ton of product per year. Moreover, the analysis helped to uncover a major limiting constraint on plant performance, the stability of the neutralized viral inactivated pool, which may indicate that this should be a focus of attention during future process development efforts. LAY ABSTRACT: Biopharmaceutical process modeling can be used to design and optimize manufacturing facilities and help companies achieve a predetermined set of goals. One way to perform optimization is by making the most efficient use of process equipment in order to minimize the expenditure of capital, labor and plant resources. To that end, this paper introduces a novel mathematical algorithm used to determine the most optimal equipment scheduling configuration that maximizes the mass output for a facility producing a single product. The paper also illustrates how different scheduling arrangements can have a profound impact on the availability of plant resources, and identifies limiting constraints on the plant design. In addition, simulation data is presented using visualization techniques that aid in the interpretation of the scientific concepts discussed. © PDA, Inc. 2017.

  10. Biomonitoring of physiological status and cognitive performance of underway submariners undergoing a novel watch-standing schedule

    NASA Astrophysics Data System (ADS)

    Duplessis, C. A.; Cullum, M. E.; Crepeau, L. J.

    2005-05-01

    Submarine watch-standers adhere to a 6-hour-on, 12-hour-off (6/12) watch-standing schedule, yoking them to an 18-hr day and engendering circadian desynchronization and chronic sleep deprivation. Moreover, the chronic social crowding, shift work, and confinement of submarine life provide additional stressors known to correlate with elevated secretory immunoglobulin A (sIgA) and cortisol levels, reduced performance, immunologic dysfunction, malignancies, infections, gastrointestinal illness, coronary disease, anxiety, and depression. We evaluated an alternative, compressed, fixed work schedule designed to enhance circadian rhythm entrainment, sleep hygiene, performance, and health in 10 underway submariners, who followed the alternative and 6/12 schedules for approximately 2 weeks each. We measured subjects' sleep, cognitive performance, and salivary biomarker levels. Pilot analysis of the salivary data on one subject utilizing ELISA suggests elevated biomarker levels of stress: average PM cortisol levels were 0.2 μg/L (normal range: non-detectable to 0.15 μg/L), and mean sIgA levels were 562 μg/ml (normal range: 100-500 μg/ml). Future research exploiting real-time salivary bioassays via fluorescent polarimetry technology, identified by the Office of Naval Research (ONR) as a future Naval requirement, will allow researchers to address correlations between stress-induced elaboration of salivary biomarkers and physiological and performance decrements, thereby fostering insight into the underway submariner's psychoimmunological status. This may help identify strategies that enhance resilience to stressors. Specifically, empirically based modeling can identify optimal watch-standing schedules and stress-mitigating procedures -- within the operational constraints of the submarine milieu and the mission -- that foster improved circadian entrainment and reduced stress reactivity, enhancing physiological health, operational performance, safety, and job satisfaction.

  11. Energy-saving scheme based on downstream packet scheduling in ethernet passive optical networks

    NASA Astrophysics Data System (ADS)

    Zhang, Lincong; Liu, Yejun; Guo, Lei; Gong, Xiaoxue

    2013-03-01

    With increasing network sizes, the energy consumption of Passive Optical Networks (PONs) has grown significantly, so it is important to design effective energy-saving schemes for PONs. Energy-saving schemes have generally focused on putting low-loaded Optical Network Units (ONUs) to sleep, which tends to introduce large packet delays. Further, traditional ONU sleep modes cannot put the transmitter and receiver to sleep independently even when one of them has no packets to transmit or receive, which wastes energy. Thus, in this paper, we propose an Energy-Saving scheme based on downstream Packet Scheduling (ESPS) in Ethernet PON (EPON). First, we design an algorithm and a rule for downstream packet scheduling at the inter- and intra-ONU levels, respectively, to reduce downstream packet delay. We then propose a hybrid sleep mode that contains not only an ONU deep sleep mode but also independent sleep modes for the transmitter and the receiver, ensuring that the energy consumed by the ONUs is minimal. To realize the hybrid sleep mode, a modified GATE control message is designed that carries 10 time points governing the sleep processes; in ESPS, these 10 time points are calculated from the allocated upstream and downstream bandwidths. Simulation results show that ESPS outperforms the traditional Upstream Centric Scheduling (UCS) scheme in terms of energy consumption and average downstream delay for both real-time and non-real-time packets. The results also show that the average energy consumption per ONU in larger networks is lower than in smaller networks; hence, ESPS is better suited to larger networks.
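
    As a rough illustration of why independent transmitter/receiver sleep saves energy, the sketch below compares one polling cycle under a hybrid policy (each front end sleeps whenever its own traffic is absent) against a baseline in which both front ends stay powered while either is busy. The power figures, cycle length, and function names are assumed for illustration and are not taken from the paper.

```python
# Illustrative comparison of per-cycle ONU energy with and without independent
# transmitter/receiver sleep. All power and timing values are assumptions.
CYCLE_MS = 2.0                                   # assumed polling cycle length
P_TX, P_RX, P_SLEEP = 1.2, 1.0, 0.1              # watts, assumed

def hybrid_sleep_energy_mj(us_grant_ms, ds_slot_ms):
    """Each front end is active only during its own grant/slot, else it sleeps."""
    tx_on = min(us_grant_ms, CYCLE_MS)
    rx_on = min(ds_slot_ms, CYCLE_MS)
    return (P_TX * tx_on + P_SLEEP * (CYCLE_MS - tx_on) +
            P_RX * rx_on + P_SLEEP * (CYCLE_MS - rx_on))

def coupled_energy_mj(us_grant_ms, ds_slot_ms):
    """Baseline: both front ends stay powered while either one is busy."""
    busy = min(max(us_grant_ms, ds_slot_ms), CYCLE_MS)
    return (P_TX + P_RX) * busy + 2 * P_SLEEP * (CYCLE_MS - busy)

print(hybrid_sleep_energy_mj(0.3, 0.8), coupled_energy_mj(0.3, 0.8))
```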

  12. Discrete harmony search algorithm for scheduling and rescheduling the reprocessing problems in remanufacturing: a case study

    NASA Astrophysics Data System (ADS)

    Gao, Kaizhou; Wang, Ling; Luo, Jianping; Jiang, Hua; Sadollah, Ali; Pan, Quanke

    2018-06-01

    In this article, scheduling and rescheduling problems with increasing processing times and new job insertion are studied for reprocessing problems in the remanufacturing process. To handle the unpredictability of reprocessing times, an experience-based strategy is used. Rescheduling strategies are applied to account for the effects of increasing reprocessing times and new subassembly insertion. To optimize the scheduling and rescheduling objectives, a discrete harmony search (DHS) algorithm is proposed, and a local search method is designed to speed up its convergence. The DHS is applied to two real-life cases for minimizing the maximum completion time and the mean earliness and tardiness (E/T); these two objectives are also considered together as a bi-objective problem. Computational results and comparisons show that the proposed DHS solves the scheduling and rescheduling problems effectively and efficiently, achieving satisfactory results for scheduling and rescheduling on a real-life shop floor.
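
    The DHS algorithm itself is not reproduced in the abstract; the sketch below shows a generic discrete harmony search for a small permutation flow-shop instance with a makespan objective, i.e., a harmony memory, a memory-consideration rate (HMCR), and a pitch-adjustment rate (PAR) adapted to permutations. The instance data, parameter values, and structure are illustrative assumptions rather than the authors' algorithm.

```python
# Generic discrete harmony search sketch for permutation flow-shop scheduling
# with a makespan objective. Instance data and parameters are made up.
import random

PROC = [[5, 3, 6], [2, 7, 4], [6, 2, 5], [4, 4, 3], [3, 6, 2]]  # jobs x machines
HMS, HMCR, PAR, ITERATIONS = 6, 0.9, 0.3, 2000

def makespan(seq):
    """Completion time of the last job on the last machine for a job sequence."""
    machines = len(PROC[0])
    finish = [0] * machines
    for job in seq:
        for k in range(machines):
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + PROC[job][k]
    return finish[-1]

def new_harmony(memory):
    """Build a permutation position by position from memory, then pitch-adjust."""
    seq, used = [], set()
    for pos in range(len(PROC)):
        if random.random() < HMCR:
            candidate = random.choice(memory)[pos]
            if candidate in used:
                candidate = random.choice([j for j in range(len(PROC)) if j not in used])
        else:
            candidate = random.choice([j for j in range(len(PROC)) if j not in used])
        seq.append(candidate)
        used.add(candidate)
    if random.random() < PAR:                 # pitch adjustment: random swap
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

memory = [random.sample(range(len(PROC)), len(PROC)) for _ in range(HMS)]
for _ in range(ITERATIONS):
    harmony = new_harmony(memory)
    worst = max(memory, key=makespan)
    if makespan(harmony) < makespan(worst):
        memory[memory.index(worst)] = harmony

best = min(memory, key=makespan)
print("best sequence", best, "makespan", makespan(best))
```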

  13. Electronic Design Automation (EDA) Roadmap Taskforce Report, Design of Microprocessors

    DTIC Science & Technology

    1999-04-01

    through on time. Hence, the study is not a crystal-ball-gazing exercise, but a rigorous, schedulable plan of action to attain the goal. NTRS97... formats so as not to impose too heavy a maintenance burden on their users. Object Interfaces eliminate these problems: • A tool that binds the interface... and User Interface - Design Tool Communication - EDA System Extension Language - EDA Standards-Based Software Development Environment - Design and...

  14. Appointment standardization evaluation in a primary care facility.

    PubMed

    Huang, Yu-Li

    2016-07-11

    Purpose - The purpose of this paper is to evaluate the performance of standardizing appointment slot lengths in a primary care clinic and to understand the impact of providers' preferences and practice differences. Design/methodology/approach - Treatment time data were collected for each provider. There were six patient types: emergency/urgent care (ER/UC), follow-up patient (FU), new patient, office visit (OV), physical exam, and well-child care. A simulation model was developed to capture patient flow and measure patient wait time, provider idle time, cost, overtime, finish time, and the number of patients scheduled. Four scheduling scenarios were compared: schedule all patients at 20 minutes; schedule ER/UC, FU, and OV at 20 minutes and the others at 40 minutes; schedule patient types according to individual provider preference; and schedule patient types according to combined provider preference. Findings - Standardized scheduling among providers increases cost by 57 per cent and patient wait time by 83 per cent, increases provider idle time by five minutes per patient, overtime by 22 minutes, and finish time by 30 minutes, and decreases patient access to care by approximately 11 per cent. An individualized scheduling approach could save as much as 14 per cent on cost and schedule 1.5 more patients. The combined preference method could save about 8 per cent while keeping the number of patients scheduled the same. Research limitations/implications - The challenge is to disseminate the findings to medical providers and adjust scheduling systems accordingly. Originality/value - This paper concludes that standardizing providers' clinic preferences and practices negatively impacts clinic service quality and access to care.
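
    A toy version of such a comparison can be written as a small Monte Carlo simulation: draw treatment times per patient type, advance the clock against scheduled appointment times, and accumulate wait, idle, and overtime. The mean treatment times, session mix, and slot policies below are made-up illustrations, not the paper's data.

```python
# Illustrative Monte Carlo comparison of a uniform 20-minute slot policy versus
# type-specific slot lengths. All numbers are assumptions for demonstration.
import random

MEAN_TREAT_MIN = {"ER/UC": 18, "FU": 15, "NP": 35, "OV": 20, "PE": 38, "WCC": 25}
SESSION = ["FU", "OV", "NP", "FU", "PE", "ER/UC", "WCC", "OV"]   # one clinic session

def simulate(slot_minutes_for, replications=2000, seed=1):
    rng = random.Random(seed)
    wait = idle = overtime = 0.0
    for _ in range(replications):
        clock = appt = 0.0
        for ptype in SESSION:
            wait += max(clock - appt, 0)          # patient waits if provider runs late
            idle += max(appt - clock, 0)          # provider idles if ahead of schedule
            clock = max(clock, appt)
            clock += rng.expovariate(1.0 / MEAN_TREAT_MIN[ptype])
            appt += slot_minutes_for(ptype)
        overtime += max(clock - appt, 0)          # time past the scheduled session end
    n = float(replications)
    return wait / n, idle / n, overtime / n

uniform = simulate(lambda t: 20)
tailored = simulate(lambda t: 40 if t in ("NP", "PE", "WCC") else 20)
print("uniform 20-min slots: wait %.1f, idle %.1f, overtime %.1f (min/session)" % uniform)
print("type-specific slots:  wait %.1f, idle %.1f, overtime %.1f (min/session)" % tailored)
```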

  15. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. The first case, job shop scheduling with a makespan criterion, concerns optimization of real customized flexible furniture production; a genetic algorithm for job shop scheduling optimization is presented. The second case, simulation-based inventory control, addresses inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.
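
    In the spirit of the second case, the sketch below evaluates a simple reorder-point / order-up-to inventory policy by simulation under stochastic daily demand and stochastic lead time, and compares the total cost of a few candidate policies. The demand and lead-time distributions, cost coefficients, and policy values are all illustrative assumptions, not the paper's models.

```python
# Illustrative simulation-based evaluation of (reorder point, order-up-to) policies
# under stochastic demand and lead time. All parameters are assumptions.
import random

HOLD_COST, ORDER_COST, STOCKOUT_COST = 0.5, 50, 20   # per unit-day / per order / per unit short

def simulate_policy(reorder_point, order_up_to, days=365, seed=7):
    rng = random.Random(seed)
    on_hand, pipeline, cost, shortages = order_up_to, [], 0.0, 0
    for _ in range(days):
        pipeline = [(eta - 1, qty) for eta, qty in pipeline]          # age outstanding orders
        on_hand += sum(q for eta, q in pipeline if eta <= 0)          # receive arrivals
        pipeline = [(eta, q) for eta, q in pipeline if eta > 0]
        demand = rng.randint(5, 20)                                   # stochastic daily demand
        short = max(demand - on_hand, 0)
        shortages += short
        on_hand = max(on_hand - demand, 0)
        cost += STOCKOUT_COST * short + HOLD_COST * on_hand
        position = on_hand + sum(q for _, q in pipeline)
        if position <= reorder_point:                                 # place a replenishment order
            lead_time = rng.randint(2, 6)                             # stochastic lead time, days
            pipeline.append((lead_time, order_up_to - position))
            cost += ORDER_COST
    return cost, shortages

for s, S in [(60, 200), (100, 250), (150, 300)]:
    cost, short = simulate_policy(s, S)
    print(f"policy (s={s}, S={S}): cost {cost:.0f}, units short {short}")
```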

  16. Systems cost/performance analysis (study 2.3). Volume 3: Programmer's manual and user's guide. [for unmanned spacecraft

    NASA Technical Reports Server (NTRS)

    Janz, R. F.

    1974-01-01

    The systems cost/performance model was implemented as a digital computer program to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses. This volume describes the computer program and the operating environment in which it was written and checked; the program specifications, including discussions of logic and computational flow; the subsystem models involved in the design of the spacecraft; and the routines in nondesign areas such as costing and scheduling of the design. Preliminary results for the DSCS-II design are also included.

  17. Design and Test of Fan/Nacelle Models Quiet High-Speed Fan

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J. (Technical Monitor); Weir, Donald

    2003-01-01

    The Quiet High-Speed Fan program is a cooperative effort between Honeywell Engines & Systems (formerly AlliedSignal Engines & Systems) and the NASA Glenn Research Center. Engines & Systems has designed an advanced high-speed fan that will be tested on the Ultra High Bypass Propulsion Simulator in the NASA Glenn 9 x 15 foot wind tunnel, currently scheduled for the second quarter of 2000. An Engines & Systems modern fan design will be used as a baseline. A nacelle model is provided that is characteristic of a typical, modern regional aircraft nacelle and meets all of the program test objectives.

  18. Energy-saving framework for passive optical networks with ONU sleep/doze mode.

    PubMed

    Van, Dung Pham; Valcarenghi, Luca; Dias, Maluge Pubuduni Imali; Kondepu, Koteswararao; Castoldi, Piero; Wong, Elaine

    2015-02-09

    This paper proposes an energy-saving passive optical network framework (ESPON) that incorporates optical network unit (ONU) sleep/doze mode into dynamic bandwidth allocation (DBA) algorithms to reduce ONU energy consumption. In ESPON, the optical line terminal (OLT) schedules both downstream (DS) and upstream (US) transmissions in the same slot in an online, dynamic fashion, and the ONU enters sleep mode outside the slot; the ONU sleep time is maximized based on both DS and US traffic. Moreover, during the slot, the ONU may enter doze mode when only its transmitter is idle, further improving energy efficiency. The scheduling order of data transmission, control message exchange, sleep period, and doze period defines an energy-efficient scheme under ESPON. Three schemes are designed and assessed in an extensive FPGA-based evaluation. Results show that while all the schemes significantly reduce ONU energy consumption across the evaluation scenarios, the scheduling order has a great impact on their performance. In addition, ESPON allows for a scheduling order that saves ONU energy independently of the network reach.
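
    The sketch below illustrates one reason co-scheduling downstream and upstream traffic in a single slot helps: with one active window per cycle there is a single wake-up transition and a longer contiguous sleep interval than with two separated windows. The cycle length, slot sizes, and wake-up overhead are assumed values, not measurements from the paper.

```python
# Illustrative per-cycle sleep-time comparison: DS and US co-scheduled in one slot
# versus scheduled in two disjoint slots. All timing values are assumptions.
CYCLE_MS, DS_MS, US_MS, WAKEUP_MS = 2.0, 0.4, 0.3, 0.125

def sleep_time_ms(co_scheduled):
    if co_scheduled:
        # One active window (DS and US can overlap on separate wavelengths), one wake-up.
        return max(CYCLE_MS - max(DS_MS, US_MS) - WAKEUP_MS, 0.0)
    # Two active windows, hence two wake-up transitions per cycle.
    return max(CYCLE_MS - DS_MS - US_MS - 2 * WAKEUP_MS, 0.0)

print("sleep per cycle, co-scheduled slots:", sleep_time_ms(True), "ms")
print("sleep per cycle, separate slots:   ", sleep_time_ms(False), "ms")
```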

  19. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, the proliferation of electronic commerce services makes it challenging to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees on electronic commerce servers. We show that an electronic commerce server can be modeled as a multi-tank system. The robust adaptive server model accounts for unknown additive load disturbances and model-matching uncertainty. Overload control techniques based on adaptive admission control are used to achieve timing guarantees. We evaluate the performance of the model using a complex simulation subjected to varying model parameters and massive overload.
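
    As a rough analogy to the tank abstraction, the sketch below treats the server as a single tank (queue) drained at a fixed service rate and adjusts an admission probability with a proportional feedback rule so the queue stays near a target level. The service rate, target, gain, and load range are assumed values; the authors' controller is more elaborate than this sketch.

```python
# Illustrative feedback admission control on a single-tank (queue) server model.
# All parameter values are assumptions for demonstration.
import random

SERVICE_RATE = 100.0          # requests the server completes per interval (assumed)
TARGET_QUEUE = 50.0           # queue length consistent with the latency goal (assumed)
GAIN = 0.002                  # proportional controller gain (assumed)

queue, admit_prob = 0.0, 1.0
for interval in range(30):
    offered = random.uniform(80, 300)                 # bursty offered load
    admitted = offered * admit_prob
    queue = max(queue + admitted - SERVICE_RATE, 0)   # tank level update
    error = TARGET_QUEUE - queue
    admit_prob = min(max(admit_prob + GAIN * error, 0.05), 1.0)
    print(f"t={interval:2d} offered={offered:6.1f} admit_p={admit_prob:4.2f} queue={queue:6.1f}")
```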

  20. Distributed network scheduling

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Schaffer, Steven R.

    2004-01-01

    Distributed network scheduling is the scheduling of future communications of a network by nodes in the network. This report details software for doing this onboard spacecraft in a remote network. While prior work on distributed scheduling has been applied to remote spacecraft networks, the software reported here models communication activities in greater detail and includes quality-of-service constraints. Our main results are based on a Mars network of spacecraft and include identifying a maximum opportunity to improve the traverse exploration rate by a factor of three; a simulation showing one-way delivery times from a rover to Earth reduced from as much as 5 hours to 1.5 hours; simulated responses to unexpected events averaging under an hour onboard; and ground schedule generation times ranging from seconds to 50 minutes for 15 to 100 communication goals.
