Sample records for schedulability-driven reliability optimization

  1. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area, and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate these scheduling problems and to design and develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers that have intelligent mechanisms to recover system reliability and satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms on scheduling overhead reduction, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes preserve system reliability while still achieving substantial energy savings.
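
    As context for the reliability-aware power management described above, the literature on reliability-aware DVFS commonly models the transient fault rate as growing exponentially as frequency drops, so scaling down saves energy but hurts reliability unless slack is reserved for a recovery execution. The sketch below illustrates only that mechanism; the model constants and function names are illustrative assumptions, not taken from this dissertation.

    ```python
    import math

    def fault_rate(f, lam0=1e-6, d=2.0, f_min=0.4):
        """Transient fault rate under DVFS: running at normalized frequency f
        (f_max = 1) raises the fault rate exponentially (illustrative model)."""
        return lam0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))

    def task_reliability(c, f, lam0=1e-6):
        """Probability that a task with WCET c (at f_max) finishes fault-free at f."""
        return math.exp(-fault_rate(f, lam0) * c / f)

    def rapm_reliability(c, f, lam0=1e-6):
        """Reliability-aware power management: run scaled at f, but keep enough
        slack for one full-speed recovery execution if a fault is detected."""
        r_scaled = task_reliability(c, f, lam0)
        r_recovery = task_reliability(c, 1.0, lam0)   # recovery runs at f_max
        return r_scaled + (1.0 - r_scaled) * r_recovery

    c = 10.0  # WCET in ms at full speed (made-up value)
    print(task_reliability(c, 1.0))   # baseline, no scaling
    print(task_reliability(c, 0.6))   # DVFS alone degrades reliability
    print(rapm_reliability(c, 0.6))   # a recovery slot restores it above baseline
    ```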

  2. System-level power optimization for real-time distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
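
    The energy-gradient driven slack allocation can be pictured with a toy model in which a task executing w cycles over a window t consumes dynamic energy roughly proportional to w·f² with f = w/t, so dE/dt = −2w³/t³. The greedy sketch below hands slack out in small quanta to whichever task currently has the steepest gradient; the model and all parameters are simplifying assumptions, not the thesis's critical-path timing analysis.

    ```python
    def energy(w, t):
        """Dynamic energy for w cycles stretched over window t (E ~ w * f^2, f = w/t)."""
        return w ** 3 / t ** 2

    def gradient(w, t):
        """Marginal energy change per unit of extra time: dE/dt = -2 w^3 / t^3."""
        return -2.0 * w ** 3 / t ** 3

    def allocate_slack(tasks, slack, step=0.01):
        """Greedy energy-gradient allocation: hand out slack in small quanta,
        each time to the task whose energy currently drops fastest."""
        times = {name: w for name, w in tasks.items()}  # t = w means full speed (f = 1)
        while slack > 1e-9:
            q = min(step, slack)
            best = min(times, key=lambda n: gradient(tasks[n], times[n]))
            times[best] += q
            slack -= q
        return times

    tasks = {"A": 2.0, "B": 1.0}                  # normalized cycle counts (invented)
    times = allocate_slack(tasks, slack=1.0)
    print(times)                                   # stretched execution windows
    print({n: round(energy(tasks[n], times[n]), 3) for n in tasks})
    ```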

  3. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are also discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to a Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach, and numerical experiments are conducted and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part when compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% in savings for the cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than the Genetic Algorithm for the proposed joint model, making it more computationally efficient in determining the optimal schedule: while the Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that effort.
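
    For readers unfamiliar with the method, the following is a minimal, generic particle swarm optimizer (global-best form) for box-constrained minimization. The quadratic toy objective merely stands in for the cost-per-part model; nothing here reproduces the thesis's modified PSO.

    ```python
    import random

    def pso(objective, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer (global-best topology) for a
        box-constrained minimization problem."""
        lo, hi = bounds
        X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        pbest = [x[:] for x in X]                       # personal best positions
        pbest_val = [objective(x) for x in X]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    V[i][d] = (w * V[i][d]
                               + c1 * r1 * (pbest[i][d] - X[i][d])
                               + c2 * r2 * (gbest[d] - X[i][d]))
                    X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
                val = objective(X[i])
                if val < pbest_val[i]:
                    pbest[i], pbest_val[i] = X[i][:], val
                    if val < gbest_val:
                        gbest, gbest_val = X[i][:], val
        return gbest, gbest_val

    # toy stand-in for the cost-per-part objective
    print(pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5)))
    ```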

  4. Task Scheduling in Desktop Grids: Open Problems

    NASA Astrophysics Data System (ADS)

    Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny

    2017-12-01

    We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.

  5. An intelligent value-driven scheduling system for Space Station Freedom with special emphasis on the electric power system

    NASA Technical Reports Server (NTRS)

    Krupp, Joseph C.

    1991-01-01

    The Electric Power Control System (EPCS) created by Decision-Science Applications, Inc. (DSA) for the Lewis Research Center is discussed. This system makes decisions on what to schedule and when to schedule it, including making choices among various options or ways of performing a task. The system is goal-directed and seeks to shape resource usage in an optimal manner using a value-driven approach. Discussed here are considerations governing what makes a good schedule, how to design a value function to find the best schedule, and how to design the algorithm that finds the schedule that maximizes this value function. Results are shown which demonstrate the usefulness of the techniques employed.

  6. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue

    PubMed Central

    Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.

    2012-01-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979
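
    The basic trade-off behind reliability-based inspection scheduling (inspect too early and cracks are too small to detect; too late and failure may already have occurred) can be sketched with a crude Monte Carlo model. The exponential crack-growth law, the probability-of-detection curve, and all cost figures below are invented for illustration and are unrelated to the authors' cohesive-zone formulation.

    ```python
    import math, random

    def expected_cost(t_insp, n=20000, horizon=20.0,
                      a0=1.0, a_crit=25.0,
                      c_insp=1.0, c_rep=10.0, c_fail=500.0):
        """Monte Carlo estimate of expected life-cycle cost for one inspection
        at time t_insp, with exponential crack growth a(t) = a0 * exp(k t),
        random growth rate k, and POD(a) = 1 - exp(-a / 5)."""
        total = 0.0
        for _ in range(n):
            k = random.lognormvariate(math.log(0.12), 0.3)  # growth-rate scatter
            cost = c_insp
            a = a0 * math.exp(k * t_insp)                    # crack size at inspection
            if a < a_crit and random.random() < 1 - math.exp(-a / 5.0):
                cost += c_rep
                a = a0 * math.exp(k * (horizon - t_insp))    # regrowth after repair
            else:
                a = a0 * math.exp(k * horizon)               # missed, or already failed
            if a >= a_crit:
                cost += c_fail
            total += cost
        return total / n

    best = min((expected_cost(t), t) for t in range(2, 19, 2))
    print("best inspection time under this toy model:", best[1])
    ```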

  7. Artificial intelligence for the CTA Observatory scheduler

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Colomer, Pau; Campreciós, Jordi; Coiffard, Thierry; de Oña, Emma; Pedaletti, Giovanna; Torres, Diego F.; Garcia-Piquer, Alvaro

    2014-08-01

    The Cherenkov Telescope Array (CTA) project will be the next-generation ground-based very-high-energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure, which has been included in the ESFRI roadmap since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory. Two sites are foreseen, in the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real time, the selected task to the varying execution constraints (i.e., Targets of Opportunity, health or status of the system components, environmental conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints on CTA task scheduling that helped classify it as a Flexible Job-Shop Problem and find its optimal solution based on Artificial Intelligence techniques. We describe the scheduler prototype, which uses a Guarded Discrete Stochastic Neural Network (GDSN) for an easy representation of the possible long- and short-term planning solutions, together with Constraint Propagation techniques. A simulation platform, an analysis tool, and different test case scenarios for CTA were developed to test the performance of the scheduler and are also described.

  8. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs, combining elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. Numerical illustrations with a generic deterioration model for one monitoring exercise in the life cycle of a system are presented. Two case scenarios, namely whether to build an initially expensive but robust structure or a cheaper but faster-deteriorating one, and whether to adopt an expensive monitoring system, are presented to aid the decision-making process.

  9. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    PubMed

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insights through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
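
    A miniature version of this Weibull-based trade-off is the classical age-replacement model, in which the cost rate for a preventive interval T is (c_pm·R(T) + c_cm·(1−R(T))) divided by the expected cycle length ∫₀ᵀ R(t)dt. The sketch below scans candidate intervals; the shape, scale, and cost values are invented, not fitted from the article's data.

    ```python
    import math

    def weibull_R(t, beta=2.5, eta=120.0):
        """Weibull reliability (beta: shape from a degradation fit, eta: scale in days)."""
        return math.exp(-((t / eta) ** beta))

    def cost_rate(T, c_pm=1.0, c_cm=8.0, dt=0.1):
        """Age-replacement cost per unit time for preventive interval T:
        (PM cost * R(T) + corrective cost * (1 - R(T))) / expected cycle length."""
        R_T = weibull_R(T)
        mean_cycle = sum(weibull_R(i * dt) * dt for i in range(int(T / dt)))
        return (c_pm * R_T + c_cm * (1 - R_T)) / mean_cycle

    T_best = min(range(10, 301, 5), key=cost_rate)
    print("optimal PM interval:", T_best, "days; cost rate:", round(cost_rate(T_best), 4))
    ```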

  11. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    DTIC Science & Technology

    2011-11-10

    Customers and product manufacturers demand continued functionality of complex equipment and processes. Degradation of material...

  12. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, power consumption is a very important concern, both for its associated costs and for environmental sustainability. Automatic load control based on power consumption and usage cycles represents an effective solution for cost restraint. The purpose of these systems is to modulate the electricity demand, avoiding unorganized operation of the loads, using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption according to the stipulated contract terms. The proposed algorithm uses two main notions: priority-driven loads and smart scheduling loads. Priority-driven loads can be turned off (put in standby) according to a priority policy established by the user if consumption exceeds a defined threshold; smart scheduling loads, by contrast, are scheduled so that their life cycle (LC) is never interrupted, safeguarding the devices' functions and allowing the user to operate the devices freely without the risk of exceeding the power threshold. Using these two notions and taking into account user requirements, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, and shifting them into off-peak time ranges according to the electricity fare. This logic is inspired by industrial lean manufacturing, whose focus is to minimize waste of any kind and optimize the available resources.
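
    To make the two notions concrete, here is a toy control step: priority-driven loads are shed lowest-priority-first once the threshold is exceeded, while smart scheduling loads are only deferred before starting, never interrupted mid-cycle. Load names, power figures, and the threshold are made-up examples, not the authors' algorithm.

    ```python
    def control_loads(loads, threshold):
        """One control step: shed priority-driven loads (lowest priority first)
        until total draw is under the threshold; smart scheduling loads are
        never interrupted mid-cycle, only deferred before they start."""
        running = [l for l in loads if l["state"] == "on"]
        total = sum(l["power"] for l in running)
        sheddable = sorted((l for l in running if l["kind"] == "priority"),
                           key=lambda l: l["priority"])          # lowest first
        while total > threshold and sheddable:
            victim = sheddable.pop(0)
            victim["state"] = "standby"
            total -= victim["power"]
        for l in loads:                          # admit waiting smart loads if room
            if l["kind"] == "smart" and l["state"] == "waiting":
                if total + l["power"] <= threshold:
                    l["state"] = "on"
                    total += l["power"]
        return total

    loads = [
        {"name": "heater", "kind": "priority", "priority": 1, "power": 2.0, "state": "on"},
        {"name": "oven",   "kind": "priority", "priority": 3, "power": 2.5, "state": "on"},
        {"name": "washer", "kind": "smart",    "priority": 0, "power": 2.2, "state": "waiting"},
    ]
    print(control_loads(loads, threshold=4.0))   # heater shed, washer still deferred
    ```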

  13. Mixed Criticality Scheduling for Industrial Wireless Sensor Networks

    PubMed Central

    Jin, Xi; Xia, Changqing; Xu, Huiting; Wang, Jintao; Zeng, Peng

    2016-01-01

    Wireless sensor networks (WSNs) have been widely used in industrial systems. Their real-time performance and reliability are fundamental to industrial production. Many works have studied these two aspects, but they focus only on single-criticality WSNs. Mixed-criticality requirements exist in many advanced applications in which different data flows have different levels of importance (or criticality). In this paper, we first propose a scheduling algorithm that guarantees the real-time performance and reliability requirements of data flows with different levels of criticality. The algorithm supports centralized optimization and adaptive adjustment, and is able to improve both scheduling performance and flexibility. We then provide a schedulability test through rigorous theoretical analysis. We conduct extensive simulations, and the results demonstrate that the proposed scheduling algorithm and analysis significantly outperform existing ones. PMID:27589741

  14. A Simulation Based Approach to Optimize Berth Throughput Under Uncertainty at Marine Container Terminals

    NASA Technical Reports Server (NTRS)

    Golias, Mihalis M.

    2011-01-01

    Berth scheduling is a critical function at marine container terminals, and determining the best berth schedule depends on several factors, including the type and function of the port, size of the port, location, nearby competition, and type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objectives of maximizing customer satisfaction and the reliability of the berth schedule, under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithm-based heuristic, and a simulation post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments, it is concluded that the proposed berth scheduling policy outperforms the berth scheduling policy where reliability is not considered.

  15. A reliability as an independent variable (RAIV) methodology for optimizing test planning for liquid rocket engines

    NASA Astrophysics Data System (ADS)

    Strunz, Richard; Herrmann, Jeffrey W.

    2011-12-01

    The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agencies alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple-criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as an example to provide a quantitative answer to the claim that a reduced-thrust-scale demonstrator is cost-beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase in the demonstrated reliability level on schedule and affordability.

  16. Automated control of hierarchical systems using value-driven methods

    NASA Technical Reports Server (NTRS)

    Pugh, George E.; Burke, Thomas E.

    1990-01-01

    An introduction is given to the value-driven methodology, which has been successfully applied to solve a variety of difficult decision, control, and optimization problems. Many real-world decision processes (e.g., those encountered in scheduling, allocation, and command and control) involve a hierarchy of complex planning considerations. For such problems it is virtually impossible to define a fixed set of rules that will operate satisfactorily over the full range of probable contingencies. Decision Science Applications' value-driven methodology offers a systematic way of automating the intuitive, common-sense approach used by human planners. The inherent responsiveness of value-driven systems to user-controlled priorities makes them particularly suitable for semi-automated applications in which the user must remain in command of the system's operation. Three examples of the practical application of the approach in the automation of hierarchical decision processes are discussed: the TAC Brawler air-to-air combat simulation is a four-level computerized hierarchy; the autonomous underwater vehicle mission planning system is a three-level control system; and the Space Station Freedom electrical power control and scheduling system is designed as a two-level hierarchy. The methodology is compared with rule-based systems and with other more widely known optimization techniques.

  17. Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding

    PubMed Central

    Cui, Laizhong; Lu, Nan; Chen, Fu

    2014-01-01

    Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are as follows: (i) we introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem so that it can be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
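
    The min-cost flow reduction in contribution (ii) can be pictured as a toy assignment of coded segments to neighbor peers, with edge weights modeling expected delay and capacities modeling upload slots. The construction below (using networkx, with invented delays and capacities) is only an analogue of the idea; the paper's actual formulation is richer.

    ```python
    import networkx as nx

    G = nx.DiGraph()
    segments = ["s1", "s2", "s3"]
    peers = {"pA": 2, "pB": 1}            # upload slots per neighbor (invented)
    delay = {("s1", "pA"): 3, ("s1", "pB"): 1,
             ("s2", "pA"): 2, ("s2", "pB"): 4,
             ("s3", "pA"): 1, ("s3", "pB"): 5}

    # source supplies one unit per segment; sink absorbs them through the peers
    G.add_node("src", demand=-len(segments))
    G.add_node("sink", demand=len(segments))
    for s in segments:
        G.add_edge("src", s, capacity=1, weight=0)
    for p, slots in peers.items():
        G.add_edge(p, "sink", capacity=slots, weight=0)
    for (s, p), d in delay.items():
        G.add_edge(s, p, capacity=1, weight=d)   # weight = expected push delay

    flow = nx.min_cost_flow(G)
    assignment = {s: p for s in segments for p in peers if flow[s].get(p, 0)}
    print(assignment)   # e.g. {'s1': 'pB', 's2': 'pA', 's3': 'pA'}
    ```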

  18. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ying; Zhou, Zhi; Botterud, Audun

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.

  19. Designing an optimal software intensive system acquisition: A game theoretic approach

    NASA Astrophysics Data System (ADS)

    Buettner, Douglas John

    The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality-, schedule-, and cost-driven strategies demonstrates that the higher-cost and higher-effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. The game-theoretic reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and on Austin's agency model of the observed phenomena. Game theory concepts are then used to argue that the source of the problem, and hence the solution to developers cutting corners on quality in schedule-driven system acquisitions, ultimately lies with the government. The game theory arguments also lead to the suggestion that a multi-player dynamic Nash bargaining game provides a solution for the observed lack-of-quality game between the government (the acquirer) and "large-corporation" software developers. A note argues that this multi-player dynamic Nash bargaining game also provides a solution to Freeman Dyson's problem of finding a way to place a label of good or bad on systems.

  20. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

    This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients and found optimal flap settings and flap schedules. For validation, the tool was tested on a 55%-scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict the coefficients of lift, drag, and pitching moment, and the lift-to-drag ratio (C(sub L), C(sub D), C(sub M), and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and to find optimal flap schedules.

  1. Scheduling Independent Partitions in Integrated Modular Avionics Systems

    PubMed Central

    Du, Chenglie; Han, Pengcheng

    2016-01-01

    Recently, the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture achieves effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under worst-case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We first present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then, with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the proposed approach in terms of time consumption and acceptance ratio. PMID:27942013
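
    A crude stand-in for the maximum scaling factor computation is a binary search over a uniform scaling of partition utilizations combined with first-fit packing onto processors; this is not the paper's exact formulation, and the utilization numbers below are invented.

    ```python
    def first_fit(partitions, n_proc, scale):
        """Try to place scaled partition utilizations onto n_proc processors."""
        bins = [0.0] * n_proc
        for u in sorted(partitions, reverse=True):
            for i in range(n_proc):
                if bins[i] + u * scale <= 1.0:
                    bins[i] += u * scale
                    break
            else:
                return False          # some partition did not fit anywhere
        return True

    def max_scaling_factor(partitions, n_proc, tol=1e-4):
        """Binary-search the largest uniform scaling of partition utilizations
        that still packs onto the given processors."""
        lo, hi = 0.0, n_proc / sum(partitions)   # total-utilization bound caps the answer
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if first_fit(partitions, n_proc, mid):
                lo = mid
            else:
                hi = mid
        return lo

    print(round(max_scaling_factor([0.9, 0.9, 0.9], n_proc=2), 3))  # ~0.556
    ```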

  2. Fault-tolerant bandwidth reservation strategies for data transfers in high-performance networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Liudong; Zhu, Michelle M.; Wu, Chase Q.

    2016-11-22

    Many next-generation e-science applications need fast and reliable transfer of large volumes of data with guaranteed performance, which is typically enabled by the bandwidth reservation service in high-performance networks. One prominent issue in such network environments with large footprints is that node and link failures are inevitable, potentially degrading the quality of data transfer. We consider two generic types of bandwidth reservation requests (BRRs) concerning data transfer reliability: (i) to achieve the highest data transfer reliability under a given data transfer deadline, and (ii) to achieve the earliest data transfer completion time while satisfying a given data transfer reliability requirement. We propose two periodic bandwidth reservation algorithms with rigorous optimality proofs to optimize the scheduling of individual BRRs within BRR batches. The efficacy of the proposed algorithms is illustrated through extensive simulations in comparison with scheduling algorithms widely adopted in production networks in terms of various performance metrics.

  3. Determining optimal selling price and lot size with process reliability and partial backlogging considerations

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsu-Pang; Cheng, Mei-Chuan; Dye, Chung-Yuan; Ouyang, Liang-Yuh

    2011-01-01

    In this article, we extend the classical economic production quantity (EPQ) model by proposing imperfect production processes and quality-dependent unit production cost. The demand rate is described by any convex decreasing function of the selling price. In addition, we allow for shortages and a time-proportional backlogging rate. For any given selling price, we first prove that the optimal production schedule not only exists but also is unique. Next, we show that the total profit per unit time is a concave function of price when the production schedule is given. We then provide a simple algorithm to find the optimal selling price and production schedule for the proposed model. Finally, we use a couple of numerical examples to illustrate the algorithm and conclude this article with suggestions for possible future research.

  4. A new multi-objective optimization model for preventive maintenance and replacement scheduling of multi-component systems

    NASA Astrophysics Data System (ADS)

    Moghaddam, Kamran S.; Usher, John S.

    2011-07-01

    In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while simultaneously minimizing the total cost and maximizing overall system reliability over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.

  5. The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param

    2016-01-01

    Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.

  6. ROBUS-2: A Fault-Tolerant Broadcast Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.

    2005-01-01

    The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. The ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, clock synchronization, and distributed diagnosis (group membership). The ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 is tolerant to internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. This version of the ROBUS is intended for laboratory experimentation and demonstrations of the capability to reintegrate failed nodes, dynamically update the communication schedule, and tolerate and recover from correlated transient faults.

  7. The Value of Weather Forecast in Irrigation

    NASA Astrophysics Data System (ADS)

    Cai, X.; Wang, D.

    2007-12-01

    This paper studies irrigation scheduling (when and how much water to apply during the crop growth season) in the Havana Lowlands region, Illinois, using meteorological, agronomic, and agricultural production data from 2002. Irrigation scheduling determines the timing and amount of water applied to an irrigated cropland during the crop growing season. In this study, a hydrologic-agronomic simulation is coupled with an optimization algorithm to search for the optimal irrigation schedule under various weather forecast horizons. The economic profit of irrigated corn from the optimized scheduling is compared to that from the actual schedule, which is adopted from a previous study. Extended and reliable climate predictions and weather forecasts are found to be significantly valuable. If a weather forecast horizon is long enough to include the critical crop growth stage, in which crop yield bears the maximum loss over all stages, much economic loss can be avoided. Climate predictions of one to two months, which can cover the critical period, might be even more beneficial during a dry year. The other purpose of this paper is to analyze farmers' behavior in irrigation scheduling by comparing the actual schedule to the optimized ones. The ultimate goal of irrigation schedule optimization is to provide information to farmers so that they may modify their behavior. In practice, farmers' decisions may not follow an optimal irrigation schedule due to the impact of various factors such as natural conditions, policies, farmers' habits and empirical knowledge, and the uncertain or inexact information that they receive. This study finds that the identification of the crop growth stage with the most severe water stress is critical for irrigation scheduling. For the case study site in 2002, farmers' response to water stress was found to be late; they did not even respond appropriately to a major rainfall just three days ahead, which might be due to either an unreliable weather forecast or farmers ignoring the forecast.

  8. OGUPSA sensor scheduling architecture and algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhixiong; Hintz, Kenneth J.

    1996-06-01

    This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are used successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, and it generates an optimal schedule in the sense of minimum makespan for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
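
    The three policies can be condensed into a single tie-breaking key, sketched below. This paraphrases only the policy cascade; the task and sensor structures are invented, and the real OGUPSA also handles preemption and deadline-miss detection.

    ```python
    def ogupsa_pick(tasks, sensors, now):
        """Tie-breaking cascade from the abstract: most-urgent-first, then
        earliest-completed-first, then least-versatile-first (tasks usable by
        fewer sensors are scheduled before flexible ones)."""
        def completion(t):
            return now + t["duration"]
        def versatility(t):
            return sum(1 for s in sensors if t["type"] in s["can_measure"])
        return min(tasks, key=lambda t: (-t["urgency"], completion(t), versatility(t)))

    sensors = [{"id": 0, "can_measure": {"radar", "ir"}},
               {"id": 1, "can_measure": {"ir"}}]
    tasks = [{"name": "track-1", "type": "radar", "urgency": 2, "duration": 5.0},
             {"name": "scan-7",  "type": "ir",    "urgency": 2, "duration": 5.0}]
    print(ogupsa_pick(tasks, sensors, now=0.0)["name"])   # 'track-1' (less versatile)
    ```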

  9. Heimdall System for MSSS Sensor Tasking

    NASA Astrophysics Data System (ADS)

    Herz, A.; Jones, B.; Herz, E.; George, D.; Axelrad, P.; Gehly, S.

    In Norse Mythology, Heimdall uses his foreknowledge and keen eyesight to keep watch for disaster from his home near the Rainbow Bridge. Orbit Logic and the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado (CU) have developed the Heimdall System to schedule observations of known and uncharacterized objects and search for new objects from the Maui Space Surveillance Site. Heimdall addresses the current need for automated and optimized SSA sensor tasking driven by factors associated with improved space object catalog maintenance. Orbit Logic and CU developed an initial baseline prototype SSA sensor tasking capability for select sensors at the Maui Space Surveillance Site (MSSS) using STK and STK Scheduler, and then added a new Track Prioritization Component for FiSST-inspired computations for predicted Information Gain and Probability of Detection, and a new SSA-specific Figure-of-Merit (FOM) for optimized SSA sensor tasking. While the baseline prototype addresses automation and some of the multi-sensor tasking optimization, the SSA-improved prototype addresses all of the key elements required for improved tasking leading to enhanced object catalog maintenance. The Heimdall proof-of-concept was demonstrated for MSSS SSA sensor tasking for a 24 hour period to attempt observations of all operational satellites in the unclassified NORAD catalog, observe a small set of high priority GEO targets every 30 minutes, make a sky survey of the GEO belt region accessible to MSSS sensors, and observe particular GEO regions that have a high probability of finding new objects with any excess sensor time. This Heimdall prototype software paves the way for further R&D that will integrate this technology into the MSSS systems for operational scheduling, improve the software's scalability, and further tune and enhance schedule optimization. The Heimdall software for SSA sensor tasking provides greatly improved performance over manual tasking, improved coordinated sensor usage, and tasking schedules driven by catalog improvement goals (reduced overall covariance, etc.). The improved performance also enables more responsive sensor tasking to address external events, newly detected objects, newly detected object activity, and sensor anomalies. Instead of having to wait until the next day's scheduling phase, events can be addressed with new tasking schedules immediately (within seconds or minutes). Perhaps the most important benefit is improved SSA based on an overall improvement to the quality of the space catalog. By driving sensor tasking and scheduling based on predicted Information Gain and other relevant factors, better decisions are made in the application of available sensor resources, leading to an improved catalog and better information about the objects of most interest. The Heimdall software solution provides a configurable, automated system to improve sensor tasking efficiency and responsiveness for SSA applications. The FISST algorithms for Track Prioritization, SSA specific task and resource attributes, Scheduler algorithms, and configurable SSA-specific Figure-of-Merit together provide optimized and tunable scheduling for the Maui Space Surveillance Site and possibly other sites and organizations across the U.S. military and for allies around the world.

  10. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.

  11. Smart EV Energy Management System to Support Grid Services

    NASA Astrophysics Data System (ADS)

    Wang, Bin

    Under smart grid scenarios, advanced sensing and metering technologies have been applied to the legacy power grid to improve system observability and real-time situational awareness. Meanwhile, an increasing amount of distributed energy resources (DERs), such as renewable generation, electric vehicles (EVs), and battery energy storage systems (BESS), are being integrated into the power system. However, the integration of EVs, which can be modeled as controllable mobile energy devices, brings both challenges and opportunities to grid planning and energy management, due to the intermittency of renewable generation, uncertainties of EV driver behaviors, etc. This dissertation aims to solve the real-time EV energy management problem in order to improve overall grid efficiency, reliability, and economics, using online and predictive optimization strategies. Most previous research on EV energy management strategies and algorithms is based on simplified models with the unrealistic assumption that EV charging behaviors are perfectly known or follow known distributions, covering the arrival time, departure time, and energy consumption values. These approaches fail to obtain optimal solutions in real time because of the system uncertainties. Moreover, there is a lack of data-driven strategies that perform online and predictive scheduling of EV charging behaviors under microgrid scenarios. Therefore, we develop an online predictive EV scheduling framework, considering uncertainties of renewable generation, building load, and EV driver behaviors, based on real-world data. A kernel-based estimator is developed to predict the charging session parameters in real time with improved estimation accuracy. The efficacy of various optimization strategies supported by this framework, including valley-filling, cost reduction, and event-based control, has been demonstrated. In addition, existing simulation-based approaches do not consider a variety of practical concerns in implementing such a smart EV energy management system, including driver preferences, communication protocols, data models, and customized integration of existing standards to provide grid services. Therefore, this dissertation also addresses these issues by designing and implementing a scalable system architecture to capture user preferences, enable multi-layer communication and control, and ultimately improve system reliability and interoperability.
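
    The dissertation does not spell out the estimator here, but a kernel-based session predictor can be sketched as plain Nadaraya-Watson regression: the predicted session energy is a kernel-weighted average of past sessions with similar arrival times. The data values and bandwidth below are invented.

    ```python
    import math

    def nw_predict(history, arrival_hr, bandwidth=1.5):
        """Nadaraya-Watson kernel regression: predict session energy (kWh)
        from arrival hour using a Gaussian kernel over past sessions."""
        num = den = 0.0
        for past_arrival, energy in history:
            w = math.exp(-((arrival_hr - past_arrival) / bandwidth) ** 2 / 2)
            num += w * energy
            den += w
        return num / den if den else None

    history = [(8.0, 9.5), (8.5, 10.2), (9.0, 8.8), (13.0, 4.1), (13.5, 3.8)]
    print(round(nw_predict(history, 8.2), 2))    # morning commuters: large sessions
    print(round(nw_predict(history, 13.2), 2))   # midday top-ups: small sessions
    ```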

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.; Britt, J.; Birkmire, R.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.

  13. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new, efficient simplified LOLP formulations and new SR optimization models.
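
    For context, the LOLP of a single scheduling period can be computed exactly by enumerating generator outage states, which is precisely what becomes intractable inside a UC model as the state space grows as 2^n. A brute-force sketch on an invented four-unit system:

    ```python
    from itertools import product

    def lolp(units, load):
        """Exact loss-of-load probability by enumerating unit outage states.
        Each unit is (capacity_MW, forced_outage_rate)."""
        p_loss = 0.0
        for state in product([0, 1], repeat=len(units)):     # 1 = unit available
            p = 1.0
            cap = 0.0
            for up, (c, q) in zip(state, units):
                p *= (1 - q) if up else q
                cap += c if up else 0.0
            if cap < load:
                p_loss += p
        return p_loss

    units = [(200, 0.05), (150, 0.04), (100, 0.06), (100, 0.06)]
    print(lolp(units, load=400))   # probability that committed capacity falls short
    ```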

  14. Multi-objective group scheduling optimization integrated with preventive maintenance

    NASA Astrophysics Data System (ADS)

    Liao, Wenzhu; Zhang, Xiufang; Jiang, Min

    2017-11-01

    This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe production for identical/similar and for different jobs, the integrated model considers learning and forgetting effects. The deterioration effect arising from machine degradation is also considered. Moreover, perfect maintenance and minimal repair are adopted in the integrated model. A multi-objective formulation minimizing total completion time and maintenance cost is adopted to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve the optimization model, and the computational results demonstrate that the integrated model is effective and reliable.

  15. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Thus, current studies on the JSSP concentrate mainly on applying different methods of improving heuristics for optimizing the JSSP. However, there still exist many obstacles to efficient optimization of the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. Therefore, to solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints by considering consistency technology and a constraint-spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
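
    To show the pheromone mechanics that ACO brings to scheduling, here is a compact ant colony optimizer for a toy sequencing problem (minimizing total weighted completion time on one machine). A full JSSP solver needs a disjunctive-graph construction on top of this; the instance and parameters below are invented.

    ```python
    import random

    def aco_sequence(costs, n_ants=20, iters=100, rho=0.1, Q=1.0):
        """Compact ACO for job sequencing: tau[i][j] is the pheromone on
        placing job j at position i; the cost of a sequence is its total
        weighted completion time."""
        n = len(costs)
        tau = [[1.0] * n for _ in range(n)]
        best_seq, best_cost = None, float("inf")
        for _ in range(iters):
            solutions = []
            for _ in range(n_ants):
                remaining, seq = list(range(n)), []
                for pos in range(n):                 # build a sequence stochastically
                    weights = [tau[pos][j] for j in remaining]
                    j = random.choices(remaining, weights=weights)[0]
                    seq.append(j)
                    remaining.remove(j)
                t, cost = 0.0, 0.0
                for j in seq:                        # weighted completion time
                    t += costs[j][0]
                    cost += costs[j][1] * t
                solutions.append((cost, seq))
                if cost < best_cost:
                    best_cost, best_seq = cost, seq
            for i in range(n):                       # evaporation
                for j in range(n):
                    tau[i][j] *= (1 - rho)
            for cost, seq in solutions:              # reinforcement
                for pos, j in enumerate(seq):
                    tau[pos][j] += Q / cost
        return best_seq, best_cost

    jobs = [(3, 1), (1, 4), (2, 2), (4, 3)]          # (processing time, weight)
    print(aco_sequence(jobs))
    ```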

  16. Operation and planning of coordinated natural gas and electricity infrastructures

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaping

    Natural gas is rapidly becoming the fuel of choice for new generating units in the electric power system, driven by abundant natural gas supplies and environmental regulations that are expected to cause coal-fired generation retirements. The growing reliance on natural gas as a dominant fuel for electricity generation throughout North America has brought the interaction between the natural gas and power grids into sharp focus. The primary concern and motivation of this research is to address the emerging interdependency issues faced by the electric power and natural gas industries. This thesis provides a comprehensive analysis of the interactions between the two systems regarding short-term operation and long-term infrastructure planning. Natural gas and renewable energy appear complementary in many respects regarding fuel price and availability, environmental impact, resource distribution, and dispatchability. In addition, demand response holds the promise of making a significant contribution to system operations by providing incentives to customers for a flatter load profile. We investigated the coordination between natural gas-fired generation and nontraditional resources, including renewable energy and demand response, to provide economical options for optimizing short-term scheduling under tight natural gas delivery constraints. As the amount and dispatch of gas-fired generation increase, the long-term interdependency issue is whether there is adequate pipeline capacity to supply sufficient gas to gas-fired generation over the entire planning horizon while gas is widely used outside the power sector. This thesis developed a co-optimization planning model that incorporates the natural gas transportation system into the multi-year resource and transmission system planning problem, providing more comprehensive investment decisions and a more accurate assessment of system adequacy and reliability. With the growing reliance on natural gas and the widespread utilization of highly efficient combined heat and power (CHP), it is also questionable whether independently designed infrastructures can meet the potential challenges of future energy supply. To address this issue, this thesis proposed an optimization framework for sustainable multiple-energy-system expansion planning based on an energy hub model, considering energy efficiency, emissions, and reliability performance. In addition, we introduced probabilistic reliability evaluation and flow network analysis into the multiple energy system design in order to obtain an optimal and reliable network topology.

  17. Kinship-based politics and the optimal size of kin groups

    PubMed Central

    Hammel, E. A.

    2005-01-01

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures. PMID:16091466

  18. Kinship-based politics and the optimal size of kin groups.

    PubMed

    Hammel, E A

    2005-08-16

    Kin form important political groups, which change in size and relative inequality with demographic shifts. Increases in the rate of population growth increase the size of kin groups but decrease their inequality and vice versa. The optimal size of kin groups may be evaluated from the marginal political product (MPP) of their members. Culture and institutions affect levels and shapes of MPP. Different optimal group sizes, from different perspectives, can be suggested for any MPP schedule. The relative dominance of competing groups is determined by their MPP schedules. Groups driven to extremes of sustainability may react in Malthusian fashion, including fission and fusion, or in Boserupian fashion, altering social technology to accommodate changes in size. The spectrum of alternatives for actors and groups, shaped by existing institutions and natural and cultural selection, is very broad. Nevertheless, selection may result in survival of particular kinds of political structures.

  19. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    The need to improve load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be involved. These constraints include the congestion margin determined by weighted contingency probabilities, a component/system reliability index, and generation rescheduling. The impacts of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is determined based on the priority, value, and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended the Everett method to handle the expected congestion margin and the reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is embedded in the optimization method. It assists in selecting which feeder load to shed, along with the load's location, value, and priority; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads, and a network.

  20. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    NASA Astrophysics Data System (ADS)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival, and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems, and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternate method of modeling the integrated operations using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems of finite sample size. The developed mixed-integer linear programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is performed for the Los Angeles environment, and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology's performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted, first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods are implemented that offer different routing options. The simulations of integrated air operations and of integrated air and surface operations demonstrate that significant travel time savings, in both total and individual surface and air times, can be obtained when more direct routes are allowed to be traveled, even in the presence of uncertainty. However, the resulting routings induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which causes runway position shifting when comparing runway sequences computed under deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.

  1. Robust optimisation-based microgrid scheduling with islanding constraints

    DOE PAGES

    Liu, Guodong; Starke, Michael; Xiao, Bailu; ...

    2017-02-17

    This paper proposes a robust-optimization-based optimal scheduling model for microgrid operation considering constraints on islanding capability. Our objective is to minimize the total operation cost, including the generation cost and spinning reserve cost of local resources as well as the cost of purchasing energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, and a battery demonstrate the effectiveness of the proposed scheduling model.
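
    The islanding requirement can be read as a pair of reserve inequalities. A minimal deterministic check, ignoring the robust uncertainty sets and using invented names and units, might look like:

      def islanding_feasible(dispatch, g_max, g_min, load, renewable):
          # If the tie line trips, local units must instantly cover the current
          # net import (up-reserve) or absorb the current net export (down-reserve).
          up_reserve = sum(mx - g for g, mx in zip(dispatch, g_max))
          down_reserve = sum(g - mn for g, mn in zip(dispatch, g_min))
          net_import = load - renewable - sum(dispatch)  # MW flowing in from grid
          if net_import >= 0:
              return up_reserve >= net_import
          return down_reserve >= -net_import

      # One hour, two local units dispatched at 30 and 20 MW (hypothetical data).
      print(islanding_feasible([30, 20], g_max=[60, 40], g_min=[10, 5],
                               load=80, renewable=15))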

  2. Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Huang, Rui; Wang, Yubo

    2016-05-02

    Un-coordinated Electric Vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behavior, and other base load in the distribution grid is one of the challenges that impede optimal control of EV charging. Previous research did not fully solve this problem, due to the lack of real-world EV charging data and of proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program computes optimal schedules given the estimated parameters. Only the first element of the algorithm's output is implemented, in accordance with the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated on real-world EV charging data collected on the UCLA campus, and the experimental results indicate that our proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is used to standardize the data models involved, which supports more reliable, large-scale implementation.
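
    The receding-horizon structure (estimate, optimize over a horizon, apply only the first element) can be sketched with a greedy valley-filling stand-in for the predictive optimization module; the data, the 0.1 kW increment, and the implied one-hour intervals are hypothetical.

      import numpy as np

      def mpc_ev_step(base_forecast, energy_needed, p_max):
          # One receding-horizon step: flatten (base + EV) load over the
          # horizon, then apply only the first interval, per the MPC paradigm.
          horizon = len(base_forecast)
          plan = np.zeros(horizon)
          total = base_forecast.astype(float)
          remaining = energy_needed
          while remaining > 1e-9:
              t = int(np.argmin(total))                   # deepest valley first
              inc = min(p_max - plan[t], remaining, 0.1)  # small increments
              if inc <= 1e-12:
                  total[t] = np.inf                       # slot full; exclude it
                  if np.isinf(total).all():
                      break                               # demand exceeds capacity
                  continue
              plan[t] += inc
              total[t] += inc
              remaining -= inc
          return plan[0]

      # Estimated base load (kW) for the next four intervals; 2 kWh still needed.
      print(mpc_ev_step(np.array([3.0, 2.0, 2.5, 4.0]),
                        energy_needed=2.0, p_max=1.5))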

  3. Advanced Vehicle and Power Initiative

    DTIC Science & Technology

    2010-07-29

    [Fragmented DTIC abstract snippet; recoverable content:] ... optimize vehicle operation, and capture vehicle kinetic energy during braking (regenerative energy). As much as two-thirds of this imported oil comes ... Figure 4 provides a visual representation of many of the HEV and BEV options available on the 2010 GSA Schedule. Reported metrics include: renewable energy generated; vehicle miles driven by vehicle category; and implementation costs (infrastructure modifications required).

  4. Algorithm of composing the schedule of construction and installation works

    NASA Astrophysics Data System (ADS)

    Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr

    2017-10-01

    An algorithm for scheduling works is developed in which the priority of a work corresponds to the total weight of its subordinate works (the vertices of the graph), and it is proved that for tree-type graphs the algorithm is optimal. An algorithm is synthesized to reduce the search for solutions when drawing up schedules of construction and installation works, by allocating a subset of minimum power containing the optimal solution, determined by the structure of the initial data and its numerical values. An algorithm for scheduling construction and installation work is developed that takes into account the schedule of brigade movements; it can efficiently minimize the work completion time with respect to the parameters of organizational and technological reliability through the branch-and-bound method. The computational algorithm was implemented in MATLAB 2008. The initial data matrices were filled with random numbers uniformly distributed between 1 and 100. Solving the problem takes 0.5, 2.5, 7.5, or 27 minutes, depending on the instance. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax problem of scheduling construction and installation works.
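
    For tree-type precedence graphs, the priority rule described above (the priority of a work equals the total weight of its subordinate works) reduces to a subtree weight sum followed by a list schedule. A small sketch with invented weights:

      def subtree_priorities(children, weight):
          # Priority of each work = total weight of its subtree
          # (the work itself plus all subordinate works).
          prio = {}
          def total(v):
              if v not in prio:
                  prio[v] = weight[v] + sum(total(c) for c in children.get(v, []))
              return prio[v]
          for v in weight:
              total(v)
          return prio

      # Toy precedence tree: work 0 precedes 1 and 2; work 1 precedes 3.
      children = {0: [1, 2], 1: [3]}
      weight = {0: 2, 1: 4, 2: 1, 3: 3}
      prio = subtree_priorities(children, weight)

      # List schedule: among ready works, start the highest-priority one first.
      parent = {c: p for p, cs in children.items() for c in cs}
      done, order = set(), []
      while len(order) < len(weight):
          ready = [v for v in weight
                   if v not in done and parent.get(v) in done | {None}]
          v = max(ready, key=prio.get)
          order.append(v)
          done.add(v)
      print(prio, order)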

  5. Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

    PubMed Central

    Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.

    2015-01-01

    A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality-of-experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show consistently high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric-networking philosophy and architecture. PMID:26247057

  6. Environment-Aware Production Scheduling for Paint Shops in Automobile Manufacturing: A Multi-Objective Optimization Approach

    PubMed Central

    Zhang, Rui

    2017-01-01

    The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars. PMID:29295603

  7. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, considering that the harvested energy from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
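
    The offline building-block relationship described above can be illustrated for a single stream under a convex power-rate curve: binary-search the completion time, solving the min-energy subproblem at each trial deadline. The quadratic power model and the constant-rate optimality argument are standard simplifying assumptions here, not the paper's full multi-request algorithm.

      def min_energy(T, bits, power):
          # Transmitting at a constant rate is optimal for a single stream when
          # power(rate) is convex (Jensen's inequality), so the minimum energy
          # for deadline T has this closed form.
          rate = bits / T
          return power(rate) * T

      def min_completion_time(bits, energy_budget, power, hi=1e6, eps=1e-6):
          # Binary search for the smallest deadline whose minimum energy fits
          # the budget, mirroring the min-energy building-block structure.
          lo = eps
          while hi - lo > eps:
              mid = (lo + hi) / 2.0
              if min_energy(mid, bits, power) <= energy_budget:
                  hi = mid
              else:
                  lo = mid
          return hi

      # Quadratic power model p(r) = r**2 as a stand-in for the true curve.
      print(min_completion_time(bits=10.0, energy_budget=50.0,
                                power=lambda r: r * r))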

  8. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, considering that the harvested energy from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices in transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  9. A pragmatic decision model for inventory management with heterogeneous suppliers

    NASA Astrophysics Data System (ADS)

    Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa

    2018-05-01

    For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of the inventory system.

  10. Optimizing Hydropower Day-Ahead Scheduling for the Oroville-Thermalito Project

    NASA Astrophysics Data System (ADS)

    Veselka, T. D.; Mahalik, M.

    2012-12-01

    Under an award from the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Water Power Program, a team of national laboratories is developing and demonstrating a suite of advanced, integrated analytical tools to assist managers and planners increase hydropower resources while enhancing the environment. As part of the project, Argonne National Laboratory is developing the Conventional Hydropower Energy and Environmental Systems (CHEERS) model to optimize day-ahead scheduling and real-time operations. We will present the application of CHEERS to the Oroville-Thermalito Project located in Northern California. CHEERS will aid California Department of Water Resources (CDWR) schedulers in making decisions about unit commitments and turbine-level operating points using a system-wide approach to increase hydropower efficiency and the value of power generation and ancillary services. The model determines schedules and operations that are constrained by physical limitations, characteristics of plant components, operational preferences, reliability, and environmental considerations. The optimization considers forebay and afterbay implications, interactions between cascaded power plants, turbine efficiency curves and rough zones, and operator preferences. CHEERS simultaneously considers over time the interactions among all CDWR power and water resources, hydropower economics, reservoir storage limitations, and a set of complex environmental constraints for the Thermalito Afterbay and Feather River habitats. Power marketers, day-ahead schedulers, and plant operators provide system configuration and detailed operational data, along with feedback on model design and performance. CHEERS is integrated with CDWR data systems to obtain historic and initial conditions of the system as the basis from which future operations are then optimized. Model results suggest alternative operational regimes that improve the value of CDWR resources to the grid while enhancing the environment and complying with water delivery obligations for non-power uses.

  11. Solving a real-world problem using an evolving heuristically driven schedule builder.

    PubMed

    Hart, E; Ross, P; Nelson, J

    1998-01-01

    This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.

    ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements in flexible CIGS PV module performance and efficiency.

  13. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications, from environment monitoring to event detection. The key challenge is to provide energy-efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy-efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is an adaptive, low-latency, energy-efficient MAC for data gathering that reduces sleep latency. We propose a staggered schedule, duty-cycle adaptation, data prediction, and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contention. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst-case end-to-end latency (referred to as the delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for tree and ring topologies, and give heuristics for arbitrary topologies. The third study addresses the problem of minimum-latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved with an M node-disjoint paths algorithm under a multiple-channel model. We further extend the algorithm to handle dynamic traffic changes and topology changes, and propose a heuristic solution for MLSR under single-channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control, and present both exponential- and polynomial-complexity solutions. We then investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach that converges rapidly to the optimal parameter settings.
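
    Of the four studies, DESS has the most compact combinatorial core: with one wake-up slot per node in a period of k slots, the per-hop delay is the modular slot gap, and the delay diameter is the worst shortest-path delay over all node pairs. A small sketch follows, adopting the convention (assumed here) that an equal slot costs a full period:

      def delay_diameter(adj, slot, k):
          # A packet at u headed to neighbor v waits (slot[v] - slot[u]) mod k
          # slots (k = schedule period). Floyd-Warshall on this delay graph
          # gives all shortest delays; the diameter is their maximum.
          n = len(adj)
          INF = float("inf")
          d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
          for u in range(n):
              for v in adj[u]:
                  d[u][v] = (slot[v] - slot[u]) % k or k  # same slot: full period
          for m in range(n):
              for i in range(n):
                  for j in range(n):
                      if d[i][m] + d[m][j] < d[i][j]:
                          d[i][j] = d[i][m] + d[m][j]
          return max(d[i][j] for i in range(n) for j in range(n) if i != j)

      # 4-node ring, period k = 4, slots staggered around the ring.
      adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
      print(delay_diameter(adj, slot=[0, 1, 2, 3], k=4))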

  14. A reliable data collection/control system

    NASA Technical Reports Server (NTRS)

    Maughan, Thom

    1988-01-01

    The Cal Poly Space Project requires a data collection/control system which must be able to reliably record temperature, pressure and vibration data. It must also schedule the 16 electroplating and 2 immiscible alloy experiments so as to optimize use of the batteries, maintain a safe package temperature profile, and run the experiment during conditions of microgravity (and minimum vibration). This system must operate unattended in the harsh environment of space and consume very little power due to limited battery supply. The design of a system which meets these requirements is addressed.

  15. A Near-Term, High-Confidence Heavy Lift Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Rothschild, William J.; Talay, Theodore A.

    2009-01-01

    The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.

  16. RSM 1.0 user's guide: A resupply scheduler using integer optimization

    NASA Technical Reports Server (NTRS)

    Viterna, Larry A.; Green, Robert D.; Reed, David M.

    1991-01-01

    The Resupply Scheduling Model (RSM) is a PC-based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user-defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes properties such as mass and volume and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.

  17. The operations of quantum logic gates with pure and mixed initial states.

    PubMed

    Chen, Jun-Liang; Li, Che-Ming; Hwang, Chi-Chuan; Ho, Yi-Hui

    2011-04-07

    The implementations of quantum logic gates realized by the rovibrational states of a ¹²C¹⁶O molecule in the X¹Σ⁺ electronic ground state are investigated. Optimal laser fields are obtained using the modified multitarget optimal control theory (MTOCT), which combines the maxima of the cost functional and the fidelity for state and quantum process. The projection operator technique, together with the modified MTOCT, is used to obtain optimal laser fields. If the initial states of the quantum gate are pure, the states at the target time closely approach the ideal target states. However, if the initial states are mixed, the target states do not approach the ideal ones as well. The process fidelity is introduced to investigate the reliability of the quantum gate operation driven by the optimal laser field. We found that the quantum gates operate reliably whether the initial states are pure or mixed.

  18. Directions in propulsion control

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1990-01-01

    Discussed here is research at NASA Lewis in the area of propulsion controls as driven by trends in advanced aircraft. The objective of the Lewis program is to develop the technology for advanced reliable propulsion control systems and to integrate the propulsion control with the flight control for optimal full-system control.

  19. Optimizing Automatic Deployment Using Non-functional Requirement Annotations

    NASA Astrophysics Data System (ADS)

    Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin

    Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.

  20. Science Goal Driven Observing and Spacecraft Autonomy

    NASA Technical Reports Server (NTRS)

    Koratkar, Amuradha; Grosvenor, Sandy; Jones, Jeremy; Wolf, Karl

    2002-01-01

    Spacecraft autonomy will be an integral part of mission operations in the coming decade. While recent missions have made great strides in the ability to autonomously monitor and react to changing health and physical status of spacecraft, little progress has been made in responding quickly to science driven events. For observations of inherently variable targets and targets of opportunity, the ability to recognize early if an observation will meet the science goals of a program, and react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the onboard software can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download a subset of observations at a reduced resolution, the spacecraft's overall efficiency will be dramatically increased. The science goal monitoring (SGM) system is a proof-of-concept effort to address the above challenge. The SGM will have an interface to help capture higher level science goals from the scientists and translate them into a flexible observing strategy that SGM can execute and monitor. We are developing an interactive distributed system that will use on-board processing and storage combined with event-driven interfaces with ground-based processing and operations, to enable fast re-prioritization of observing schedules, and to minimize time spent on non-optimized observations.

  1. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in restructured power systems, considering both security and reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme that maximizes social welfare. The proposed optimization problem includes maximum GENCO and load profits as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing optimal load shedding that satisfies social welfare while maintaining the voltage stability margin (VSM), through technoeconomic analyses.
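
    A generic PSO skeleton of the kind used to search candidate shedding schemes is sketched below; the toy quadratic objective stands in for the technoeconomic multi-objective function, and every parameter is illustrative.

      import random

      def pso(f, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0):
          # Standard PSO: each particle tracks its personal best, the swarm
          # tracks a global best, and velocities blend inertia with pulls
          # toward both bests.
          x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
          v = [[0.0] * dim for _ in range(n)]
          pbest = [xi[:] for xi in x]
          pval = [f(xi) for xi in x]
          g = pbest[pval.index(min(pval))][:]
          for _ in range(iters):
              for i in range(n):
                  for d in range(dim):
                      r1, r2 = random.random(), random.random()
                      v[i][d] = (w * v[i][d]
                                 + c1 * r1 * (pbest[i][d] - x[i][d])
                                 + c2 * r2 * (g[d] - x[i][d]))
                      x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
                  val = f(x[i])
                  if val < pval[i]:
                      pbest[i], pval[i] = x[i][:], val
                      if val < f(g):
                          g = x[i][:]
          return g, f(g)

      # Toy objective: per-feeder shed fractions x penalized around a target.
      cost = lambda x: sum((xi - 0.3) ** 2 for xi in x)
      print(pso(cost, dim=3))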

  2. An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.

    PubMed

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2012-12-27

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty cycle, a high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed for extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual-priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks, based on their serialization (using their worst-case execution times) and on power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup-radio and wakeup-timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices.
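
    A minimal sketch of non-pre-emptible dual-priority dispatch follows: two priority bands, run-to-completion execution, and idle gaps that become plannable sleep windows. The task set, band encoding, and time units are invented for illustration.

      import heapq

      def dual_priority_run(tasks, horizon):
          # tasks: (release, band, wcet, name); band 0 = urgent periodic,
          # band 1 = other. A started task always runs to completion
          # (non-preemptible), which keeps the schedule predictable and lets
          # sleep intervals be planned exactly from worst-case execution times.
          pending = sorted(tasks)              # ordered by release time
          ready, log, t, i = [], [], 0, 0
          while t < horizon and (ready or i < len(pending)):
              while i < len(pending) and pending[i][0] <= t:
                  r, band, wcet, name = pending[i]
                  i += 1
                  heapq.heappush(ready, (band, r, wcet, name))  # band first
              if not ready:
                  t = pending[i][0]            # idle gap: a known sleep window
                  continue
              band, r, wcet, name = heapq.heappop(ready)
              log.append((t, name))
              t += wcet                        # run to completion
          return log

      print(dual_priority_run(
          [(0, 1, 3, "log"), (1, 0, 2, "beat"), (2, 0, 2, "rx")], horizon=20))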

  3. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    PubMed Central

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2013-01-01

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty cycle, a high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed for extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual-priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks, based on their serialization (using their worst-case execution times) and on power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup-radio and wakeup-timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices. PMID:23271602

  4. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institution are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  5. Self-balancing dynamic scheduling of electrical energy for energy-intensive enterprises

    NASA Astrophysics Data System (ADS)

    Gao, Yunlong; Gao, Feng; Zhai, Qiaozhu; Guan, Xiaohong

    2013-06-01

    Balancing production and consumption with self-generation capacity in energy-intensive enterprises has huge economic and environmental benefits. However, it is a challenging task, since energy production and consumption must be balanced in real time according to the criteria specified by the power grid. In this article, a mathematical model for minimising the production cost with an exactly realisable energy delivery schedule is formulated, and a dynamic programming (DP)-based self-balancing dynamic scheduling algorithm is developed to obtain the complete solution set for this multiple-optimal-solutions problem. For each stage, a set of conditions is established to determine whether a feasible control trajectory exists. The state space under these conditions is partitioned into subsets, each subset is viewed as an aggregate state, and the cost-to-go function is then expressed as a function of the initial and terminal generation levels of each stage and is proved to be a staircase function with finitely many steps. This avoids calculating the cost-to-go of every state, resolving the issue of dimensionality in the DP algorithm. In the backward sweep of the algorithm, an optimal policy is determined to maximise the realisability of the energy delivery schedule across the entire time horizon. Then, in the forward sweep, the feasible region of the optimal policy with the initial and terminal state at each stage is identified. Different feasible control trajectories can be identified based on this region; optimisation over the feasible control trajectories is then performed on the region, with economic and reliability objectives taken into account.
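
    To convey the stage-wise flavor of the approach, the coarse sketch below runs a DP over a small grid of generation levels with a ramp limit and per-stage delivery targets; the levels, costs, and limits are invented, and the paper's aggregated staircase cost-to-go and realisability conditions are not reproduced.

      def dp_schedule(levels, T, delivery, cost, ramp):
          # Stage-wise DP over discrete generation levels: a level is feasible
          # in stage t only if it meets the delivery target, and transitions
          # are limited by the ramp constraint between consecutive stages.
          INF = float("inf")
          best = {g: (cost(g) if g >= delivery[0] else INF) for g in levels}
          for t in range(1, T):
              nxt = {}
              for g in levels:
                  if g < delivery[t]:
                      nxt[g] = INF
                      continue
                  nxt[g] = cost(g) + min(
                      (best[h] for h in levels if abs(h - g) <= ramp),
                      default=INF)
              best = nxt
          return min(best.values())   # minimum total cost over the horizon

      print(dp_schedule(levels=[0, 50, 100, 150], T=4,
                        delivery=[40, 90, 120, 60],
                        cost=lambda g: 0.1 * g * g, ramp=60))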

  6. Verification of reliability and validity of a Japanese version of the Rathus Assertiveness Schedule.

    PubMed

    Suzuki, Eiko; Kanoya, Yuka; Katsuki, Takeshi; Sato, Chifumi

    2007-07-01

    To verify the reliability and validity of a Japanese version of the Rathus Assertiveness Schedule in novice nurses, as a contribution to nursing management. An adequate scale is needed to measure assertiveness and the effect of assertion training in Japanese nurses, and to compare them with those in other countries. The Rathus Assertiveness Schedule was adapted into Japanese using back-translation, and its validity was examined in 989 novice nurses. The Japanese version showed a high coefficient of reliability in a split-half reliability test (r=0.76; P<0.01). Cronbach's alpha was also high (r=0.84; P<0.01), indicating high internal consistency. Similarity to the concept of stress coping was shown. We extracted eight principal factors using factor analysis with varimax rotation; the elements of these factors were similar to those of the original Rathus Assertiveness Schedule. The reliability and validity of the Japanese version of the Rathus Assertiveness Schedule were thus verified.

  7. Microgrid optimal scheduling considering impact of high penetration wind generation

    NASA Astrophysics Data System (ADS)

    Alanazi, Abdulaziz

    The objective of this thesis is to study the impact of high-penetration wind energy on the economic and reliable operation of microgrids. Wind power is variable, i.e., constantly changing, and nondispatchable, i.e., it cannot be controlled by the microgrid controller. Thus, accurate forecasting of wind power is essential for studying its impact on microgrid operation. Two commonly used forecasting methods, the Autoregressive Integrated Moving Average (ARIMA) and the Artificial Neural Network (ANN), are used in this thesis to improve wind power forecasting. The forecasting error is calculated using the Mean Absolute Percentage Error (MAPE) and is improved using the ANN. The wind forecast is further used in the microgrid optimal scheduling problem. The microgrid optimal scheduling is performed by developing a viable model for security-constrained unit commitment (SCUC) based on mixed-integer linear programming (MILP). The proposed SCUC is solved for various wind penetration levels, and the relationship between the total cost and the wind power penetration is found. In order to reduce microgrid power transfer fluctuations, an additional constraint is proposed and added to the SCUC formulation. The new constraint controls the time-based fluctuations. The impact of the constraint on the microgrid SCUC results is tested and validated with numerical analysis. Finally, the applicability of the proposed models is demonstrated through numerical simulations.
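
    The fluctuation-limiting idea can be written as a pair of linear constraints bounding the step-to-step change of the grid transfer. A minimal sketch with the open-source PuLP package follows; commitment binaries, reserves, and security constraints are omitted, and all data are invented.

      # pip install pulp
      import pulp

      T = 4
      load = [90, 100, 140, 120]                 # MW per period
      wind = [30, 60, 20, 40]                    # forecast wind, MW
      gmax, cost_g, cap = 120, 20.0, 80          # one unit; tie-line capacity
      delta = 15                                 # max transfer change per step

      m = pulp.LpProblem("microgrid_scuc_sketch", pulp.LpMinimize)
      g = [pulp.LpVariable(f"g{t}", 0, gmax) for t in range(T)]
      imp = [pulp.LpVariable(f"imp{t}", -cap, cap) for t in range(T)]
      m += pulp.lpSum(cost_g * g[t] for t in range(T))   # generation cost
      for t in range(T):
          m += g[t] + wind[t] + imp[t] == load[t]        # power balance
          if t > 0:                                      # fluctuation limits
              m += imp[t] - imp[t - 1] <= delta
              m += imp[t - 1] - imp[t] <= delta
      m.solve(pulp.PULP_CBC_CMD(msg=False))
      print([v.value() for v in imp])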

  8. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration /reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities above mentioned, both the size and cost of the ground-station terminals have to be reduced by using reliable, high-throughput, fast and cost-effective on-board computing system which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing a fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance all of which interacting through a centered graphical user interface.

  9. Real Time Energy Management Control Strategies for Hybrid Powertrains

    NASA Astrophysics Data System (ADS)

    Zaher, Mohamed Hegazi Mohamed

    In order to improve fuel efficiency and reduce emissions of mobile vehicles, various hybrid powertrain concepts have been developed over the years. This thesis focuses on embedded control of hybrid powertrain concepts for mobile vehicle applications. An optimal robust control approach is used to develop a real-time energy management strategy for continuous operations. The main idea is to store normally wasted mechanical regenerative energy in energy storage devices for later use. The regenerative energy recovery opportunity exists in any condition where the speed of motion is in the opposite direction to the applied force or torque. This is the case when the vehicle is braking or decelerating, or when the motion is gravity- or load-driven. There are three main concepts for regenerative energy storage devices in hybrid vehicles: electric, hydraulic, and flywheel. The real-time control challenge is to balance the system power demand between the engine and the hybrid storage device, without depleting the energy storage device or stalling the engine in any work cycle, while making optimal use of the energy-saving opportunities in a given operational, often repetitive, cycle. In the worst-case scenario, only the engine is used and the hybrid system is completely disabled. A rule-based control is developed and tuned for different work cycles and linked to a gain-scheduling algorithm. The gain-scheduling algorithm identifies the cycle being performed by the machine and its position via GPS, and maps them to the gains.
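
    The identify-then-look-up step can be sketched as a gain table keyed by work cycle and position zone; the table contents, the nearest-zone rule, and the gain pairs are purely illustrative.

      def select_gains(cycle_id, position, gain_table):
          # gain_table maps (cycle, zone) -> (kp, ki); the nearest zone along
          # the route wins. Falls back to default gains for unknown cycles.
          zones = [z for (c, z) in gain_table if c == cycle_id]
          if not zones:
              return gain_table[("default", 0)]
          zone = min(zones, key=lambda z: abs(z - position))
          return gain_table[(cycle_id, zone)]

      gains = {("default", 0): (0.5, 0.05),
               ("loading", 0): (1.2, 0.10),
               ("loading", 100): (0.9, 0.08)}
      # Cycle identified as "loading", GPS-derived route position 72.0.
      print(select_gains("loading", 72.0, gains))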

  10. Designing a data-driven decision support tool for nurse scheduling in the emergency department: a case study of a southern New Jersey emergency department.

    PubMed

    Otegbeye, Mojisola; Scriber, Roslyn; Ducoin, Donna; Glasofer, Amy

    2015-01-01

    A health system serving Burlington and Camden Counties, New Jersey, sought to improve labor productivity for its emergency departments, with emphasis on optimizing nursing staff schedules. Using historical emergency department visit data and operating constraints, a decision support tool was designed to recommend the number of emergency nurses needed in each hour of each day of the week. The pilot emergency department nurse managers used the decision support tool's recommendations to redeploy nurse hours from weekends into a float pool to support periods of demand spikes on weekdays. Productivity improved significantly, with no unfavorable impact on patient throughput or on patient and staff satisfaction. Today's emergency department manager can leverage the increasing ease of access to the emergency department information system's data repository to design a simple but effective tool that aligns the nursing schedule with demand patterns. Copyright © 2015 Emergency Nurses Association. Published by Elsevier Inc. All rights reserved.

  11. Production scheduling with ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu

    2017-10-01

    Finding the optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is a well-known technique for solving global multi-objective optimization problems. In this article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against other production scheduling algorithms is presented. Advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are provided.

  12. The IPHI Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferdinand, Robin; Beauvais, Pierre-Yves

    High Power Proton Accelerators (HPPAs) are studied for several projects based on high-flux neutron sources driven by proton or deuteron beams. Since the front end is considered the most critical part of such accelerators, the two French national research agencies, CEA and CNRS, decided in 1997 to collaborate on studying and building a High-Intensity Proton Injector (IPHI). The main objective of this project is to master the complex technologies used and the concepts of manufacturing and controlling HPPAs. Recently, a collaboration agreement was signed with CERN and led to some changes in the design and in the schedule. The IPHI design current was maintained at 100 mA in continuous-wave mode. This choice should allow the production of a high-reliability beam at reduced intensity (typically 30 mA), tending to fulfill the Accelerator Driven System requirements. The output energy of the Radio Frequency Quadrupole (RFQ) was reduced from 5 to 3 MeV, then allowing the addition and testing, in pulsed operation, of a chopper line developed by CERN for the Superconducting Proton Linac (SPL). In a final step, the IPHI RFQ and the chopper line should become parts of the SPL injector. In this paper, the IPHI project and the recent evolutions are reported, together with the construction and operation schedule.

  13. Environment-Aware Production Scheduling for Paint Shops in Automobile Manufacturing: A Multi-Objective Optimization Approach.

    PubMed

    Zhang, Rui

    2017-12-25

    The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in literature on large instances with up to 200 cars.

  14. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility optimization can consistently produce reliable differential expression rankings for label-free proteomics tools and is straightforward to apply.

  15. Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    DTIC Science & Technology

    2016-09-09

... Collaborative Optimization via Apprenticeship Scheduling (COVAS), which performs machine learning using human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially ...
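    The bounding idea in this snippet can be illustrated with a toy sketch: a stand-in heuristic policy (here, shortest-processing-time ordering instead of a learned apprenticeship policy) supplies an incumbent schedule whose cost lets branch-and-bound prune partial schedules early. The job data and the single-machine total-completion-time objective are illustrative assumptions, not COVAS itself.

```python
# Heuristic incumbent as a pruning bound in branch-and-bound over job sequences.
durations = {"A": 4, "B": 2, "C": 6, "D": 3}

def cost(seq):
    t = total = 0
    for j in seq:
        t += durations[j]
        total += t
    return total

# Step 1: policy stand-in for the learned scheduler -- shortest processing time first.
incumbent = sorted(durations, key=durations.get)
best_cost = cost(incumbent)

# Step 2: branch and bound, pruning any partial schedule whose accumulated
# cost already reaches the policy-derived incumbent bound.
def branch(prefix, t, partial):
    global incumbent, best_cost
    if partial >= best_cost:
        return  # pruned by the incumbent bound
    remaining = [j for j in durations if j not in prefix]
    if not remaining:
        incumbent, best_cost = prefix, partial
        return
    for j in remaining:
        branch(prefix + [j], t + durations[j], partial + t + durations[j])

branch([], 0, 0)
print(incumbent, best_cost)
```

    The tighter the initial bound, the more of the search tree is pruned, which is the sense in which a good learned policy can make exact optimization tractable.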

  16. Design of the Protocol Processor for the ROBUS-2 Communication System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.

    2005-01-01

The ROBUS-2 Protocol Processor (RPP) is a custom-designed hardware component implementing the functionality of the ROBUS-2 fault-tolerant communication system. The Reliable Optical Bus (ROBUS) is the core communication system of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER), a general-purpose fault-tolerant integrated modular architecture currently under development at NASA Langley Research Center. ROBUS is a time-division multiple access (TDMA) broadcast communication system with medium access control by means of a time-indexed communication schedule. ROBUS-2 is a developmental version of the ROBUS providing guaranteed fault-tolerant services to the attached processing elements (PEs) in the presence of a bounded number of faults. These services include message broadcast (Byzantine Agreement), dynamic communication schedule update, time reference (clock synchronization), and distributed diagnosis (group membership). ROBUS also features fault-tolerant startup and restart capabilities. ROBUS-2 tolerates internal as well as PE faults, and incorporates a dynamic self-reconfiguration capability driven by the internal diagnostic system. ROBUS consists of RPPs connected to each other by a lower-level physical communication network. The RPP has a pipelined architecture and the design is parameterized in the behavioral and structural domains. The design of the RPP enables the bus to achieve a PE-message throughput that approaches the available bandwidth at the physical layer.

  17. Customer-Driven Reliability Models for Multistate Coherent Systems

    DTIC Science & Technology

    1992-01-01

[OCR residue from the DTIC report documentation page; the recoverable content identifies a 1992 doctoral dissertation, "Customer-Driven Reliability Models for Multistate Coherent Systems", by Boedigheimer, submitted to the Graduate College of the University of Oklahoma, Norman, Oklahoma.]

  18. How a good understanding of the physical oceanography of your offshore renewables site can drive down project costs.

    NASA Astrophysics Data System (ADS)

    Royle, J.

    2016-02-01

For an offshore renewables plant to be viable it must be safe and cost-effective to build and maintain (i.e. the conditions mustn't be too harsh to excessively impede operations at the site), and it must also have an energetic enough resource to make the project attractive to investors. In order to strike the correct balance between cost and resource, reliable datasets describing the meteorological and oceanographic (metocean) environment need to be collected and analysed, and the findings correctly applied. This presentation will use three real-world examples from Iberdrola's portfolio of offshore wind farms in Europe to demonstrate the economic benefits of good-quality metocean data and robust analysis. The three examples are: 1) Moving from traditional frequency-domain persistence statistics to time-domain installation schedules driven by reliable metocean data reduces uncertainty and allows the developer to have a better handle on weather risk during contract negotiations. 2) By comparing the planned installation schedules from a well-validated metocean dataset with a coarser, low-cost, unvalidated metocean dataset, we can show that each Euro invested in the quality of metocean data can reduce the cost of uncertainty in installation schedules by four Euros. 3) Careful consideration of co-varying wave and tidal parameters can justify lower-cost designs, such as lower platform levels leading to shorter and cheaper offshore wind turbine foundations. By considering the above examples we will make the case for investing in analysis of well-validated metocean models as a basis for sound financial planning of offshore renewables installations.

  19. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H²RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller.
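    For a flavor of what a processor-demand sufficiency test looks like, here is a minimal sketch of the classic EDF demand-bound check. This is a standard test of the same family the abstract names, not necessarily the exact H²RTS condition, and the task set is an illustrative assumption.

```python
# Processor-demand schedulability check for EDF with constrained deadlines.
# Each task is (C, D, T): worst-case execution time, relative deadline, period.
tasks = [(1, 4, 5), (2, 6, 8), (3, 10, 12)]

def demand(t):
    """Maximum execution demand of jobs with both release and deadline in [0, t]."""
    return sum(max(0, (t - D) // T + 1) * C for (C, D, T) in tasks)

def schedulable(horizon=200):
    # the demand criterion h(t) <= t must hold at every absolute deadline up to the horizon
    deadlines = sorted({k * T + D for (_, D, T) in tasks for k in range(horizon // T + 1)})
    return all(demand(t) <= t for t in deadlines if t <= horizon)

print(schedulable())  # True: total demand never exceeds available time
```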

  20. Determination of an Optimal Commercial Data Bus Architecture for a Flight Data System

    NASA Technical Reports Server (NTRS)

    Crawford, Kevin; Johnson, Martin; Humphries, Rick (Technical Monitor)

    2001-01-01

NASA/Marshall Space Flight Center (MSFC) is continually looking for methods to reduce cost and schedule while keeping the quality of work high. MSFC is NASA's lead center for space transportation and microgravity research. When supporting NASA's programs, several decisions concerning the avionics system must be made, and many trade studies are usually conducted to determine the best ways to meet the customer's requirements. When designing the flight data system, one of the first trade studies normally conducted is the determination of the data bus architecture. The schedule, cost, reliability, and environments are some of the factors reviewed in this determination. Based on the studies, the data bus architecture could be a proprietary data bus or a commercial data bus; the cost factor usually removes the proprietary data bus from consideration. The commercial data buses range from Versa Module Eurocard (VME) to Compact PCI to STD 32 to PC 104. If cost, schedule, and size are the prime factors, VME is usually not considered, leaving Compact PCI, STD 32, and PC 104 as the candidate data bus architectures. MSFC's center director has funded a study from his discretionary fund to determine an optimal low-cost commercial data bus architecture. The goal of the study is to functionally and environmentally test the Compact PCI, STD 32, and PC 104 data bus architectures. This paper will summarize the results of the data bus architecture study.

  1. Algorithm comparison for schedule optimization in MR fingerprinting.

    PubMed

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Development of a Pattern Recognition Methodology for Determining Operationally Optimal Heat Balance Instrumentation Calibration Schedules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Beran; John Christenson; Dragos Nica

    2002-12-15

The goal of the project is to enable plant operators to detect, with high sensitivity and reliability, the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.

  3. Using Optimization to Improve Test Planning

    DTIC Science & Technology

    2017-09-01

With modifications to make the input more user-friendly and to display the output differently, the test and evaluation schedule optimization model would be a good tool for test and evaluation schedulers. Subject terms: schedule optimization, test planning.

  4. Design of a universal logic block for fault-tolerant realization of any logic operation in trapped-ion quantum circuits

    NASA Astrophysics Data System (ADS)

    Goudarzi, H.; Dousti, M. J.; Shafaei, A.; Pedram, M.

    2014-05-01

This paper presents a physical mapping tool for quantum circuits, which generates the optimal universal logic block (ULB) that can, on average, perform any logical fault-tolerant (FT) quantum operation with minimum latency. The operation scheduling, placement, and qubit routing problems tackled by the quantum physical mapper are highly dependent on one another. More precisely, the scheduling solution affects the quality of the achievable placement solution due to resource pressures that may be created as a result of operation scheduling, whereas the operation placement and qubit routing solutions influence the scheduling solution due to the resulting distances between predecessor and current operations, which in turn determine routing latencies. The proposed flow for the quantum physical mapper captures these dependencies by applying (1) a loose scheduling step, which transforms an initial quantum data flow graph into one that explicitly captures the no-cloning theorem of quantum computing and then performs instruction scheduling based on a modified force-directed scheduling approach to minimize resource contention and quantum circuit latency, (2) a placement step, which uses timing-driven instruction placement to minimize the approximate routing latencies while making iterative calls to the aforesaid force-directed scheduler to correct the scheduling levels of quantum operations as needed, and (3) a routing step that finds dynamic values of routing latencies for the qubits. In addition to the quantum physical mapper, an approach is presented to determine the single best ULB size for a target quantum circuit by examining the latency of different FT quantum operations mapped onto different ULB sizes and using information about the occurrence frequency of operations on critical paths of the target quantum algorithm to weigh these latencies. Experimental results show an average latency reduction of about 40% compared to previous work.

  5. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
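    For readers unfamiliar with the linear-quadratic model the optimization builds on, the following small worked example compares two schedules delivering the same physical dose via the biologically effective dose, BED = nd(1 + d/(α/β)). The fractionation choices and α/β values are illustrative assumptions, not the paper's data.

```python
# Biologically effective dose under the linear-quadratic (LQ) model.
def bed(n, d, alpha_beta):
    """BED for n fractions of dose d (Gy) and tissue ratio alpha/beta (Gy)."""
    return n * d * (1 + d / alpha_beta)

# Two schedules with the same 60 Gy total physical dose:
for n, d in [(30, 2.0), (10, 6.0)]:
    tumor = bed(n, d, alpha_beta=10.0)  # typical tumor alpha/beta assumption
    oar = bed(n, d, alpha_beta=3.0)     # typical late-responding OAR assumption
    print(f"{n} x {d} Gy: tumor BED = {tumor:.1f} Gy, OAR BED = {oar:.1f} Gy")
```

    The example shows why schedule choice matters: the hypofractionated schedule raises the tumor BED (72 to 96 Gy) but raises the organ-at-risk BED far more (100 to 180 Gy), which is exactly the kind of trade-off that parameter uncertainty can push past a feasibility threshold.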

  6. An Improved Scheduling Algorithm for Data Transmission in Ultrasonic Phased Arrays with Multi-Group Ultrasonic Sensors

    PubMed Central

    Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji

    2017-01-01

High data transmission efficiency is a key requirement for an ultrasonic phased array with multi-group ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors and outputs the data in a bandwidth-sharing way; on this basis, an optimal length ratio of all the FIFOs is derived, allowing read operations to be switched among the FIFOs without waiting for time slots. The algorithm therefore enhances the utilization of the read bandwidth resources and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm were substantiated by implementing it in field programmable gate array (FPGA) technology, enhancing the bandwidth utilization ratio and the real-time performance of the ultrasonic phased array. PMID:29035345
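    A hedged sketch of the length-ratio idea described above: if each sensor group fills its FIFO at a known rate and a shared reader drains them round-robin, sizing the FIFO depths in proportion to the fill rates keeps every FIFO non-empty when its read slot arrives. The rates and the total buffer budget are illustrative assumptions, not the paper's values.

```python
# Size FIFO depths in proportion to producer fill rates under a fixed budget.
fill_rates = [8, 4, 2, 2]   # words per tick produced by each sensor group
TOTAL_DEPTH = 1024          # total buffer words available on the device

total_rate = sum(fill_rates)
depths = [TOTAL_DEPTH * r // total_rate for r in fill_rates]
print(depths)  # [512, 256, 128, 128]: deeper FIFOs for faster producers
```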

  7. Data transmission system and method

    NASA Technical Reports Server (NTRS)

    Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)

    2010-01-01

    A method of transmitting data packets, where randomness is added to the schedule. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.

  8. Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.

    PubMed

    Bühler, Jonas; von Lieres, Eric; Huber, Gregor J

    2018-01-01

Studies of long-distance transport of tracer isotopes in plants offer high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h-1. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h-1. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule, depending on the required statistical reliability of data acquired by future experiments.

  9. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. The nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a measurable, time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  10. A treatment schedule of conventional physical therapy provided to enhance upper limb sensorimotor recovery after stroke: expert criterion validity and intra-rater reliability.

    PubMed

    Donaldson, Catherine; Tallis, Raymond C; Pomeroy, Valerie M

    2009-06-01

    Inadequate description of treatment hampers progress in stroke rehabilitation. To develop a valid, reliable, standardised treatment schedule of conventional physical therapy provided for the paretic upper limb after stroke. Eleven neurophysiotherapists participated in the established methodology: semi-structured interviews, focus groups and piloting a draft treatment schedule in clinical practice. Different physiotherapists (n=13) used the treatment schedule to record treatment given to stroke patients with mild, moderate and severe upper limb paresis. Rating of adequacy of the treatment schedule was made using a visual analogue scale (0 to 100mm). Mean (95% confidence interval) visual analogue scores were calculated (expert criterion validity). For intra-rater reliability, each physiotherapist observed a video tape of their treatment and immediately completed a treatment schedule recording form on two separate occasions, 4 to 6 weeks apart. The Kappa statistic was calculated for intra-rater reliability. The treatment schedule consists of a one-page A4 recording form and a user booklet, detailing 50 treatment activities. Expert criterion validity was 79 (95% confidence interval 74 to 84). Intra-rater Kappa was 0.81 (P<0.001). This treatment schedule can be used to document conventional physical therapy in subsequent clinical trials in the geographical area of its development. Further work is needed to investigate generalisability beyond this geographical area.
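    To make the reported intra-rater statistic concrete, here is a short worked example of Cohen's kappa: observed agreement between two recording occasions corrected for chance agreement. The 2x2 counts are illustrative assumptions, not the study's data, though they happen to yield a kappa close to the reported 0.81.

```python
# Cohen's kappa from a square agreement table (rows: occasion 1, cols: occasion 2).
def cohens_kappa(table):
    total = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / total  # observed agreement
    pe = sum(  # chance agreement from the marginal proportions
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (po - pe) / (1 - pe)

# Hypothetical counts: did the rater record the same activity yes/no twice?
print(round(cohens_kappa([[40, 5], [4, 51]]), 2))  # about 0.82
```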

  11. TFTR diagnostic control and data acquisition system

    NASA Astrophysics Data System (ADS)

    Sauthoff, N. R.; Daniels, R. E.

    1985-05-01

General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  12. TFTR diagnostic control and data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sauthoff, N.R.; Daniels, R.E.; PPL Computer Division

    1985-05-01

General computerized control and data-handling support for TFTR diagnostics is presented within the context of the Central Instrumentation, Control and Data Acquisition (CICADA) System. Procedures, hardware, the interactive man-machine interface, event-driven task scheduling, system-wide arming and data acquisition, and a hierarchical data base of raw data and results are described. Similarities in data structures involved in control, monitoring, and data acquisition afford a simplification of the system functions, based on "groups" of devices. Emphases and optimizations appropriate for fusion diagnostic system designs are provided. An off-line data reduction computer system is under development.

  13. Alternative Outpatient Chemotherapy Scheduling Method to Improve Patient Service Quality and Nurse Satisfaction.

    PubMed

    Huang, Yu-Li; Bryce, Alan H; Culbertson, Tracy; Connor, Sarah L; Looker, Sherry A; Altman, Kristin M; Collins, James G; Stellner, Winston; McWilliams, Robert R; Moreno-Aspitia, Alvaro; Ailawadhi, Sikander; Mesa, Ruben A

    2018-02-01

    Optimal scheduling and calendar management in an outpatient chemotherapy unit is a complex process that is driven by a need to focus on safety while accommodating a high degree of variability. Primary constraints are infusion times, staffing resources, chair availability, and unit hours. We undertook a process to analyze our existing management models across multiple practice settings in our health care system, then developed a model to optimize safety and efficiency. The model was tested in one of the community chemotherapy units. We assessed staffing violations as measured by nurse-to-patient ratios throughout the workday and at key points during treatment. Staffing violations were tracked before and after the implementation of the new model. The new model reduced staffing violations by nearly 50% and required fewer chairs to treat the same number of patients for the selected clinic day. Actual implementation results indicated that the new model leveled the distribution of patients across the workday with an 18% reduction in maximum chair utilization and a 27% reduction in staffing violations. Subsequently, a positive impact on peak pharmacy workload reduced delays by as much as 35 minutes. Nursing staff satisfaction with the new model was positive. We conclude that the proposed optimization approach with regard to nursing resource assignment and workload balance throughout a day effectively improves patient service quality and staff satisfaction.

  14. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H²RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856

  15. Leveraging Hypoxia-Activated Prodrugs to Prevent Drug Resistance in Solid Tumors.

    PubMed

    Lindsay, Danika; Garvey, Colleen M; Mumenthaler, Shannon M; Foo, Jasmine

    2016-08-01

    Experimental studies have shown that one key factor in driving the emergence of drug resistance in solid tumors is tumor hypoxia, which leads to the formation of localized environmental niches where drug-resistant cell populations can evolve and survive. Hypoxia-activated prodrugs (HAPs) are compounds designed to penetrate to hypoxic regions of a tumor and release cytotoxic or cytostatic agents; several of these HAPs are currently in clinical trial. However, preliminary results have not shown a survival benefit in several of these trials. We hypothesize that the efficacy of treatments involving these prodrugs depends heavily on identifying the correct treatment schedule, and that mathematical modeling can be used to help design potential therapeutic strategies combining HAPs with standard therapies to achieve long-term tumor control or eradication. We develop this framework in the specific context of EGFR-driven non-small cell lung cancer, which is commonly treated with the tyrosine kinase inhibitor erlotinib. We develop a stochastic mathematical model, parametrized using clinical and experimental data, to explore a spectrum of treatment regimens combining a HAP, evofosfamide, with erlotinib. We design combination toxicity constraint models and optimize treatment strategies over the space of tolerated schedules to identify specific combination schedules that lead to optimal tumor control. We find that (i) combining these therapies delays resistance longer than any monotherapy schedule with either evofosfamide or erlotinib alone, (ii) sequentially alternating single doses of each drug leads to minimal tumor burden and maximal reduction in probability of developing resistance, and (iii) strategies minimizing the length of time after an evofosfamide dose and before erlotinib confer further benefits in reduction of tumor burden. These results provide insights into how hypoxia-activated prodrugs may be used to enhance therapeutic effectiveness in the clinic.

  16. Successful Completion of the JWST OGSE2 Cryogenic Test at JSC Chamber-A While Managing Numerous Challenges

    NASA Technical Reports Server (NTRS)

    Park, Sang C.; Brinckerhoff, Pamela; Franck, Randy; Schweickart, Rusty; Thomson, Shaun; Burt, Bill; Ousley, Wes

    2016-01-01

The James Webb Space Telescope (JWST) Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the primary mirrors, secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and to operate near 45 Kelvin. Due to the size of its large sunshield in relation to existing test facilities, JWST cannot be optically or thermally tested as a complete observatory-level system at flight temperatures. As a result, the telescope portion along with its instrument complement will be tested as a single unit very late in the program, on the program schedule's critical path. To mitigate schedule risks, a set of 'pathfinder' cryogenic tests will be performed to reduce program risks by demonstrating the optical testing capabilities of the facility, characterizing telescope thermal performance, and allowing project personnel to learn valuable testing lessons off-line. This paper describes the 'pathfinder' cryogenic test program, focusing on the recently completed second test in the series, the Optical Ground Support Equipment 2 (OGSE2) test. The JWST OGSE2 test was successfully completed within the allocated project schedule while facing numerous conflicting thermal requirements during cool-down to the final cryogenic operational temperatures and during warm-up after the cryo-stable optical tests. The challenges included developing pre-test cool-down and warm-up profiles without a reliable method to predict the thermal behavior in a rarified helium environment, and managing test article hardware safety driven by the project Limits and Constraints (L&Cs). Furthermore, the OGSE2 test included the time-critical Aft Optics Subsystem (AOS), a part of the flight Optical Telescope Element that would need to be placed back into the overall telescope assembly integration flow. The OGSE2 test requirements included strict adherence to the project contamination controls due to the presence of the contamination-sensitive flight optical elements. The test operations required close coordination of numerous personnel while they were being exposed to and trained for the 'final' combined OTE and instrument cryo-test in 2017. This paper will also cover the OGSE2 thermal data look-back review.

  17. A FairShare Scheduling Service for OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Vallero, S.; Zaccolo, V.

    2017-10-01

In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a Cloud driven only by their functional requirements. While a large Public Cloud may be a reasonable approximation of this condition, small scientific computing centres usually work in a saturated regime. In this case, an advanced resource allocation policy is needed in order to optimize the use of the data centre. The general topic of advanced resource scheduling is addressed by several components of the EU-funded INDIGO-DataCloud project. In this contribution, we describe the FairShare Scheduler Service (FaSS) for OpenNebula (ONE). The service satisfies resource requests according to an algorithm which prioritizes tasks according to an initial weight and to the historical resource usage of the project. The software was designed to be as unintrusive as possible in the ONE code: we keep the original ONE scheduler implementation to match requests to available resources, but the queue of pending jobs to be processed is ordered according to the priorities delivered by FaSS. The FaSS implementation is still being finalized, and we describe here the functional and design requirements the module should satisfy, as well as its high-level architecture.
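    A minimal sketch of a fair-share priority rule of the kind described above: each project's priority starts from an initial weight and is discounted by its historical resource usage, so heavy recent consumers sink in the pending queue. The formula, the decay constant, and the numbers are illustrative assumptions, not the INDIGO/FaSS implementation.

```python
# Fair-share ordering of pending requests by weight discounted by past usage.
projects = {
    # name: (initial weight, historical usage in CPU-hours)
    "theory": (2.0, 1500.0),
    "lattice": (1.0, 200.0),
    "outreach": (0.5, 10.0),
}

def priority(weight, usage, scale=1000.0):
    """Higher weight raises priority; accumulated usage lowers it."""
    return weight / (1.0 + usage / scale)

queue = sorted(projects, key=lambda p: priority(*projects[p]), reverse=True)
print(queue)  # pending requests would be served in this order
```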

  18. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James L.

    2010-01-01

    NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
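    As a toy illustration of the evolutionary-search strategy named above, the sketch below evolves slot assignments for communication events under pairwise interference (RFI-style) constraints. The event set, conflict pairs, and algorithm parameters are illustrative assumptions, not the disclosed NASA algorithm.

```python
# Mutation-plus-selection evolutionary search for a conflict-free slot assignment.
import random

N_EVENTS, N_SLOTS = 12, 8
# pairs of events that would interfere if assigned the same time slot
conflicts = [(0, 1), (2, 3), (4, 5), (1, 2), (6, 7), (8, 9), (10, 11)]

def fitness(sched):
    """Negative count of violated interference constraints (0 is best)."""
    return -sum(1 for a, b in conflicts if sched[a] == sched[b])

def mutate(sched):
    child = list(sched)
    child[random.randrange(N_EVENTS)] = random.randrange(N_SLOTS)
    return child

pop = [[random.randrange(N_SLOTS) for _ in range(N_EVENTS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)      # keep the fittest schedules
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best))  # fitness 0 means an interference-free schedule
```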

  19. Reliability Generalization: An Examination of the Positive Affect and Negative Affect Schedule

    ERIC Educational Resources Information Center

    Leue, Anja; Lange, Sebastian

    2011-01-01

    The assessment of positive affect (PA) and negative affect (NA) by means of the Positive Affect and Negative Affect Schedule has received a remarkable popularity in the social sciences. Using a meta-analytic tool--namely, reliability generalization (RG)--population reliability scores of both scales have been investigated on the basis of a random…

  20. The TJO-OAdM Robotic Observatory: the scheduler

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Casteels, Kevin; Ribas, Ignasi; Francisco, Xavier

    2010-07-01

The Joan Oró Telescope at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory working under completely unattended control, due to the isolation of the site. Robotic operation is mandatory for its routine use. The level of robotization of an observatory is given by its reliability in responding to environment changes and by the human interaction required by possible alarms; these two points establish the level of human attendance needed to ensure low risk at any time. But there is another key point in deciding how the system performs as a robot: the capability to adapt the scheduled observation to actual conditions. The scheduler represents a fundamental element in fully achieving an intelligent response at any time. Its main task is mid- and short-term time optimization, and it has a direct effect on the scientific return achieved by the observatory. We present a description of the scheduler developed for the TJO - OAdM, which is separated into two parts. The first is a pre-scheduler that makes a preliminary selection of objects from the available projects according to their possibility of observation; this process is carried out before the beginning of the night, following different selection criteria. The second is a dynamic scheduler that is executed any time a target observation is complete and a new one must be scheduled. The latter enables the selection of the best target in real time according to actual environment conditions and the set of priorities.

  1. 'It is Time to Prepare the Next patient' Real-Time Prediction of Procedure Duration in Laparoscopic Cholecystectomies.

    PubMed

    Guédon, Annetje C P; Paalvast, M; Meeuwsen, F C; Tax, D M J; van Dijke, A P; Wauben, L S G L; van der Elst, M; Dankelman, J; van den Dobbelsteen, J J

    2016-12-01

Operating Room (OR) scheduling is crucial to allow efficient use of ORs. Currently, the predicted durations of surgical procedures are unreliable and OR schedulers have to follow the progress of the procedures in order to update the daily planning accordingly. The OR schedulers often acquire the needed information through verbal communication with the OR staff, which causes undesired interruptions of the surgical process. The aim of this study was to develop a system that predicts the remaining procedure duration in real time and to test this prediction system for reliability and usability in an OR. The prediction system was based on the activation pattern of a single piece of equipment, the electrosurgical device. The system was tested during 21 laparoscopic cholecystectomies, in which the activation of the electrosurgical device was recorded and processed in real time using pattern recognition methods. The remaining surgical procedure duration was estimated and the optimal timing to prepare the next patient for surgery was communicated to the OR staff. The mean absolute error was smaller for the prediction system (14 min) than for the OR staff (19 min). The OR staff doubted whether the prediction system could take all relevant factors into account but were positive about its potential to shorten waiting times for patients. The prediction system is a promising tool to automatically and objectively predict the remaining procedure duration, and thereby achieve optimal OR scheduling and streamline the patient flow from the nursing department to the OR.

  2. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly in the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
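    A hedged sketch of the PSO idea for this setting: each particle is a real-valued vector with one entry per job, decoded to a job-to-machine assignment (a random-key encoding, which is an assumption here; the paper's encoding may differ), and fitness combines makespan and flowtime. All data and coefficients are illustrative.

```python
# PSO with random-key decoding for a toy job-to-machine scheduling problem.
import random

durations = [4, 7, 3, 5, 8, 2, 6, 4]   # job run times
N_MACHINES = 3

def evaluate(position):
    machines = [0.0] * N_MACHINES
    flowtime = 0.0
    for job, key in enumerate(position):
        m = int(abs(key)) % N_MACHINES   # decode the real key into a machine index
        machines[m] += durations[job]
        flowtime += machines[m]          # completion time of this job
    return max(machines) + 0.1 * flowtime  # weighted makespan + flowtime

dim, n_particles = len(durations), 20
pos = [[random.uniform(0, N_MACHINES) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=evaluate)[:]

for _ in range(300):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]                       # inertia
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
            pos[i][d] += vel[i][d]
        if evaluate(pos[i]) < evaluate(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=evaluate)[:]

print(gbest, evaluate(gbest))
```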

  3. The comparison of predictive scheduling algorithms for different sizes of job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.

    2016-08-01

In this paper, a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time To Failure and Mean Time To Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules when a bottleneck failure occurs before, at the beginning of, or after planned maintenance actions? The efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time, and idle time. The efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper continues the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.

  4. An Optimizing Space Data-Communications Scheduling Method and Algorithm with Interference Mitigation, Generalized for a Broad Class of Optimization Problems

    NASA Technical Reports Server (NTRS)

    Rash, James

    2014-01-01

NASA's space data-communications infrastructure, the Space Network and the Ground Network, provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure (the relay satellites and the ground stations) can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.

  5. Conceptual achievement of 1GBq activity in a Plasma Focus driven system.

    PubMed

    Tabbakh, Farshid; Sadat Kiai, Seyed Mahmood; Pashaei, Mohammad

    2017-11-01

This is an approach to evaluating radioisotope production by means of typical dense plasma focus (DPF) devices. The production rates of the appropriate positron emitters F-18, N-13 and O-15 have been studied. The beam-target mechanism was simulated with the GEANT4 Monte Carlo tool using the QGSP_BIC and QGSP_INCLXX physics models for comparison. The results for the positron emitters were evaluated against reported experimental data, and the agreement found between simulations and experiments supports using this code as a reliable tool for optimizing DPF-driven systems to achieve 1 GBq of produced radioisotope activity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution

    PubMed Central

    Han, Xue; Boyden, Edward S.

    2007-01-01

The quest to determine how precise neural activity patterns mediate computation, behavior, and pathology would be greatly aided by a set of tools for reliably activating and inactivating genetically targeted neurons, in a temporally precise and rapidly reversible fashion. Having earlier adapted a light-activated cation channel, channelrhodopsin-2 (ChR2), for allowing neurons to be stimulated by blue light, we searched for a complementary tool that would enable optical neuronal inhibition, driven by light of a second color. Here we report that targeting the codon-optimized form of the light-driven chloride pump halorhodopsin from the archaebacterium Natronomonas pharaonis (hereafter abbreviated Halo) to genetically specified neurons enables them to be silenced reliably, and reversibly, by millisecond-timescale pulses of yellow light. We show that trains of yellow and blue light pulses can drive high-fidelity sequences of hyperpolarizations and depolarizations in neurons simultaneously expressing yellow light-driven Halo and blue light-driven ChR2, allowing for the first time manipulations of neural synchrony without perturbation of other parameters such as spiking rates. The Halo/ChR2 system thus constitutes a powerful toolbox for multichannel photoinhibition and photostimulation of virally or transgenically targeted neural circuits without need for exogenous chemicals, enabling systematic analysis and engineering of the brain, and quantitative bioengineering of excitable cells. PMID:17375185

  7. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System.

    PubMed

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-11-16

In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range.

  8. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    PubMed Central

    Duan, Litian; Wang, Zizhong John; Duan, Fu

    2016-01-01

In the multiple-reader environment (MRE) of a radio frequency identification (RFID) system, multiple readers are often scheduled to interrogate the randomized tags by operating at different time slots or frequency channels to decrease signal interference. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS) is proposed to fairly and optimally schedule the readers from the viewpoint of resource allocation. GD-MRSOA-AIS is composed of two parts: a geometric distribution function combined with a fairness consideration is first introduced to generate feasible scheduling schemes for reader operation; after that, an artificial immune system (including immune clone, immune mutation and immune suppression) quickly refines these feasible schemes into the optimal scheduling scheme, ensuring that readers operate fairly with a larger effective interrogation range and lower interference. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS can efficiently schedule the multiple readers with a fairer resource allocation scheme, achieving a larger effective interrogation range. PMID:27854342

  9. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems.

    PubMed

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

Cloud workflow systems are a kind of platform service based on cloud computing, facilitating the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by this scheduling is always lower than that of the two representative counterparts.
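    A small sketch of the two ingredients named above, under illustrative parameter assumptions: a logistic-map chaotic sequence (highly random yet covering (0, 1) with a known regularity) and an inertia weight schedule derived from it, of the kind used to keep a PSO from converging prematurely.

```python
# Chaotic inertia weights from a logistic map, one per PSO iteration.
def logistic_map(x0=0.37, n=10, r=4.0):
    """Logistic map x <- r*x*(1-x); chaotic on (0, 1) for r = 4."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

w_max, w_min = 0.9, 0.4  # conventional PSO inertia bounds (assumed values)
weights = [w_min + (w_max - w_min) * z for z in logistic_map()]
print(weights)
```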

  10. A Chaotic Particle Swarm Optimization-Based Heuristic for Market-Oriented Task-Level Scheduling in Cloud Workflow Systems

    PubMed Central

    Li, Xuejun; Xu, Jia; Yang, Yun

    2015-01-01

Cloud workflow systems are a kind of platform service based on cloud computing, facilitating the automation of workflow applications. Among the factors distinguishing cloud workflow systems from their counterparts, the market-oriented business model is one of the most prominent. The optimization of task-level scheduling in cloud workflow systems is a hot topic. As the scheduling problem is NP-hard, Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO) have been proposed to optimize the cost. However, they suffer from premature convergence in the optimization process and therefore cannot effectively reduce the cost. To address these problems, a Chaotic Particle Swarm Optimization (CPSO) algorithm with a chaotic sequence and an adaptive inertia weight factor is applied to task-level scheduling. The chaotic sequence, with its high randomness, improves the diversity of solutions, and its regularity assures good global convergence. The adaptive inertia weight factor depends on the estimated value of the cost; it lets the scheduling avoid premature convergence by properly balancing global and local exploration. Experimental simulation shows that the cost obtained by this scheduling is always lower than that of the two representative counterparts. PMID:26357510

  11. Risk assessment in man and mouse.

    PubMed

    Balci, Fuat; Freestone, David; Gallistel, Charles R

    2009-02-17

    Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment.
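    The optimal-switch computation the abstract describes can be sketched numerically: wait at the short-latency location and switch at target time tau, where scalar timing noise makes the realized switch time roughly Gaussian with standard deviation proportional to tau. The latencies, trial mix, and Weber fraction below are illustrative assumptions, not the study's values.

```python
# Numerically locate the switch latency that maximizes the probability of reward.
import numpy as np
from scipy.stats import norm

t_short, t_long = 2.0, 6.0   # reward latencies (s) at the two locations
p_short = 0.5                # probability of a short-latency trial
weber = 0.2                  # scalar variability: SD of realized tau = weber * tau

def p_correct(tau):
    sd = weber * tau
    stay_long_enough = 1 - norm.cdf(t_short, loc=tau, scale=sd)  # realized tau after t_short
    switch_in_time = norm.cdf(t_long, loc=tau, scale=sd)         # realized tau before t_long
    return p_short * stay_long_enough + (1 - p_short) * switch_in_time

taus = np.linspace(2.1, 5.9, 400)
best = taus[np.argmax([p_correct(t) for t in taus])]
print(f"optimal target switch latency: {best:.2f} s")
```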

  12. Risk assessment in man and mouse

    PubMed Central

    Balci, Fuat; Freestone, David; Gallistel, Charles R.

    2009-01-01

    Human and mouse subjects tried to anticipate at which of 2 locations a reward would appear. On a randomly scheduled fraction of the trials, it appeared with a short latency at one location; on the complementary fraction, it appeared after a longer latency at the other location. Subjects of both species accurately assessed the exogenous uncertainty (the probability of a short versus a long trial) and the endogenous uncertainty (from the scalar variability in their estimates of an elapsed duration) to compute the optimal target latency for a switch from the short- to the long-latency location. The optimal latency was arrived at so rapidly that there was no reliably discernible improvement over trials. Under these nonverbal conditions, humans and mice accurately assess risks and behave nearly optimally. That this capacity is well-developed in the mouse opens up the possibility of a genetic approach to the neurobiological mechanisms underlying risk assessment. PMID:19188592

  13. Development of large-aperture electro-optical switch for high power laser at CAEP

    NASA Astrophysics Data System (ADS)

    Zhang, Xiongjun; Wu, Dengsheng; Zhang, Jun; Lin, Donghui; Zheng, Jiangang; Zheng, Kuixing

    2015-02-01

Large-aperture electro-optical switches based on plasma Pockels cells (PPCs) are important components for inertial confinement fusion (ICF) laser facilities. We have demonstrated a single-pulse-driven 4×1 PPC with a 400 mm × 400 mm aperture for the SGIII laser facility, and four 2×1 PPC modules with 350 mm × 350 mm apertures have been operated in the SGII update laser facility. This design differs from the PPCs of NIF and LMJ in its simpler way of performing the Pockels effect. With optimized operation parameters, the PPCs meet the SGII-U laser requirement of four-pass amplification control. Driven by only one high-voltage pulser, the simplified PPC system requires less associated diagnostics and offers higher reliability. To further reduce the insertion loss of the PPC, research on a large-aperture PPC based on a DKDP crystal driven by one pulse is under way, and several single-pulse-driven PPCs with 80 mm × 80 mm DKDP crystals have been manufactured and operated in laser facilities.

  14. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumptions on the airport surface. Currently NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers to make gate pushback decisions and improve the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.
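    A hedged sketch of the data-driven side described above: fit a regression model mapping surface-traffic features to taxi time and score it with mean absolute error. Everything here is an assumption for illustration; the feature names are hypothetical, the data is synthetic, and the actual study used recorded HITL simulation data for CLT rather than this stand-in model.

```python
# Gradient-boosted regression stand-in for data-driven taxi time prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# hypothetical features: taxi distance (m), aircraft count on the surface,
# number of runway crossings
X = np.column_stack([
    rng.uniform(1000, 6000, n),
    rng.integers(1, 30, n),
    rng.integers(0, 4, n),
])
# synthetic ground truth in seconds: distance term + congestion terms + noise
y = 0.08 * X[:, 0] + 12.0 * X[:, 1] + 45.0 * X[:, 2] + rng.normal(0, 30, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.1f} s")
```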

  15. Constrained non-linear multi-objective optimisation of preventive maintenance scheduling for offshore wind farms

    NASA Astrophysics Data System (ADS)

    Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian

    2018-05-01

Offshore wind farms are an emerging source of renewable energy that has been shown to have tremendous potential in recent years. In this booming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed for multi-objective optimisation, is utilised, and Pareto-optimal schedules can be obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm, and to explore the relationship between the two targets with the help of a contrast model.

  16. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

    Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.

  17. An Optimal Static Scheduling Algorithm for Hard Real-Time Systems Specified in a Prototyping Language

    DTIC Science & Technology

    1989-12-01

    to construct because the mechanism is a dispatching procedure. Since all nonpreemptive schedules are contained in the set of all preemptive schedules...the optimal value of T_max in the preemptive case is at least a lower bound on the optimal T_max for the nonpreemptive schedules. This principle is the...adapt to changes in the environment. In hard real-time systems, tasks are also distinguished as preemptable and nonpreemptable. A task is preemptable

  18. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, surveillance tasks for the existing facilities should be scheduled optimally, allocating resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and the simulation results demonstrate its effectiveness.
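
    To make the particle swarm idea concrete, here is a minimal sketch, assuming a random-key encoding in which each surveillance target gets a continuous priority and targets are observed in decreasing priority. The deadline-based fitness function is a stand-in for the paper's actual scheduling criteria, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical targets: each has an observation deadline (hours from now).
deadlines = rng.uniform(2.0, 12.0, size=12)
OBS_TIME = 1.0                       # hours spent per observation (toy value)

def fitness(priorities):
    # Decode: observe targets in decreasing priority; penalize lateness
    # past each target's deadline (a stand-in for the paper's criteria).
    order = np.argsort(-priorities)
    finish = np.arange(1, len(order) + 1) * OBS_TIME
    return np.maximum(0.0, finish - deadlines[order]).sum()

n_particles, n_targets, iters = 30, len(deadlines), 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

x = rng.random((n_particles, n_targets))       # positions = priority vectors
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_targets))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best observation order:", np.argsort(-gbest),
      "total lateness:", pbest_f.min())
```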

  19. Stochastic Optimization for Unit Commitment-A Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Qipeng P.; Wang, Jianhui; Liu, Andrew L.

    2015-07-01

    Optimization models have been widely used in the power industry to aid the decision-making process of scheduling and dispatching electric power generation resources, a process known as unit commitment (UC). Since UC's birth, there have been two major waves of revolution in UC research and real-life practice. The first wave made mixed integer programming stand out from the early solution and modeling approaches for deterministic UC, such as priority lists, dynamic programming, and Lagrangian relaxation. With the high penetration of renewable energy, increasing deregulation of the electricity industry, and growing demands on system reliability, the next wave is focused on transitioning from traditional deterministic approaches to stochastic optimization for unit commitment. Since the literature has grown rapidly in the past several years, this paper reviews the works that have contributed to the modeling and computational aspects of stochastic optimization (SO) based UC. Relevant lines of future research are also discussed to help transform research advances into real-world applications.

  20. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring resulting from a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted to the classical flowshop scheduling problem and to the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  1. An Optimization Model for Scheduling Problems with Two-Dimensional Spatial Resource Constraint

    NASA Technical Reports Server (NTRS)

    Garcia, Christopher; Rabadi, Ghaith

    2010-01-01

    Traditional scheduling problems involve determining temporal assignments for a set of jobs in order to optimize some objective. Some scheduling problems also require the use of limited resources, which adds another dimension of complexity. In this paper we introduce a spatial resource-constrained scheduling problem that can arise in assembly, warehousing, cross-docking, inventory management, and other areas of logistics and supply chain management. This scheduling problem involves a two-dimensional rectangular area as a limited resource. Each job, in addition to having temporal requirements, has a width and a height and utilizes a certain amount of space inside the area. We propose an optimization model for scheduling the jobs while respecting all temporal and spatial constraints.

  2. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job Shop Scheduling is a state-space search problem belonging to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.

  3. A thermally driven differential mutation approach for the structural optimization of large atomic systems

    NASA Astrophysics Data System (ADS)

    Biswas, Katja

    2017-09-01

    A computational method is presented which is capable of obtaining low-lying energy structures of topological amorphous systems. The method merges a differential mutation genetic algorithm with simulated annealing. This is done by incorporating a thermal selection criterion, which makes it possible to reliably obtain low-lying minima with just a small population size and is suitable for multimodal structural optimization. The method is tested on the structural optimization of amorphous graphene from unbiased atomic starting configurations. With a population size of just six systems, energetically very low structures are obtained. While each of the structures represents a distinctly different arrangement of the atoms, their properties, such as energy, distribution of rings, radial distribution function, coordination number, and distribution of bond angles, are very similar.
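
    A hedged sketch of the central idea, merging differential mutation with a simulated-annealing-style acceptance rule: an offspring replaces its parent if it is lower in energy, or otherwise with a Boltzmann probability at the current temperature. The energy function and the parameters below are illustrative placeholders, not the paper's amorphous-graphene potential.

```python
import math
import random

random.seed(42)

def energy(x):
    # Placeholder multimodal energy landscape (the paper instead uses an
    # interatomic potential evaluated on atomic configurations).
    return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

dim, pop_size, F, T0, steps = 5, 6, 0.7, 5.0, 3000
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

for step in range(steps):
    T = T0 * (1 - step / steps) + 1e-3          # linear cooling schedule
    for i in range(pop_size):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [a[d] + F * (b[d] - c[d]) for d in range(dim)]  # differential mutation
        dE = energy(trial) - energy(pop[i])
        # Thermal selection: always accept downhill moves, accept uphill
        # moves with Boltzmann probability exp(-dE/T).
        if dE < 0 or random.random() < math.exp(-dE / T):
            pop[i] = trial

best = min(pop, key=energy)
print("lowest energy found:", round(energy(best), 4))
```

    Note the population size of six, matching the abstract's claim that a very small population suffices once the thermal criterion is in place.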

  4. Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron.

    DTIC Science & Technology

    1987-06-01

    Security Classification) Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron 12. PERSONAL AUTHOR(S) Thomas J. Kopf...Because of the great number of possible scheduling alternatives, it is difficult to find an optimal solution to the scheduling problem. Additionally...changes to the original schedule make it even more difficult to find an optimal solution. The emergence of capable microcomputers, decision support

  5. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

    NASA Astrophysics Data System (ADS)

    Siswanto, A.; Kurniati, N.

    2018-04-01

    An oil and gas company has 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage, and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the Christmas tree, which consists of four valves, i.e., the Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice for the WBE Preventive Maintenance (PM) program is to follow the schedule suggested in the manual; Corrective Maintenance (CM) is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper analyzes the failure data and component reliability based on historical data. The optimal PM interval is determined so as to minimize the total maintenance cost per unit time. The optimal PM interval is 730 days for the SCSSV, 985 days for the LMV, 910 days for the UMV, 900 days for the SV, and 780 days for the WV. Averaged over all components, implementing the suggested intervals reduces cost by 52%, improves reliability by 4%, and increases availability by 5%.
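
    The paper does not publish its exact cost model, but a standard way to compute such an interval is the age-replacement formulation sketched below: the expected cost per unit time under a Weibull failure law is minimized over the PM interval T. The Weibull parameters and cost ratio here are invented for illustration, not fitted to the paper's valve data.

```python
import numpy as np

# Assumed Weibull failure law R(t) = exp(-(t/eta)^beta); parameters and
# costs are illustrative only.
beta, eta = 2.5, 1500.0          # Weibull shape, scale (days)
C_pm, C_cm = 1.0, 12.0           # preventive vs corrective cost (relative units)

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Expected cost per day under age replacement at interval T:
    (C_pm * R(T) + C_cm * (1 - R(T))) / expected cycle length."""
    t = np.linspace(0.0, T, n)
    R = reliability(t)
    expected_cycle = np.sum((R[1:] + R[:-1]) * np.diff(t)) / 2.0  # trapezoid rule
    return (C_pm * R[-1] + C_cm * (1.0 - R[-1])) / expected_cycle

candidates = np.arange(100.0, 3000.0, 10.0)
rates = np.array([cost_rate(T) for T in candidates])
print(f"optimal PM interval ~ {candidates[rates.argmin()]:.0f} days, "
      f"minimum cost rate {rates.min():.5f} per day")
```

    With a corrective failure much costlier than a planned replacement (here 12:1), the minimum lands well before the characteristic life eta, which is qualitatively consistent with the intervals the paper reports.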

  6. Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response

    NASA Astrophysics Data System (ADS)

    Niu, X. N.; Tang, H.; Wu, L. X.

    2018-04-01

    Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor a disaster area. In this paper, to generate imaging plans dynamically according to disaster relief needs, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are maximized simultaneously. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a hybrid optimization algorithm named HA_NSGA-II to allocate the decomposition results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to compare the performance of the proposed method with state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.

  7. Sensibility study in a flexible job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Curralo, Ana; Pereira, Ana I.; Barbosa, José; Leitão, Paulo

    2013-10-01

    This paper assesses the impact of job order on the optimal operation time in a Flexible Job Shop Scheduling Problem. A real assembly cell was studied: the AIP-PRIMECA cell at the Université de Valenciennes et du Hainaut-Cambrésis in France, which is modelled as a Flexible Job Shop problem. The problem consists in finding a schedule of machine operations, taking into account the precedence constraints. The main objective is to minimize the batch makespan, i.e., the finish time of the last operation completed in the schedule. In short, the present study evaluates whether the job order affects the optimal time of the operation schedule. A genetic algorithm was used to solve the optimization problem. As a conclusion, the job order is found to influence the optimal time.

  8. Design and Scheduling of Microgrids using Benders Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Ayyanar, Raja

    2016-11-21

    The laterals in a distribution feeder with relatively high PV generation compared to the load can be operated as microgrids to achieve reliability, power quality, and economic benefits. However, renewable resources are intermittent and stochastic in nature. A novel approach for sizing and scheduling an energy storage system and microturbine for reliable operation of microgrids is proposed. The size and schedule of the energy storage system and microturbine are determined using Benders' decomposition, considering PV generation as a stochastic resource.

  9. HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.

    2012-01-01

    HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.
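
    The abstract describes a forward state-space graph search in which successor states are generated from pre- and post-conditions. The sketch below is a minimal, hypothetical version of that idea in Python (the language the abstract says HURON adopted): tasks with dependencies are assigned to agents, and a best-first search minimizes makespan. The task set and agents are invented, and per-agent execution speeds are ignored for brevity.

```python
import heapq
import itertools

# Hypothetical activities: name -> (duration, set of prerequisite activities).
tasks = {"survey": (3, set()), "drill": (5, {"survey"}),
         "sample": (2, {"survey"}), "analyze": (4, {"drill", "sample"})}
agents = ["human", "rover"]

def successors(state):
    done, busy_until = state
    # Pre-condition: all prerequisites finished. Post-condition: task done,
    # chosen agent busy until its new finish time.
    for task, (dur, prereqs) in tasks.items():
        if task not in done and prereqs <= done:
            for i in range(len(agents)):
                new_busy = list(busy_until)
                new_busy[i] = busy_until[i] + dur
                yield (done | {task}, tuple(new_busy))

counter = itertools.count()                  # heap tie-breaker
start = (frozenset(), (0, 0))
heap = [(0, next(counter), start)]
seen = set()
while heap:
    cost, _, state = heapq.heappop(heap)     # cost = current makespan
    done, busy_until = state
    if done == set(tasks):
        print("optimal makespan:", max(busy_until))
        break
    if state in seen:
        continue
    seen.add(state)
    for nxt in successors(state):
        heapq.heappush(heap, (max(nxt[1]), next(counter), nxt))
```

    Because the makespan never decreases along a path, the first complete state popped from the heap is optimal for this toy model, which is the usual argument for uniform-cost forward search.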

  10. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  11. Research on crude oil storage and transportation based on optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yuan, Xuhua

    2018-04-01

    At present, optimization theory and methods are widely used in the scheduling and optimal operation of complex production systems. In this work, the theoretical results are implemented in software on the C++Builder 6 development platform. A simulation and intelligent decision system for crude oil storage and transportation inventory scheduling is designed. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes, and can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating-roof oil storage tank is also developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling cost of the storage and transportation system can be reduced by 9.1%. This method can therefore support safe and stable operation of crude oil storage and transportation systems.

  12. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  13. Car painting process scheduling with harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Maiyasya, A.; Purnamawati, S.; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

    In automotive painting, robots paint the car body, improving the efficiency of the production system. The production system becomes more efficient still if the scheduling of car orders takes the body shape of each car into account. Flow shop scheduling is a scheduling model in which all jobs to be processed flow in the same product direction/path. Scheduling problems arise when there are n jobs to be processed on machines and it must be determined which job is done first and how jobs are allocated to machines to obtain a feasible production schedule. The Harmony Search Algorithm is a music-inspired metaheuristic optimization algorithm, motivated by the observation that musicians search for perfect harmony; this musical harmony is analogous to finding the optimum in an optimization process. Based on the tests performed, the optimal car sequence with the minimum makespan value was obtained.
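
    A hedged miniature of harmony search applied to a permutation flow shop, assuming a random-key encoding (each job gets a continuous key and jobs are sorted by key). The processing-time matrix and the harmony-memory parameters are invented, not taken from the paper.

```python
import random

random.seed(7)

# Hypothetical processing times: proc[j][m] = time of job j on machine m.
proc = [[4, 3, 2], [2, 5, 1], [3, 2, 4], [5, 1, 3], [1, 4, 2]]
n_jobs, n_machines = len(proc), len(proc[0])

def makespan(perm):
    # Standard permutation flow shop completion-time recursion.
    c = [[0] * n_machines for _ in range(len(perm))]
    for i, j in enumerate(perm):
        for m in range(n_machines):
            prev_job = c[i - 1][m] if i > 0 else 0
            prev_mach = c[i][m - 1] if m > 0 else 0
            c[i][m] = max(prev_job, prev_mach) + proc[j][m]
    return c[-1][-1]

def decode(keys):
    # Random-key decoding: sequence jobs by ascending key.
    return sorted(range(n_jobs), key=lambda j: keys[j])

HMS, HMCR, PAR, iters = 8, 0.9, 0.3, 2000
memory = [[random.random() for _ in range(n_jobs)] for _ in range(HMS)]

for _ in range(iters):
    new = []
    for j in range(n_jobs):
        if random.random() < HMCR:                 # harmony memory consideration
            val = random.choice(memory)[j]
            if random.random() < PAR:              # pitch adjustment
                val += random.uniform(-0.1, 0.1)
        else:                                      # random improvisation
            val = random.random()
        new.append(val)
    worst = max(range(HMS), key=lambda i: makespan(decode(memory[i])))
    if makespan(decode(new)) < makespan(decode(memory[worst])):
        memory[worst] = new                        # replace worst harmony

best = min(memory, key=lambda h: makespan(decode(h)))
print("best sequence:", decode(best), "makespan:", makespan(decode(best)))
```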

  14. Pitch Guidance Optimization for the Orion Abort Flight Tests

    NASA Technical Reports Server (NTRS)

    Stillwater, Ryan Allanque

    2010-01-01

    The National Aeronautics and Space Administration created the Constellation program to develop the next generation of manned space vehicles and launch vehicles. The Orion abort system is initiated in the event of an unsafe condition during launch. The system has a controller gains schedule that can be tuned to reduce the attitude errors between the simulated Orion abort trajectories and the guidance trajectory. A program was created that uses the method of steepest descent to tune the pitch gains schedule by an automated procedure. The gains schedule optimization was applied to three potential abort scenarios; each scenario tested using the optimized gains schedule resulted in reduced attitude errors when compared to the Orion production gains schedule.
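
    As a toy illustration of the steepest-descent tuning loop described above (not the Orion production setup), the sketch below adjusts the gains of a proportional-derivative pitch controller along a finite-difference gradient of the integrated attitude error. The double-integrator plant, reference, and step sizes are all invented.

```python
import numpy as np

def tracking_cost(gains, dt=0.02, t_end=5.0):
    """Integrated squared pitch error for a toy double-integrator plant
    under PD control tracking a constant 10-degree reference."""
    kp, kd = gains
    theta, omega, ref, cost = 0.0, 0.0, 10.0, 0.0
    for _ in range(int(t_end / dt)):
        err = ref - theta
        torque = kp * err - kd * omega      # PD control law
        omega += torque * dt                # toy dynamics: theta'' = torque
        theta += omega * dt
        cost += err * err * dt
    return cost

gains, step, eps = np.array([1.0, 1.0]), 0.05, 1e-4
for _ in range(300):
    # Finite-difference gradient of the cost with respect to each gain.
    grad = np.array([(tracking_cost(gains + eps * np.eye(2)[i])
                      - tracking_cost(gains - eps * np.eye(2)[i])) / (2 * eps)
                     for i in range(2)])
    gains = gains - step * grad / (np.linalg.norm(grad) + 1e-12)  # steepest descent

print("tuned gains:", np.round(gains, 3), "cost:", round(tracking_cost(gains), 3))
```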

  15. Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport

    NASA Technical Reports Server (NTRS)

    Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon

    2017-01-01

    This study aims to develop a controllers' decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and the airspace of Korea. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated as two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by the departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for a practical implementation. This research was a collaboration between Korea Aerospace Research Institute (KARI) and National Aeronautics and Space Administration (NASA).
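
    To make the MILP formulation concrete, here is a hedged miniature of a single-runway departure sequencing model written with PuLP (an assumed solver library; the paper's actual model, covering multiple runways, taxiways, and MIT constraints at the departure fix, is far richer). The release times and separation value are invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Hypothetical flights: earliest possible runway time (minutes).
release = {"F1": 0, "F2": 2, "F3": 3, "F4": 5}
flights = list(release)
SEP, M = 2, 1000                      # runway separation (min), big-M constant

prob = LpProblem("runway_sequencing", LpMinimize)
t = LpVariable.dicts("t", flights, lowBound=0)                  # takeoff times
pairs = [(i, j) for i in flights for j in flights if i < j]
y = LpVariable.dicts("y", pairs, cat=LpBinary)                  # 1 if i before j

prob += lpSum(t[f] - release[f] for f in flights)               # total delay
for f in flights:
    prob += t[f] >= release[f]
for i, j in pairs:
    # Disjunctive separation: either i precedes j or j precedes i.
    prob += t[j] >= t[i] + SEP - M * (1 - y[(i, j)])
    prob += t[i] >= t[j] + SEP - M * y[(i, j)]

prob.solve()
for f in sorted(flights, key=lambda f: value(t[f])):
    print(f, "takes off at", value(t[f]))
```

    The big-M disjunction is the standard way to linearize the either-or ordering decision; it is also the main source of the computational cost the abstract mentions, since each flight pair adds a binary variable.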

  17. Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II of the paper, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike the continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP, but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.

  18. Optimal Scheduling of Time-Shiftable Electric Loads in Expeditionary Power Grids

    DTIC Science & Technology

    2015-09-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Optimal Scheduling of Time-Shiftable Electric Loads in Expeditionary Power Grids, by John G... ...eliminate unmanaged peak demand, reduce generator peak-to-average power ratios, and facilitate a persistent shift to higher fuel efficiency. Using

  19. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    Data from maintenance planning and control showed that the highest inventory value lies in non-routine components, i.e., components procured based on maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by re-evaluating the components required by maintenance activities. The roller mill system was chosen as the case because it has the highest unscheduled downtime record. The components required for each maintenance activity are determined from its failure distribution, so that the number of components needed can be predicted. Those components can then be reclassified from non-routine to routine components, so that procurement can be carried out regularly. Based on the conducted analysis, the maintenance tasks addressing almost every failure are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 were reclassified from non-routine to routine components. The reliability and demand for those components were then calculated for a one-year operation period. Based on these findings, it is suggested to change all of the relevant components during overhaul to increase the reliability of the roller mill system. The inventory system should also follow the maintenance schedule and the number of components required by maintenance activities, so that procurement cost decreases and system reliability increases.

  20. 48 CFR 1436.270-2 - Part I-The Schedule.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Construction 1436.270-2 Part I—The Schedule. The CO shall prepare the Schedule as follows: (a) Section A... reliability requirements (See FAR Part 46). (f) Section F, Deliveries or performance. Include Suspension of...

  1. 48 CFR 1436.270-2 - Part I-The Schedule.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Construction 1436.270-2 Part I—The Schedule. The CO shall prepare the Schedule as follows: (a) Section A... reliability requirements (See FAR Part 46). (f) Section F, Deliveries or performance. Include Suspension of...

  2. Tera-OP Reliable Intelligently Adaptive Processing System (TRIPS) Implementation

    DTIC Science & Technology

    2008-09-01

    Table of contents excerpt: 6.8 Instruction Scheduling; 6.8.1 Spatial Path Scheduling; 6.8.2... oblivious scheduling for rapid application prototyping and deployment, environmental adaptivity for resilience in hostile environments, and dynamic

  3. Evaluations of Some Scheduling Algorithms for Hard Real-Time Systems

    DTIC Science & Technology

    1990-06-01

    construct because the mechanism is a dispatching procedure. Since all nonpreemptive schedules are contained in the set of all preemptive schedules, the...optimal value of T_max in the preemptive case is at least a lower bound on the optimal T_max for the nonpreemptive schedules. This principle is the basis... b. Nonpreemptable Version... 4. The Minimize Maximum Tardiness with Earliest Start

  4. A decision support system for real-time hydropower scheduling in a competitive power market environment

    NASA Astrophysics Data System (ADS)

    Shawwash, Ziad Khaled Elias

    2000-10-01

    The electricity supply market is rapidly changing from a monopolistic to a competitive environment. Being able to operate their system of reservoirs and generating facilities to get maximum benefits out of existing assets and resources is important to the British Columbia Hydro Authority (B.C. Hydro). A decision support system has been developed to help B.C. Hydro operate their system in an optimal way. The system is operational and is one of the tools that are currently used by the B.C. Hydro system operations engineers to determine optimal schedules that meet the hourly domestic load and also maximize the value B.C. Hydro obtains from spot transactions in the Western U.S. and Alberta electricity markets. This dissertation describes the development and implementation of the decision support system in production mode. The decision support system consists of six components: the input data preparation routines, the graphical user interface (GUI), the communication protocols, the hydraulic simulation model, the optimization model, and the results display software. A major part of this work involved the development and implementation of a practical and detailed large-scale optimization model that determines the optimal tradeoff between the long-term value of water and the returns from spot trading transactions in real-time operations. The postmortem-testing phase showed that the gains in value from using the model accounted for 0.25% to 1.0% of the revenues obtained. The financial returns from using the decision support system greatly outweigh the costs of building it. Other benefits are the savings in the time needed to prepare the generation and trading schedules. The system operations engineers now can use the time saved to focus on other important aspects of their job. The operators are currently experimenting with the system in production mode, and are gradually gaining confidence that the advice it provides is accurate, reliable and sensible. The main lesson learned from developing and implementing the system was that there is no alternative to working very closely with the intended end-users of the system, and with the people who have deep knowledge, experience and understanding of how the system is and should be operated.

  5. Energy Storage Applications in Power Systems with Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Ghofrani, Mahmoud

    In this dissertation, we propose new operational and planning methodologies for power systems with renewable energy sources. A probabilistic optimal power flow (POPF) is developed to model wind power variations and evaluate the power system operation with intermittent renewable energy generation. The methodology is used to calculate the operating and ramping reserves that are required to compensate for power system uncertainties. Distributed wind generation is introduced as an operational scheme to take advantage of the spatial diversity of renewable energy resources and reduce wind power fluctuations using low or uncorrelated wind farms. The POPF is demonstrated using the IEEE 24-bus system where the proposed operational scheme reduces the operating and ramping reserve requirements and operation and congestion cost of the system as compared to operational practices available in the literature. A stochastic operational-planning framework is also proposed to adequately size, optimally place and schedule storage units within power systems with high wind penetrations. The method is used for different applications of energy storage systems for renewable energy integration. These applications include market-based opportunities such as renewable energy time-shift, renewable capacity firming, and transmission and distribution upgrade deferral in the form of revenue or reduced cost and storage-related societal benefits such as integration of more renewables, reduced emissions and improved utilization of grid assets. A power-pool model which incorporates the one-sided auction market into POPF is developed. The model considers storage units as market participants submitting hourly price bids in the form of marginal costs. This provides an accurate market-clearing process as compared to the 'price-taker' analysis available in the literature where the effects of large-scale storage units on the market-clearing prices are neglected. Different case studies are provided to demonstrate our operational-planning framework and economic justification for different storage applications. A new reliability model is proposed for security and adequacy assessment of power networks containing renewable resources and energy storage systems. The proposed model is used in combination with the operational-planning framework to enhance the reliability and operability of wind integration. The proposed framework optimally utilizes the storage capacity for reliability applications of wind integration. This is essential for justification of storage deployment within regulated utilities where the absence of market opportunities limits the economic advantage of storage technologies over gas-fired generators. A control strategy is also proposed to achieve the maximum reliability using energy storage systems. A cost-benefit analysis compares storage technologies and conventional alternatives to reliably and efficiently integrate different wind penetrations and determines the most economical design. Our simulation results demonstrate the necessity of optimal storage placement for different wind applications. This dissertation also proposes a new stochastic framework to optimally charge and discharge electric vehicles (EVs) to mitigate the effects of wind power uncertainties. Vehicle-to-grid (V2G) service for hedging against wind power imbalances is introduced as a novel application for EVs. 
This application enhances the predictability of wind power and reduces the power imbalances between the scheduled output and actual power. An Auto Regressive Moving Average (ARMA) wind speed model is developed to forecast the wind power output. Driving patterns of EVs are stochastically modeled and the EVs are clustered in the fleets of similar daily driving patterns. Monte Carlo Simulation (MCS) simulates the system behavior by generating samples of system states using the wind ARMA model and EVs driving patterns. A Genetic Algorithm (GA) is used in combination with MCS to optimally coordinate the EV fleets for their V2G services and minimize the penalty cost associated with wind power imbalances. The economic characteristics of automotive battery technologies and costs of V2G service are incorporated into a cost-benefit analysis which evaluates the economic justification of the proposed V2G application. Simulation results demonstrate that the developed algorithm enhances wind power utilization and reduces the penalty cost for wind power under-/over-production. This offers potential revenues for the wind producer. Our cost-benefit analysis also demonstrates that the proposed algorithm will provide the EV owners with economic incentives to participate in V2G services. The proposed smart scheduling strategy develops a sustainable integrated electricity and transportation infrastructure.
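
    As a hedged illustration of the ARMA wind-speed model mentioned above, the sketch below simulates an ARMA(2,1) series around a mean wind speed and produces a one-step-ahead forecast. The coefficients are invented for illustration, not fitted to any wind data, and the conversion from speed to power is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed ARMA(2,1) coefficients, invented rather than fitted to wind data.
phi1, phi2, theta1 = 0.8, -0.15, 0.4
mu, sigma, n = 8.0, 1.0, 500          # mean wind speed (m/s), shock std, samples

x = np.full(n, mu)
e = rng.normal(0.0, sigma, n)
for t in range(2, n):
    # ARMA recursion on deviations from the mean.
    x[t] = (mu + phi1 * (x[t - 1] - mu) + phi2 * (x[t - 2] - mu)
            + e[t] + theta1 * e[t - 1])

# One-step-ahead forecast: same recursion with the unknown new shock set to 0.
forecast = mu + phi1 * (x[-1] - mu) + phi2 * (x[-2] - mu) + theta1 * e[-1]
print(f"last observed: {x[-1]:.2f} m/s, next-step forecast: {forecast:.2f} m/s")
```

    In the framework described above, the gap between such a forecast and the realized output is the imbalance that the GA-coordinated EV fleets are scheduled to absorb.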

  6. Development of Watch Schedule Using Rules Approach

    NASA Astrophysics Data System (ADS)

    Jurkevicius, Darius; Vasilecas, Olegas

    Software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his/her own account and see his/her assignments, manage requests, etc. Employees set as administrators can perform the employee scheduling online, manage requests, etc. This scheduling software allows users not only to see the initial and optimized watch schedule in a simple and understandable form, but also to create special rules and criteria and input their business rules. Using these rules, the system automatically generates the watch schedule.

  7. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.

  8. Effective Staffing Takes a Village: Creating the Staffing Ecosystem.

    PubMed

    Gavigan, Margaret; Fitzpatrick, Therese A; Miserendino, Carole

    2016-01-01

    The traditional approaches to staffing and scheduling are often ineffective in assuring sufficient budgeting and deployment of staff to assure the right nurse at the right time for the right cost. As hospital merger activity increases, this exercise is further complicated by the need to rationalize staffing across multiple enterprises and standardize systems and processes. This Midwest hospital system successfully optimized staffing at the unit and enterprise levels by utilizing operations research methodologies. Savings were reinvested to improve staffing models which provided sufficient nonproductive coverage and patient-driven ratios. Over/under-staffing was eliminated in support of the system's recognition that adequate resource planning and deployment are critical to the culture of safety.

  9. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. Using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.
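
    The sketch below gives a hedged, much-simplified picture of the chromosome handling described above: a schedule is an operation sequence (standing in for the paper's permutation with repetition of parts), and a one-step look-ahead pass amends it by deferring any gene whose operation would be unsafe in the current state. The toy safety rule, never overfill or underflow a buffer, is a placeholder for the paper's Petri-net deadlock avoidance policy.

```python
def amend(genes, capacity=2):
    """One-step look-ahead amendment: repeatedly admit the first pending
    gene whose operation is safe in the current state, deferring the rest.
    Toy safety rule: a load must not overflow the buffer, an unload needs
    a part present (stand-in for the optimal deadlock avoidance policy)."""
    pending, schedule, level = list(genes), [], 0
    while pending:
        for k, g in enumerate(pending):
            if g == "load" and level < capacity:
                level += 1
            elif g == "unload" and level > 0:
                level -= 1
            else:
                continue            # unsafe now: try the next pending gene
            schedule.append(g)
            del pending[k]
            break
        else:
            break                   # no gene is currently admissible
    return schedule

# A raw chromosome that is infeasible as written (third load overflows).
chromosome = ["load", "load", "load", "unload",
              "load", "unload", "unload", "unload"]
print("amended feasible schedule:", amend(chromosome))
```

    The key property mirrored here is the one the abstract emphasizes: the check-and-amend pass runs in polynomial time, so every chromosome the genetic operators produce can be cheaply repaired into a feasible schedule.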

  10. Probabilistic Risk Assessment (PRA): A Practical and Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia L.; Ingegneri, Antonino J.; Djam, Melody

    2006-01-01

    The Lunar Reconnaissance Orbiter (LRO) is the first mission of the Robotic Lunar Exploration Program (RLEP), a space exploration venture to the Moon, Mars and beyond. The LRO mission includes a spacecraft developed by NASA Goddard Space Flight Center (GSFC) and seven instruments built by GSFC, Russia, and contractors across the nation. LRO is defined as a measurement mission, not a science mission; it emphasizes the overall objective of obtaining data to facilitate returning mankind safely to the Moon in preparation for an eventual manned mission to Mars. As the first mission in response to the President's commitment to explore the solar system and beyond, returning to the Moon in the next decade, then venturing further into the solar system, and ultimately sending humans to Mars, LRO has high visibility to the public but limited resources and a tight schedule. This paper demonstrates how NASA's Lunar Reconnaissance Orbiter Mission project office incorporated reliability analyses in assessing risks and performing design tradeoffs to ensure mission success. Risk assessment is performed using NASA Procedural Requirements (NPR) 8705.5 - Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects. As required, a limited-scope PRA is being performed for the LRO project. The PRA is used to optimize the mission design within mandated budget, manpower, and schedule constraints. The technique that the LRO project office uses to perform the PRA relies on the application of a component failure database to quantify potential mission success risks. To ensure mission success efficiently, at low cost and on a tight schedule, the traditional reliability analyses, such as reliability predictions, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), are used to perform the PRA for the large LRO system, with more than 14,000 piece parts and over 120 purchased or contractor-built components.

  11. Analysis Testing of Sociocultural Factors Influence on Human Reliability within Sociotechnical Systems: The Algerian Oil Companies.

    PubMed

    Laidoune, Abdelbaki; Rahal Gharbi, Med El Hadi

    2016-09-01

    The influence of sociocultural factors on human reliability within open sociotechnical systems is highlighted. The design of such systems is enhanced by experience feedback. The study was based on a survey involving the observation of working situations, the processing of incident/accident statistics, and semistructured interviews for the qualitative part. To consolidate the study approach, we used a questionnaire schedule for standardized statistical measurement. To avoid bias, we covered an exhaustive list of worker categories, including age, sex, educational level, prescribed task, accountability level, etc. The survey was reinforced by a schedule distributed to 300 workers belonging to two oil companies, comprising 30 items related to six main factors that influence human reliability. Qualitative observations and schedule data processing showed that sociocultural factors can influence operator behaviour both negatively and positively. The explored sociocultural factors influence human reliability in both qualitative and quantitative ways. The proposed model shows how reliability can be enhanced by measures such as experience feedback based on, for example, safety improvements, training, and information, together with continuous system improvements that strengthen the sociocultural environment and reduce negative behaviours.

  12. Coordination between Generation and Transmission Maintenance Scheduling by Means of Multi-agent Technique

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi; Tao, Yasuhiro; Utatani, Masahiro; Sasaki, Hiroshi; Fujita, Hideki

    This paper proposes a multi-agent approach to maintenance scheduling in restructured power systems. The restructuring of the electric power industry has resulted in market-based approaches for unbundling a multitude of services provided by self-interested entities such as power generating companies (GENCOs), transmission providers (TRANSCOs) and distribution companies (DISCOs). The Independent System Operator (ISO) is responsible for the security of system operation. The schedules submitted to the ISO by GENCOs and TRANSCOs should satisfy security and reliability constraints. The proposed method consists of several GENCO Agents (GAGs), TRANSCO Agents (TAGs) and an ISO Agent (IAG). The IAG's role in maintenance scheduling is limited to ensuring that the submitted schedules do not cause transmission congestion or endanger system reliability. From the simulation results, it can be seen that the proposed multi-agent approach can coordinate generation and transmission maintenance schedules.

  13. Request-Driven Schedule Automation for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol

    2010-01-01

    The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA's Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user's decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S(sup 3)), a web-based collaborative scheduling system now under development for deployment to all DSN users.

  14. Hybrid glowworm swarm optimization for task scheduling in the cloud environment

    NASA Astrophysics Data System (ADS)

    Zhou, Jing; Dong, Shoubin

    2018-06-01

    In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm based on glowworm swarm optimization (GSO), which combines a technique of evolutionary computation, a quantum-behaviour strategy based on the principle of neighbourhood, offspring production, and random walk to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces redundant computation and the dependence on the initialization of GSO, accelerates convergence, and escapes more easily from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.

  15. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread for high-reliability operation, and allocates threads that indicate a need for hardware checking to the corresponding paired processor cores. Alternatively, the method configures the selective pairing facility to provide multiple independent cores, and allocates threads that indicate inherent resilience to the corresponding processor cores.

  16. Scheduler Design Criteria: Requirements and Considerations

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    This presentation covers fundamental requirements and considerations for developing schedulers in airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among various optimization problems in airport operations, we focus on the airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling, such as the node-link network model and scheduling algorithms previously developed. Next, we explain how to design a mathematical formulation in more detail, consisting of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.

  17. Further Examination of the Reliability of the Modified Rathus Assertiveness Schedule.

    ERIC Educational Resources Information Center

    Del Greco, Linda; And Others

    1986-01-01

    Examined the reliability of the 30-item Modified Rathus Assertiveness Schedule (MRAS) using the test-retest method over a three-week period. The MRAS yielded correlations of .74 using the Pearson product-moment and Spearman-Brown correlation coefficients. Correlations for males were .77 and .72; for females, correlations for both tests were .72.

  18. Influence of Schizotypy on Responding and Contingency Awareness on Free-Operant Schedules of Reinforcement

    ERIC Educational Resources Information Center

    Randell, Jordan; Searle, Rob; Reed, Phil

    2012-01-01

    Schedules of reinforcement typically produce reliable patterns of behaviour, and one factor that can cause deviations from these normally reliable patterns is schizotypy. Low scorers on the unusual experiences subscale of the Oxford-Liverpool Inventory of Feelings and Experiences performed as expected on a yoked random-ratio (RR), random-interval…

  19. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication, with the advantages of broad bandwidth, large capacity and low power consumption, breaks the bottleneck of traditional microwave satellite communication. The formation of a space-based information system using high-performance optical inter-satellite communication, and the realization of global seamless coverage and mobile terminal access, are the necessary trend in the development of optical satellite communication. Considering the resources, missions and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial intelligence optimization is put forward. For multiple relay satellites, multiple user satellites, multiple optical antennas and multiple missions with several priority weights, resources are scheduled via two operations: "ascertain current mission scheduling time" and "refresh latter mission time-window". The priority weight is used as a parameter of the fitness function, and the scheduling project is optimized by a genetic algorithm. In a simulation scenario including 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the results reveal that the algorithm obtains satisfactory results in both efficiency and performance, and that the resource scheduling model and the optimization algorithm are suitable for the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.

  20. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize cloud computing task scheduling, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model and its fitness function are established; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to maintain both global and local search ability. Performance tests were carried out on the CloudSim simulation platform. The experimental results show that the improved differential evolution algorithm can reduce cloud computing task execution time and save user cost, achieving effective optimal scheduling of cloud computing tasks.
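
    A hedged miniature of differential evolution applied to task-to-VM assignment: each task gets a continuous gene that is floored to a machine index, and DE/rand/1 with binomial crossover minimizes the makespan. The task lengths, machine speeds, and the linearly decaying mutation factor (a stand-in for the paper's dynamic mutation strategy) are all invented.

```python
import numpy as np

rng = np.random.default_rng(11)

lengths = rng.integers(5, 50, size=15)     # hypothetical task lengths (MI)
speeds = np.array([1.0, 2.0, 4.0])         # hypothetical VM speeds (MIPS)
n_tasks, n_vms = len(lengths), len(speeds)

def makespan(x):
    vm = np.clip(x.astype(int), 0, n_vms - 1)    # decode gene -> VM index
    loads = np.zeros(n_vms)
    for t, m in enumerate(vm):
        loads[m] += lengths[t] / speeds[m]
    return loads.max()

NP, CR, gens = 30, 0.9, 300
pop = rng.random((NP, n_tasks)) * n_vms
fit = np.array([makespan(p) for p in pop])

for g in range(gens):
    F = 0.9 - 0.5 * g / gens                     # decaying mutation factor
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = np.clip(a + F * (b - c), 0, n_vms - 1e-9)
        cross = rng.random(n_tasks) < CR         # binomial crossover mask
        cross[rng.integers(n_tasks)] = True      # at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        f = makespan(trial)
        if f <= fit[i]:                          # greedy one-to-one selection
            pop[i], fit[i] = trial, f

print("best makespan:", fit.min())
```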

  1. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and each task is assigned to the processor with the minimum cumulative earliest finish time (EFT). The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm offers strong optimization ability, simplicity and feasibility, and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
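
    A hedged sketch of the EFT-based assignment step that underlies such schedulers: tasks are taken in a precedence-respecting priority order and each is placed on the processor giving the earliest finish time, accounting for predecessor completion. The DAG, the execution-time table, and the priority order are invented, and inter-processor communication costs are ignored for brevity.

```python
# Hypothetical DAG: task -> set of predecessor tasks.
preds = {"t1": set(), "t2": {"t1"}, "t3": {"t1"}, "t4": {"t2", "t3"}}
# Heterogeneous execution times: exec_time[task][processor].
exec_time = {"t1": [4, 6], "t2": [3, 2], "t3": [5, 3], "t4": [2, 4]}
priority_order = ["t1", "t2", "t3", "t4"]   # assumed precedence-respecting order

proc_ready = [0.0, 0.0]                     # when each processor becomes free
finish = {}                                 # task -> (processor, finish time)

for task in priority_order:
    # Earliest a task may start: all of its predecessors have finished.
    data_ready = max((finish[p][1] for p in preds[task]), default=0.0)
    best = None
    for p in range(len(proc_ready)):
        start = max(proc_ready[p], data_ready)
        eft = start + exec_time[task][p]
        if best is None or eft < best[1]:
            best = (p, eft)                 # keep the minimum-EFT processor
    proc, eft = best
    proc_ready[proc] = eft
    finish[task] = (proc, eft)
    print(f"{task} -> P{proc}, finishes at {eft}")

print("makespan:", max(f for _, f in finish.values()))
```

    In the paper's method, the priority list itself is what the CQPSO search optimizes; the greedy EFT placement above is the inner decoding step that turns any candidate list into a concrete schedule.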

  2. Rail Mounted Gantry Crane Scheduling Optimization in Railway Container Terminal Based on Hybrid Handling Mode

    PubMed Central

    Zhu, Xiaoning

    2014-01-01

    Rail mounted gantry crane (RMGC) scheduling is important in reducing the makespan of handling operations and improving container handling efficiency. In this paper, we present an RMGC scheduling optimization model whose objective is to determine an optimized handling sequence that minimizes RMGC idle load time across handling tasks. An ant colony optimization algorithm is proposed to obtain near-optimal solutions. Computational experiments on a specific railway container terminal are conducted to illustrate the proposed model and solution algorithm. The results show that the proposed method is effective in reducing the idle load time of the RMGC. PMID:25538768

  3. Optimizing energy for a ‘green’ vaccine supply chain

    PubMed Central

    Lloyd, John; McCarney, Steve; Ouhichi, Ramzi; Lydon, Patrick; Zaffran, Michel

    2015-01-01

    This paper describes an approach piloted in the Kasserine region of Tunisia to increase the energy efficiency of the distribution of vaccines and temperature sensitive drugs. The objectives of the approach, known as the ‘net zero energy’ (NZE) supply chain, were demonstrated within the first year of operation. The existing distribution system was modified to store vaccines and medicines in the same buildings and to transport them according to pre-scheduled and optimized delivery circuits. Electric utility vehicles, dedicated to the integrated delivery of vaccines and medicines, improved the regularity and reliability of the supply chains. Solar energy, linked to the electricity grid at regional and district stores, supplied over 100% of consumption, meeting all energy needs for storage, cooling and transportation. Significant benefits to the quality and costs of distribution were demonstrated. Supply trips were scheduled, integrated and reliable, energy consumption was reduced, the recurrent cost of electricity was eliminated and the release of carbon to the atmosphere was reduced. Although the initial capital cost of scaling up implementation of NZE remains high today, commercial forecasts predict cost reductions for solar energy and electric vehicles that may permit a step-wise implementation over the next 7–10 years. Efficiency in the use of energy and in the deployment of transport is already a critical component of distribution logistics in both private and public sectors of industrialized countries. The NZE approach has an intensified rationale in countries where energy costs threaten the maintenance of public health services in areas of low population density. In these countries, where the mobility of health personnel and the timely arrival of supplies is at risk, NZE has the potential to reduce energy costs and release recurrent budget to other needs of service delivery while also improving the supply chain. PMID:25444811

  4. The efficacy of a restart break for recycling with optimal performance depends critically on circadian timing.

    PubMed

    Van Dongen, Hans P A; Belenky, Gregory; Vila, Bryan J

    2011-07-01

    Under simulated shift-work conditions, we investigated the efficacy of a restart break for maintaining neurobehavioral functioning across consecutive duty cycles, as a function of the circadian timing of the duty periods. As part of a 14-day experiment, subjects underwent two cycles of five simulated daytime or nighttime duty days, separated by a 34-hour restart break. Cognitive functioning and high-fidelity driving simulator performance were tested 4 times per day during the two duty cycles. Lapses on a psychomotor vigilance test (PVT) served as the primary outcome variable. Selected sleep periods were recorded polysomnographically. The experiment was conducted under standardized, controlled laboratory conditions with continuous monitoring. Twenty-seven healthy adults (13 men, 14 women; aged 22-39 years) participated in the study. Subjects were randomly assigned to a nighttime duty (experimental) condition or a daytime duty (control) condition. The efficacy of the 34-hour restart break for maintaining neurobehavioral functioning from the pre-restart duty cycle to the post-restart duty cycle was compared between these two conditions. Relative to the daytime duty condition, the nighttime duty condition was associated with reduced amounts of sleep, whereas sleep latencies were shortened and slow-wave sleep appeared to be conserved. Neurobehavioral performance measures ranging from lapses of attention on the PVT to calculated fuel consumption on the driving simulators remained optimal across time of day in the daytime duty schedule, but degraded across time of night in the nighttime duty schedule. The 34-hour restart break was efficacious for maintaining PVT performance and other objective neurobehavioral functioning profiles from one duty cycle to the next in the daytime duty condition, but not in the nighttime duty condition. Subjective sleepiness did not reliably track objective neurobehavioral deficits. The 34-hour restart break was adequate for maintaining performance in the case of optimal circadian placement of sleep and duty periods (control condition) but was inadequate (and perhaps even detrimental) for maintaining performance in a simulated nighttime duty schedule (experimental condition). Current US transportation hours-of-service regulations mandate time off duty but do not consider the circadian aspects of shift scheduling. Reinforcing a recent trend of applying sleep science to inform policymaking for duty and rest times, our findings indicate that restart provisions in hours-of-service regulations could be improved by taking the circadian timing of the duty schedules into account.

  5. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Tang, Dunbing; Dai, Min

    2015-09-01

    The traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper presents an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations, while keeping the original assignment and processing sequence of the operations of each job fixed, in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, adjusts the total idle time of the given schedule to minimize energy consumption in the job shop floor while preserving the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is demonstrated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal makespan in small-size instances, and the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption, and approximately 2.4% of the maximum energy, while accepting a near-optimal makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization for traditional production planning and scheduling problems.

  6. Ensuring the Reliable Operation of the Power Grid: State-Based and Distributed Approaches to Scheduling Energy and Contingency Reserves

    NASA Astrophysics Data System (ADS)

    Prada, Jose Fernando

    Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.

  7. Power plant maintenance scheduling using ant colony optimization: an improved formulation

    NASA Astrophysics Data System (ADS)

    Foong, Wai Kuan; Maier, Holger; Simpson, Angus

    2008-04-01

    It is common practice in the hydropower industry to either shorten the maintenance duration or to postpone maintenance tasks in a hydropower system when there is expected unserved energy based on current water storage levels and forecast storage inflows. It is therefore essential that a maintenance scheduling optimizer can incorporate the options of shortening the maintenance duration and/or deferring maintenance tasks in the search for practical maintenance schedules. In this article, an improved ant colony optimization-power plant maintenance scheduling optimization (ACO-PPMSO) formulation that considers such options in the optimization process is introduced. As a result, both the optimum commencement time and the optimum outage duration are determined for each of the maintenance tasks that need to be scheduled. In addition, a local search strategy is presented in this article to boost the robustness of the algorithm. When tested on a five-station hydropower system problem, the improved formulation is shown to be capable of allowing shortening of maintenance duration in the event of expected demand shortfalls. In addition, the new local search strategy is also shown to have significantly improved the optimization ability of the ACO-PPMSO algorithm.

  8. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector in hospital expenditure, the operation management of which involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. The 10-day manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. Comparison results show the advantage of the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between first and following cases, surgery priority, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive overall operation cost.

  9. Instructional versus schedule control of humans' choices in situations of diminishing returns

    PubMed Central

    Hackenberg, Timothy D.; Joker, Veronica R.

    1994-01-01

    Four adult humans chose repeatedly between a fixed-time schedule (of points later exchangeable for money) and a progressive-time schedule that began at 0 s and increased by a fixed number of seconds with each point delivered by that schedule. Each point delivered by the fixed-time schedule reset the requirements of the progressive-time schedule to its minimum value. Subjects were provided with instructions that specified a particular sequence of choices. Under the initial conditions, the instructions accurately specified the optimal choice sequence. Thus, control by instructions and optimal control by the programmed contingencies both supported the same performance. To distinguish the effects of instructions from schedule sensitivity, the correspondence between the instructed and optimal choice patterns was gradually altered across conditions by varying the step size of the progressive-time schedule while maintaining the same instructions. Step size was manipulated, typically in 1-s units, first in an ascending and then in a descending sequence of conditions. Instructions quickly established control in all 4 subjects but, by narrowing the range of choice patterns, they reduced subsequent sensitivity to schedule changes. Instructional control was maintained across the ascending sequence of progressive-time values for each subject, but eventually diminished, giving way to more schedule-appropriate patterns. The transition from instruction-appropriate to schedule-appropriate behavior was characterized by an increase in the variability of choice patterns and local increases in point density. On the descending sequence of progressive-time values, behavior appeared to be schedule sensitive, sometimes even optimally sensitive, but it did not always change systematically with the contingencies, suggesting the involvement of other factors. PMID:16812747

  10. Simulated Stochastic Approximation Annealing for Global Optimization with a Square-Root Cooling Schedule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Faming; Cheng, Yichen; Lin, Guang

    2014-06-13

    Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford to have such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing the global optima to be reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein-folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
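
    To make the cooling-schedule contrast concrete, the sketch below runs plain simulated annealing with T_k = T0/sqrt(k) on a toy test function. Note this is ordinary SA under a faster schedule, not the paper's SAA algorithm, which additionally uses stochastic approximation MCMC to retain convergence guarantees at this cooling rate.

      import math, random

      def sqrt_cooling_sa(f, x0, t0=10.0, iters=20000, step=0.5):
          """Simulated annealing with the square-root schedule T_k = T0 / sqrt(k),
          which cools much faster than the classical logarithmic T_k = T0 / log(k)."""
          x, fx = x0, f(x0)
          best, fbest = x, fx
          for k in range(1, iters + 1):
              t = t0 / math.sqrt(k)
              y = [xi + random.gauss(0, step) for xi in x]
              fy = f(y)
              # Metropolis rule: always take improvements, sometimes accept worse.
              if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
          return best, fbest

      # Toy multimodal test: Rastrigin in 2-D (global minimum 0 at the origin).
      rastrigin = lambda v: 10 * len(v) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in v)
      print(sqrt_cooling_sa(rastrigin, [3.0, -2.0]))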

  11. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  12. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. This paper examines some of the factors which make space application scheduling problems difficult and presents a fairly new AI-based technique called tabu search as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).

  13. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.

  14. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). First, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-scheduling problem. Second, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
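
    The NEH sub-step the abstract relies on is short enough to sketch directly; the bat-specific operators (pulse emission, loudness, neighborhood search) are omitted. The 5-job, 3-machine instance is invented for illustration.

      def makespan(perm, p):
          """Completion time of the last job on the last machine for a
          permutation flow shop with processing times p[job][machine]."""
          m = len(p[0])
          c = [0.0] * m
          for j in perm:
              c[0] += p[j][0]
              for k in range(1, m):
                  c[k] = max(c[k], c[k - 1]) + p[j][k]
          return c[-1]

      def neh(p):
          """NEH: order jobs by decreasing total work, then insert each job at
          the position of the partial sequence that minimizes partial makespan."""
          order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
          seq = []
          for j in order:
              seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                        key=lambda s: makespan(s, p))
          return seq

      # Toy instance: 5 jobs x 3 machines (times are illustrative).
      P = [[3, 4, 6], [5, 4, 2], [1, 2, 7], [6, 5, 3], [2, 3, 3]]
      s = neh(P)
      print("NEH sequence:", s, "makespan:", makespan(s, P))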

  15. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). First, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm: it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-scheduling problem. Second, some subsequences are operated on with a certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  16. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
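
    A stripped-down version of such an IP can be written in a few lines; the sketch below (using the open-source PuLP modeler) assumes hypothetical drugs, trial designs, costs and eNPV figures, and captures only the budget constraint and the one-design-per-drug choice, not the paper's full Bayesian decision model.

      # pip install pulp
      from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

      # Hypothetical phase-3 candidates: each drug has mutually exclusive trial
      # designs (sample sizes) with a cost and an expected NPV if that design runs.
      designs = {            # drug -> [(label, cost $M, eNPV $M), ...]
          "A": [("small", 40, 120), ("large", 70, 150)],
          "B": [("small", 30, 60), ("large", 55, 95)],
          "C": [("small", 50, 80), ("large", 90, 130)],
      }
      BUDGET = 150  # $M

      prob = LpProblem("phase3_portfolio", LpMaximize)
      x = {(d, i): LpVariable(f"x_{d}_{i}", cat=LpBinary)
           for d, opts in designs.items() for i in range(len(opts))}

      # Objective: maximize total expected NPV of the funded trials.
      prob += lpSum(designs[d][i][2] * x[d, i] for d, i in x)
      # Budget constraint across the whole portfolio.
      prob += lpSum(designs[d][i][1] * x[d, i] for d, i in x) <= BUDGET
      # At most one design (sample size) per drug.
      for d, opts in designs.items():
          prob += lpSum(x[d, i] for i in range(len(opts))) <= 1

      prob.solve()
      for (d, i), var in x.items():
          if var.value() == 1:
              print(f"fund drug {d} with {designs[d][i][0]} trial")
      print("portfolio eNPV:", value(prob.objective))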

  17. Optimizing Industrial Consumer Demand Response Through Disaggregation, Hour-Ahead Pricing, and Momentary Autonomous Control

    NASA Astrophysics Data System (ADS)

    Abdulaal, Ahmed

    The work in this study addresses the current limitations of the price-driven demand response (DR) approach. Chiefly, the dependence on consumers responding in an energy-aware manner, the timeliness of the response, the difficulty of applying DR in a busy industrial environment, and the problem of load synchronization are of utmost concern. In order to conduct a simulation study, a realistic price simulation model and consumer building load models are created using real data. DR action is optimized using an autonomous control method, which eliminates the dependency on frequent consumer engagement. Since load scheduling and long-term planning approaches are infeasible in the industrial environment, the proposed method utilizes instantaneous DR in response to hour-ahead price signals (RTP-HA). Preliminary simulation results showed savings on the consumer side at the cost of an increased supplier-side burden, due to the aggregate effect of universal DR policies. Therefore, a consumer disaggregation strategy is briefly discussed. Finally, a refined discrete-continuous control system is presented, which utilizes multi-objective Pareto optimization, evolutionary programming, utility functions, and bidirectional loads. Demonstrated through a virtual testbed fit with real data, the new system achieves momentary optimized DR in real time while maximizing the consumer's wellbeing.

  18. Simulation-Driven Design Approach for Design and Optimization of Blankholder

    NASA Astrophysics Data System (ADS)

    Sravan, Tatipala; Suddapalli, Nikshep R.; Johan, Pilthammar; Mats, Sigvant; Christian, Johansson

    2017-09-01

    Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today mostly based on casting feasibility, although it can also be based on criteria for fatigue, stiffness, safety and economy. The current work presents an approach built on Simulation Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for Volvo V70/S80 car models, is studied. This die had developed cracks during its usage. To understand the stress distribution in the stamping die, structural analysis of the die is conducted and critical regions with high stresses are identified. The results from the structural FE-models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, the identified critical regions of the die are set as design variables. Other optimization variables are set to maintain manufacturability of the resulting stamping die. Thereafter a CAD model is built based on the geometrical results from the topology and free-shape optimizations. The CAD model is then subjected to structural analysis to visualize the new stress distribution. This process is iterated until a satisfactory result is obtained. The final results show a reduction in stress levels of 70% with a more homogeneous distribution. Even though the mass of the die is increased by 17%, overall a stiffer die with a better lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.

  19. Advancing the LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group

    2013-01-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focussed on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.

  20. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase the income of a hospital and improve patient satisfaction. In this paper, a multi-objective operation scheduling method using an improved NSGA algorithm is applied to shorten operation time, reduce operation cost and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling; the Pareto solution set is obtained through MATLAB simulation and the data are standardized. The optimal scheduling scheme is then selected using a combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.

  1. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    NASA Astrophysics Data System (ADS)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. Results indicate that there are, at present, co-benefits for emissions reductions when customers adopt and operate microgrids for private benefit, though future analysis is needed as the bulk grid continues to transition toward a less carbon intensive system.

  2. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH

    PubMed Central

    Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok

    2018-01-01

    Time Slotted Channel Hopping (TSCH) is widely used in the industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency for TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account the network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as a routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously by utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network could transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance with existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and higher packet delivery ratio compared to the existing schemes regardless of the network topology. PMID:29659508

  3. Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH.

    PubMed

    Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok

    2018-04-16

    Time Slotted Channel Hopping (TSCH) is widely used in the industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency for TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account the network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as a routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously by utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network could transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance with existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and higher packet delivery ratio compared to the existing schemes regardless of the network topology.

  4. Understanding London's Water Supply Tradeoffs When Scheduling Interventions Under Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.

    2015-12-01

    Water supply planning in many major world cities faces several challenges associated with but not limited to climate change, population growth and insufficient land availability for infrastructure development. Long-term plans to maintain supply-demand balance and ecosystem services require careful consideration of uncertainties associated with future conditions. The current approach for London's water supply planning utilizes least cost optimization of future intervention schedules with limited uncertainty consideration. Recently, the focus of the long-term plans has shifted from solely least cost performance to robustness and resilience of the system. Identifying robust scheduling of interventions requires optimizing over a statistically representative sample of stochastic inputs which may be computationally difficult to achieve. In this study we optimize schedules using an ensemble of plausible scenarios and assess how manipulating that ensemble influences the different Pareto-approximate intervention schedules. We investigate how a major stress event's location in time as well as the optimization problem formulation influence the Pareto-approximate schedules. A bootstrapping method that respects the non-stationary trend of climate change scenarios and ensures the even distribution of the major stress event in the scenario ensemble is proposed. Different bootstrapped hydrological scenario ensembles are assessed using many-objective scenario optimization of London's future water supply and demand intervention scheduling. However, such a "fixed" scheduling of interventions approach does not aim to embed flexibility or adapt effectively as the future unfolds. Alternatively, making decisions based on the observations of occurred conditions could help planners who prefer adaptive planning. We will show how rules to guide the implementation of interventions based on observations may result in more flexible strategies.

  5. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    Problems appear in a company that produces refined sugar: the production floor has not reached the required level of critical machine availability because machines often suffer damage (breakdown). This results in sudden losses of production time and production opportunities. This problem can be addressed with the Reliability Engineering method, in which a statistical approach to historical failure data is used to identify the distribution pattern. The method provides values for the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data give a lognormal distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component; distribution tests on the mean time to repair (MTTR) data give an exponential distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component. For the flexible hose component on a replacement schedule of every 720 hours, the reliability is 0.2451 and the availability 0.9960; for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the reliability is 0.4083 and the availability 0.9927.
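
    The quantities reported above follow from standard reliability formulas. The sketch below assumes hypothetical Weibull parameters (beta, eta), chosen only so the numbers land near the reported values; the paper's fitted parameters are not given in the abstract.

      import math

      def weibull_reliability(t, beta, eta):
          """R(t) = exp(-(t/eta)^beta): probability the component survives to time t."""
          return math.exp(-((t / eta) ** beta))

      def availability(mttf, mttr):
          """Steady-state availability from mean time to failure and mean time to repair."""
          return mttf / (mttf + mttr)

      # Hypothetical Weibull parameters for the teflon cone lifting component; the
      # abstract reports R = 0.4083 at the 1944-h replacement interval, and values
      # of roughly this magnitude reproduce numbers in that range.
      beta, eta = 1.5, 2100.0
      print("R(1944 h) = %.4f" % weibull_reliability(1944, beta, eta))
      print("A = %.4f" % availability(mttf=2000.0, mttr=15.0))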

  6. Residual Stress Developed During the Cure of Thermosetting Polymers: Optimizing Cure Schedule to Minimize Stress.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kropka, Jamie Michael; Stavig, Mark E.; Jaramillo, Rex

    When thermosetting polymers are used to bond or encapsulate electrical, mechanical or optical assemblies, residual stress, which often affects the performance and/or reliability of these devices, develops within the structure. The Thin-Disk-on-Cylinder structural response test is demonstrated as a powerful tool to design epoxy encapsulant cure schedules that reduce residual stress, even when all the details of the material evolution during cure are not explicitly known. The test's ability to (1) distinguish between cohesive and adhesive failure modes and (2) demonstrate methodologies to eliminate failure and reduce residual stress make the choice of cure schedules that optimize stress in the encapsulant unambiguous. For the 828/DEA/GMB material in the Thin-Disk-on-Cylinder geometry, the stress associated with cure is significant and outweighs that associated with cool-down from the final cure temperature to room temperature (for the measured lid strain, |ε_cure| > |ε_thermal|). The difference between the final cure temperature and the temperature at which the material gels, T_f - T_gel, was demonstrated to be a primary factor in determining the residual stress associated with cure. Increasing T_f - T_gel leads to a reduction in cure stress, described as arising from balancing some of the 828/DEA/GMB cure shrinkage with thermal expansion. The ability to tune residual stress associated with cure by controlling T_f - T_gel would be anticipated to translate to other thermosetting encapsulation materials, but the times and temperatures appropriate for a given material may vary widely.

  7. A pharmacokinetic model of filgrastim and pegfilgrastim application in normal mice and those with cyclophosphamide-induced granulocytopaenia.

    PubMed

    Scholz, M; Ackermann, M; Engel, C; Emmrich, F; Loeffler, M; Kamprad, M

    2009-12-01

    Recombinant human granulocyte colony-stimulating factor (rhG-CSF) is widely used as treatment for granulocytopaenia during cytotoxic chemotherapy; however, optimal scheduling of this pharmaceutical is unknown. Biomathematical models can help to pre-select optimal application schedules but precise pharmacokinetic properties of the pharmaceuticals are required first. In this study, we have aimed to construct a pharmacokinetic model of the G-CSF derivatives filgrastim and pegfilgrastim in mice. Healthy CD-1 mice and those with cyclophosphamide-induced granulocytopaenia were studied after administration of filgrastim and pegfilgrastim in different dosing and timing schedules. Closely meshed time series of granulocytes and G-CSF plasma concentrations were determined. An ordinary differential equations model of pharmacokinetics was constructed on the basis of known mechanisms of drug distribution and degradation. Predictions of the model fit well with all experimental data for both filgrastim and pegfilgrastim. We obtained a unique parameter setting for all experimental scenarios. Differences in pharmacokinetics between filgrastim and pegfilgrastim can be explained by different estimates of model parameters rather than by different model mechanisms. Parameter estimates with respect to distribution and clearance of the drug derivatives are in agreement with qualitative experimental results. Dynamics of filgrastim and pegfilgrastim plasma levels can be explained by the same pharmacokinetic model but different model parameters. Because of a strong clearance mechanism mediated by granulocytes, granulocytotic and granulocytopaenic conditions must be studied simultaneously to construct a reliable model. The pharmacokinetic model will be extended to a murine model of granulopoiesis under chemotherapy and G-CSF application.
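
    A deliberately simplified one-compartment version of the clearance mechanism described above can be written as a single ODE in which the elimination rate grows with the granulocyte count. All rate constants and the fixed granulocyte profile g_of_t below are illustrative assumptions, not the paper's fitted model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # G-CSF is cleared both renally (first-order, k_ren) and by a
      # granulocyte-mediated route, so clearance grows with G(t).
      def gcsf_rhs(t, y, k_ren, k_gran, g_of_t):
          c = y[0]
          return [-(k_ren + k_gran * g_of_t(t)) * c]

      # Granulocyte count rising after the dose (illustrative profile).
      g_of_t = lambda t: 2.0 + 1.5 * np.tanh((t - 24.0) / 12.0)
      sol = solve_ivp(gcsf_rhs, (0.0, 72.0), [100.0],        # 100 ng/mL initial level
                      args=(0.05, 0.04, g_of_t), dense_output=True)

      for t in (0, 12, 24, 48, 72):
          print(f"t={t:2d} h  C={sol.sol(t)[0]:7.2f} ng/mL")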

  8. The Basic Organizing/Optimizing Training Scheduler (BOOTS): User's Guide. Technical Report 151.

    ERIC Educational Resources Information Center

    Church, Richard L.; Keeler, F. Laurence

    This report provides the step-by-step instructions required for using the Navy's Basic Organizing/Optimizing Training Scheduler (BOOTS) system. BOOTS is a computerized tool designed to aid in the creation of master training schedules for each Navy recruit training command. The system is defined in terms of three major functions: (1) data file…

  9. Temporal Data-Driven Sleep Scheduling and Spatial Data-Driven Anomaly Detection for Clustered Wireless Sensor Networks

    PubMed Central

    Li, Gang; He, Bin; Huang, Hongwei; Tang, Limin

    2016-01-01

    The spatial–temporal correlation is an important feature of sensor data in wireless sensor networks (WSNs). Most of the existing works based on the spatial–temporal correlation can be divided into two parts: redundancy reduction and anomaly detection. These two parts are pursued separately in existing works. In this work, the combination of temporal data-driven sleep scheduling (TDSS) and spatial data-driven anomaly detection is proposed, where TDSS can reduce data redundancy. The TDSS model is inspired by transmission control protocol (TCP) congestion control. Based on long and linear cluster structure in the tunnel monitoring system, cooperative TDSS and spatial data-driven anomaly detection are then proposed. To realize synchronous acquisition in the same ring for analyzing the situation of every ring, TDSS is implemented in a cooperative way in the cluster. To keep the precision of sensor data, spatial data-driven anomaly detection based on the spatial correlation and Kriging method is realized to generate an anomaly indicator. The experiment results show that cooperative TDSS can realize non-uniform sensing effectively to reduce the energy consumption. In addition, spatial data-driven anomaly detection is quite significant for maintaining and improving the precision of sensor data. PMID:27690035

  10. Developing optimal nurses work schedule using integer programming

    NASA Astrophysics Data System (ADS)

    Shahidin, Ainon Mardhiyah; Said, Mohd Syazwan Md; Said, Noor Hizwan Mohamad; Sazali, Noor Izatie Amaliena

    2017-08-01

    Time management is the art of arranging, organizing and scheduling one's time for the purpose of generating more effective work and productivity. Scheduling is the process of deciding how to commit resources among a variety of possible tasks. Thus, it is crucial for every organization to have a good work schedule for its staff. The job of ward nurses at hospitals runs 24 hours every day; therefore, nurses work in shifts. This study aims to solve the nurse scheduling problem at the emergency ward of a private hospital. A 7-day work schedule for 7 consecutive weeks satisfying all the constraints set by the hospital is developed using Integer Programming. The work schedule obtained for the nurses gives an optimal solution in which all the constraints are satisfied.
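
    A toy version of such a formulation, again with PuLP, is shown below. The coverage demands, one-shift-per-day rule and weekly cap are illustrative stand-ins for the hospital's actual constraint set.

      from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

      # Toy ward: 7 nurses x 7 days x 3 shifts, with invented coverage demands.
      NURSES, DAYS, SHIFTS = range(7), range(7), ("morning", "evening", "night")
      DEMAND = {"morning": 2, "evening": 2, "night": 1}

      prob = LpProblem("nurse_roster", LpMinimize)
      x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat=LpBinary)
           for n in NURSES for d in DAYS for s in SHIFTS}

      prob += lpSum(x.values())                       # minimize total assigned shifts
      for d in DAYS:
          for s in SHIFTS:                            # meet coverage each day/shift
              prob += lpSum(x[n, d, s] for n in NURSES) >= DEMAND[s]
          for n in NURSES:                            # at most one shift per day
              prob += lpSum(x[n, d, s] for s in SHIFTS) <= 1
      for n in NURSES:                                # workload cap per week
          prob += lpSum(x[n, d, s] for d in DAYS for s in SHIFTS) <= 6

      prob.solve()
      for n in NURSES:
          row = ["".join(s[0] for s in SHIFTS if x[n, d, s].value() == 1) or "-"
                 for d in DAYS]
          print(f"nurse {n}:", " ".join(row))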

  11. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition where several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep growing to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use Pareto optimality (MPSO). The results of MPSO are better than those of the basic PSO because the MPSO solution set has a higher probability of containing the optimal solution, and the MPSO solution set is closer to the optimal solution.
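
    The Pareto-optimal bookkeeping that distinguishes MPSO from plain PSO reduces to a non-dominated filter over objective tuples. The sketch below shows that filter alone, on made-up (makespan, tardiness, idle) values; the swarm dynamics and the probability transition matrix are omitted.

      def dominates(a, b):
          """a dominates b if a is no worse in every objective and strictly better
          in one (makespan, tardiness and idle time are all minimized)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(solutions):
          """Keep the non-dominated set, as an MPSO archive would."""
          return [s for s in solutions
                  if not any(dominates(o, s) for o in solutions if o is not s)]

      # Candidate schedules as (makespan, total tardiness, total idle) tuples.
      cands = [(42, 10, 7), (40, 12, 9), (45, 8, 6), (40, 12, 10), (39, 15, 8)]
      print(pareto_front(cands))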

  12. Integrating Solar PV in Utility System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, A.; Botterud, A.; Wu, J.

    2013-10-31

    This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, and day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., sub-hourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling. Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation.
Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score must exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimates of reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.
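
    The CPS2 calculation itself is simple to state in code: the sketch below scores a synthetic 1-minute area control error (ACE) trace, with an illustrative L10 bound; NERC compliance corresponds to a score of at least 90%.

      import numpy as np

      def cps2(ace_1min, l10):
          """CPS2: fraction of clock 10-minute windows whose average area control
          error (ACE) stays inside the bound L10; compliance requires >= 90%."""
          n = len(ace_1min) // 10 * 10                 # whole 10-min windows only
          windows = np.asarray(ace_1min[:n]).reshape(-1, 10)
          ok = np.abs(windows.mean(axis=1)) <= l10
          return 100.0 * ok.mean()

      # Synthetic 1-min ACE trace (MW) for one day; the L10 bound is illustrative.
      rng = np.random.default_rng(0)
      ace = rng.normal(0.0, 25.0, 1440) + 10.0 * np.sin(np.arange(1440) / 120.0)
      print(f"CPS2 = {cps2(ace, l10=50.0):.1f}%  (NERC floor: 90%)")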

  13. Optimization of scheduling system for plant watering using electric cars in agro techno park

    NASA Astrophysics Data System (ADS)

    Oktavia Adiwijaya, Nelly; Herlambang, Yudha; Slamin

    2018-04-01

    Agro Techno Park at the University of Jember is a special area used for the development of agriculture, livestock and fishery. In this plantation, plants are watered according to the frequency each plant needs. This research develops an optimization of the plant watering scheduling system using edge coloring of a graph. The research was conducted in 3 stages: a data collection phase, an analysis phase, and a system development stage. The collected data were analyzed and then converted into a graph using a bipartite adjacency matrix representation. The development phase was conducted to build a web-based watering schedule optimization system. The results show that the scheduling system is optimal in that it maximizes the use of all electric cars to water the plants and minimizes the number of idle cars.
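
    A sketch of the edge-coloring view, under invented car/plot pairs: watering jobs are edges of a bipartite graph and colors are timeslots, so a proper edge coloring guarantees no car or plot is double-booked. The greedy rule below always yields a conflict-free schedule, though unlike an optimal bipartite edge coloring it is not guaranteed to use only Delta (maximum-degree) slots.

      # Cars and plant plots form the two vertex sets; each required watering job
      # is an edge; a color is a timeslot. The pairs below are illustrative.
      JOBS = [("car1", "plotA"), ("car1", "plotB"), ("car2", "plotA"),
              ("car2", "plotC"), ("car3", "plotB"), ("car3", "plotC")]

      def greedy_edge_coloring(edges):
          used = {}                       # vertex -> set of timeslots already taken
          slots = {}
          for e in edges:
              u, v = e
              busy = used.setdefault(u, set()) | used.setdefault(v, set())
              slot = next(s for s in range(len(edges)) if s not in busy)  # smallest free
              slots[e] = slot
              used[u].add(slot)
              used[v].add(slot)
          return slots

      for job, slot in greedy_edge_coloring(JOBS).items():
          print(f"{job[0]} waters {job[1]} in timeslot {slot}")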

  14. A note on resource allocation scheduling with group technology and learning effects on a single machine

    NASA Astrophysics Data System (ADS)

    Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu

    2017-09-01

    In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time under the condition that the setup times of groups are independent. For the general setup times of groups, a heuristic algorithm and a branch-and-bound algorithm are proposed, respectively. Computational experiments show that the performance of the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.

  15. Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)

    1997-01-01

    Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.

  16. Computer-aided resource planning and scheduling for radiological services

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  17. A hierarchical scheduling and management solution for dynamic reconfiguration in FPGA-based embedded systems

    NASA Astrophysics Data System (ADS)

    Cervero, T.; Gómez, A.; López, S.; Sarmiento, R.; Dondo, J.; Rincón, F.; López, J. C.

    2013-05-01

    One of the limiting factors that have prevented wide dissemination of reconfigurable technology is the absence of an appropriate model, for certain target applications, capable of offering reliable control. Moreover, the lack of flexible and easy-to-use scheduling and management systems is also a relevant drawback. Under static scenarios, it is relatively easy to schedule and manage the reconfiguration process, since all variations correspond to predetermined and well-known tasks. However, the difficulty increases when the adaptation needs of the overall system change semi-randomly with environmental fluctuations. In this context, this work proposes a change in the paradigm of dynamically reconfigurable systems, attending to the dynamically reconfigurable control problem as a whole: the scheduling and placement issues are packed together in a hierarchical management structure, interacting as one entity from the system point of view but performing their tasks with a certain degree of independence from each other. The top hierarchical level corresponds to a dynamic scheduler in charge of planning and adjusting all the reconfigurable modules according to variations in external stimuli. The lower level interacts with the physical layer of the device by instantiating, relocating or removing a reconfigurable module following the scheduler's instructions. Regarding speed, the total partial reconfiguration time achieved with this proposal has been measured and compared with two other approaches: 1) using traditional Xilinx tools; 2) using an optimized version of the Xilinx drivers. The collected numbers demonstrate that our solution is up to 10 times faster than the other approaches.

  18. The development of a structured rating schedule (the BAS) to assess skills in breaking bad news

    PubMed Central

    Miller, S J; Hope, T; Talbot, D C

    1999-01-01

    There has been considerable interest in how doctors break bad news, with calls from within the profession and from patients for doctors to improve their communication skills. In order to aid clinical training and assessment of the skills used in breaking bad news, there is a need for a reliable, practical and valid structured rating schedule. Such a rating schedule was compiled from agreed criteria in the literature. Video-taped recordings of simulated consultations breaking bad news were independently assessed by three raters using the schedule and compared to three experts who gave global ratings. The primary outcome measures were internal consistency of the schedule and level of agreement between raters. The internal consistency was high, with a Cronbach's alpha of 0.93. Agreement between raters using the schedule was moderate to good. The majority of the variation in scores was due to the differences in skills demonstrated in the interviews. The agreement between raters not using the schedule was poor. The BAS provides a simple-to-use, reliable, and consistent rating schedule for assessing skills used in breaking bad news. It could be a valuable aid to teaching this difficult task. © 1999 Cancer Research Campaign PMID:10360657
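
    The internal-consistency statistic reported above is straightforward to compute from an observations-by-items score matrix. Below is a minimal Python sketch; the score matrix is invented for illustration and is not data from the study:

        import numpy as np

        def cronbach_alpha(ratings):
            """Cronbach's alpha for an (observations x items) score matrix."""
            ratings = np.asarray(ratings, dtype=float)
            k = ratings.shape[1]                         # number of items on the schedule
            item_vars = ratings.var(axis=0, ddof=1)      # per-item variance
            total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scores
            return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

        # Illustrative data: 6 simulated consultations scored on 5 items
        scores = [[4, 5, 4, 4, 5],
                  [2, 2, 3, 2, 2],
                  [3, 4, 3, 3, 4],
                  [5, 5, 5, 4, 5],
                  [1, 2, 1, 2, 1],
                  [3, 3, 4, 3, 3]]
        print(f"alpha = {cronbach_alpha(scores):.2f}")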

  19. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high-risk system components. We present experimental results obtained by classifying Ada components into two classes: likely or not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error-making process.

  20. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.

  1. Optimizing donor scheduling before recruitment: An effective approach to increasing apheresis platelet collections.

    PubMed

    Lokhandwala, Parvez M; Shike, Hiroko; Wang, Ming; Domen, Ronald E; George, Melissa R

    2018-01-01

    The typical approach for increasing apheresis platelet collections is to recruit new donors. Here, we investigated the effectiveness of an alternative strategy: optimizing donor scheduling, prior to recruitment, at a hospital-based blood donor center. An analysis of collections during the 89 consecutive months since the opening of the donor center was performed. Linear regression and segmented time-series analyses were performed to calculate growth rates of collections and to test for statistical differences, respectively. Pre-intervention donor scheduling capacity was 39/month. In the absence of active donor recruitment, during the first 29 months, the number of collections rose gradually to 24/month (growth rate of 0.70/month). However, between months 30 and 55, collections plateaued at 25.6 ± 3.0 (growth rate of -0.09/month) (p<0.0001). This plateau phase coincided with the donor schedule approaching saturation (65.6 ± 7.6% of the schedule booked). Scheduling capacity was increased by two interventions: adding an apheresis instrument (month 56) and adding two more collection days/week (month 72). Consequently, the scheduling capacity increased to 130/month. Post-intervention, apheresis platelet collections between months 56 and 81 exhibited spontaneous renewed growth at a rate of 0.62/month (p<0.0001), in the absence of active donor recruitment. Active donor recruitment in months 82 and 86, when the donor schedule had been optimized to accommodate further growth, resulted in a dramatic but transient surge in collections. Apheresis platelet collections plateau at nearly two-thirds of scheduling capacity. Optimizing scheduling capacity prior to active donor recruitment is an effective strategy for increasing platelet collections at a hospital-based donor center.
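
    The growth rates quoted above come from fitting linear trends to distinct phases of the series. A minimal sketch of that segmented fit, using synthetic monthly counts and a breakpoint fixed at month 29 for simplicity (the study tests for the break statistically rather than assuming it):

        import numpy as np

        def fit_slope(months, counts):
            """Ordinary least-squares slope for one segment."""
            A = np.column_stack([months, np.ones_like(months)])
            (slope, intercept), *_ = np.linalg.lstsq(A, counts, rcond=None)
            return slope

        # Synthetic monthly collection counts: growth, then a plateau after month 29
        rng = np.random.default_rng(0)
        months = np.arange(1, 56, dtype=float)
        counts = np.where(months <= 29, 4 + 0.70 * months, 24.3) + rng.normal(0, 1.5, months.size)

        pre = months <= 29                 # growth phase
        for label, mask in [("pre-plateau", pre), ("plateau", ~pre)]:
            print(f"{label}: growth rate = {fit_slope(months[mask], counts[mask]):+.2f}/month")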

  2. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    PubMed Central

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, more suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  3. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    PubMed

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, more suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.

  4. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed with a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm, and its performance is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
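
    Loop coalescing, one of the restructuring techniques named above, flattens a nested loop into a single loop whose flat index is decomposed back into the original indices, handing the scheduler a one-dimensional pool of iterations to distribute. A minimal Python sketch of the idea (the dissertation's setting is Fortran via Parafrase; this shows only the shape of the transformation):

        N, M = 4, 3

        # Original nested loop: N * M iterations across two loop levels
        nested = [(i, j) for i in range(N) for j in range(M)]

        # Coalesced form: a single loop of N*M iterations; the original indices
        # are recovered from the flat index k, so iterations can be handed out
        # one at a time by a dynamic scheduler.
        coalesced = []
        for k in range(N * M):
            i, j = divmod(k, M)   # decompose the flat index back into (i, j)
            coalesced.append((i, j))

        assert coalesced == nested  # same iteration space, single loop level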

  5. Optimization of a pH-shift control strategy for producing monoclonal antibodies in Chinese hamster ovary cell cultures using a pH-dependent dynamic model.

    PubMed

    Hogiri, Tomoharu; Tamashima, Hiroshi; Nishizawa, Akitoshi; Okamoto, Masahiro

    2018-02-01

    To optimize monoclonal antibody (mAb) production in Chinese hamster ovary cell cultures, culture pH should be temporally controlled with high resolution. In this study, we propose a new pH-dependent dynamic model represented by simultaneous differential equations including a minimum of six system components, depending on pH. All kinetic parameters in the dynamic model were estimated using an evolutionary numerical optimization (real-coded genetic algorithm) method based on experimental time-course data obtained at different pH values ranging from 6.6 to 7.2. We determined an optimal pH-shift schedule theoretically. We validated this optimal pH-shift schedule experimentally, and mAb production increased by approximately 40% with this schedule. Throughout this study, it was suggested that the culture pH-shift optimization strategy using a pH-dependent dynamic model is suitable for optimizing any pH-shift schedule for CHO cell lines used in mAb production projects. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  6. The Efficacy of a Restart Break for Recycling with Optimal Performance Depends Critically on Circadian Timing

    PubMed Central

    Van Dongen, Hans P.A.; Belenky, Gregory; Vila, Bryan J.

    2011-01-01

    Objectives: Under simulated shift-work conditions, we investigated the efficacy of a restart break for maintaining neurobehavioral functioning across consecutive duty cycles, as a function of the circadian timing of the duty periods. Design: As part of a 14-day experiment, subjects underwent two cycles of five simulated daytime or nighttime duty days, separated by a 34-hour restart break. Cognitive functioning and high-fidelity driving simulator performance were tested 4 times per day during the two duty cycles. Lapses on a psychomotor vigilance test (PVT) served as the primary outcome variable. Selected sleep periods were recorded polysomnographically. Setting: The experiment was conducted under standardized, controlled laboratory conditions with continuous monitoring. Participants: Twenty-seven healthy adults (13 men, 14 women; aged 22–39 years) participated in the study. Interventions: Subjects were randomly assigned to a nighttime duty (experimental) condition or a daytime duty (control) condition. The efficacy of the 34-hour restart break for maintaining neurobehavioral functioning from the pre-restart duty cycle to the post-restart duty cycle was compared between these two conditions. Results: Relative to the daytime duty condition, the nighttime duty condition was associated with reduced amounts of sleep, whereas sleep latencies were shortened and slow-wave sleep appeared to be conserved. Neurobehavioral performance measures ranging from lapses of attention on the PVT to calculated fuel consumption on the driving simulators remained optimal across time of day in the daytime duty schedule, but degraded across time of night in the nighttime duty schedule. The 34-hour restart break was efficacious for maintaining PVT performance and other objective neurobehavioral functioning profiles from one duty cycle to the next in the daytime duty condition, but not in the nighttime duty condition. Subjective sleepiness did not reliably track objective neurobehavioral deficits. Conclusions: The 34-hour restart break was adequate for maintaining performance in the case of optimal circadian placement of sleep and duty periods (control condition) but was inadequate (and perhaps even detrimental) for maintaining performance in a simulated nighttime duty schedule (experimental condition). Current US transportation hours-of-service regulations mandate time off duty but do not consider the circadian aspects of shift scheduling. Reinforcing a recent trend of applying sleep science to inform policymaking for duty and rest times, our findings indicate that restart provisions in hours-of-service regulations could be improved by taking the circadian timing of the duty schedules into account. Citation: Van Dongen HPA; Belenky G; Vila BJ. The efficacy of a restart break for recycling with optimal performance depends critically on circadian timing. SLEEP 2011;34(7):917-929. PMID:21731142

  7. Optimal load scheduling in commercial and residential microgrids

    NASA Astrophysics Data System (ADS)

    Ganji Tanha, Mohammad Mahdi

    Residential and commercial electricity customers use more than two-thirds of the total energy consumed in the United States, representing a significant resource for demand response. Price-based demand response, in which customers respond to changes in electricity prices, represents the adjustment of load through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids which include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable, and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) approach is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort level, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal-load constraint and decompose the problem into independent single-unit subproblems which can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within bounds. If either criterion is not satisfied, the Lagrangian multipliers are updated and a new optimal load schedule is generated until both constraints are satisfied. The proposed method is applied to several case studies and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
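
    The decomposition idea can be illustrated compactly: dualize the linking communal-load constraint with multipliers, let each unit solve its own subproblem against a price-plus-multiplier signal, and update the multipliers by subgradient steps. The sketch below is a toy version with invented prices, unit loads, comfort penalties, and an aggregator limit; because the subproblems are discrete, a duality gap can leave small violations that the full method resolves in a feasibility step:

        import numpy as np

        price = np.array([3.0, 1.0, 1.2, 4.0])        # hourly market price
        loads = np.array([2.0, 2.0, 1.5, 1.5, 1.0])   # one shiftable block per unit
        pref  = np.array([0, 1, 2, 3, 1])             # occupants' preferred hours
        cap   = 4.0                                   # aggregator load limit per hour
        H = price.size

        # Per-unit cost of running in hour h: energy cost plus a comfort penalty
        hour = np.arange(H)
        base = loads[:, None] * price[None, :] + 0.8 * np.abs(hour[None, :] - pref[:, None])

        lam = np.zeros(H)                             # multipliers on the linking constraint
        for it in range(300):
            # Decomposed subproblems: each unit minimizes its own cost under the signal
            choice = np.argmin(base + loads[:, None] * lam[None, :], axis=1)
            hourly = np.bincount(choice, weights=loads, minlength=H)
            violation = hourly - cap                  # subgradient of the dualized constraint
            if np.all(violation <= 1e-9):
                break
            lam = np.maximum(0.0, lam + (2.0 / (it + 1)) * violation)  # projected update

        print("hour per unit:", choice, "| hourly load:", hourly.round(2))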

  8. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM, considering its internal functioning as well as forecasts for load and driving-energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving-energy occurrence. The benefits of the latter approach are shown, and future steps toward applying these methods to system control are addressed.

  9. Uplink Packet-Data Scheduling in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Choi, Young Woo; Kim, Seong-Lyun

    In this letter, we consider uplink packet scheduling for non-real-time data users in a DS-CDMA system. In an effort to jointly optimize throughput and fairness, we formulate a time-span minimization problem incorporating the time-multiplexing of different simultaneous transmission schemes. Based on simple rules, we propose efficient scheduling algorithms and compare them with the optimal solution obtained by linear programming.
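
    A time-span minimization of this kind has a natural linear-programming form: allocate nonnegative time to each candidate simultaneous-transmission scheme so that every user's data demand is met, minimizing the total span. A small sketch with invented rates and demands, using scipy's linprog as the LP solver:

        import numpy as np
        from scipy.optimize import linprog

        # rate[u, s] = data rate user u gets while scheme s is active (0 if silent)
        rate = np.array([
            [2.0, 0.0, 1.2],   # user 1
            [0.0, 1.8, 1.0],   # user 2
        ])
        demand = np.array([4.0, 3.0])   # data each user must deliver

        # minimize sum_s t_s  subject to  rate @ t >= demand,  t >= 0
        res = linprog(c=np.ones(rate.shape[1]), A_ub=-rate, b_ub=-demand,
                      bounds=[(0, None)] * rate.shape[1])
        print("time per scheme:", res.x.round(3), "| total span:", round(res.fun, 3))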

  10. Using Time-Driven Activity-Based Costing to Implement Change.

    PubMed

    Sayed, Ellen N; Laws, Sa'ad; Uthman, Basim

    2017-01-01

    Academic medical libraries have responded to changes in technology, evolving professional roles, reduced budgets, and declining traditional services. Libraries that have taken a proactive role to change have seen their librarians emerge as collaborators and partners with faculty and researchers, while para-professional staff is increasingly overseeing traditional services. This article addresses shifting staff and schedules at a single-service-point information desk by using time-driven activity-based costing to determine the utilization of resources available to provide traditional library services. Opening hours and schedules were changed, allowing librarians to focus on patrons' information needs in their own environment.
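
    Time-driven activity-based costing needs only two estimates, the cost of the capacity supplied and the practical capacity in time units, from which a cost rate per minute prices each activity. A hypothetical desk-level sketch (all figures invented) showing how the method also exposes unused capacity:

        # Hypothetical figures for one single-service-point desk, per month
        cost_of_capacity = 18000.0               # staff salaries plus overhead ($)
        practical_minutes = 2 * 160 * 60 * 0.8   # 2 staff, 160 h each, 80% practical

        rate = cost_of_capacity / practical_minutes   # $ per minute of desk time

        activities = {                           # (minutes per transaction, monthly volume)
            "directional question": (2, 900),
            "circulation task":     (5, 600),
            "reference referral":   (8, 150),
        }
        used = 0.0
        for name, (minutes, volume) in activities.items():
            used += minutes * volume
            print(f"{name}: ${rate * minutes * volume:,.0f}/month")
        print(f"capacity utilization: {used / practical_minutes:.0%}")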

  11. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with hybrid renewable generation (including wind and PV) and an energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection, for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine storage applications and optimal storage placement for reducing the social cost and transmission congestion of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage, multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment, and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA-II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto-optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  12. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of TMSCRO's tuple reaction molecular structure and four elementary reaction operators is more systematic. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems. PMID:25143977
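
    Independent of the CRO metaheuristic, the underlying problem is mapping DAG tasks onto heterogeneous processors to minimize makespan. A greedy earliest-finish-time list scheduler is a common baseline against which candidate solutions are compared; the sketch below uses invented task times and ignores communication costs for brevity:

        # Per-processor execution times for tasks of a small DAG (heterogeneous nodes)
        exec_time = {"A": [3, 2], "B": [4, 5], "C": [2, 4], "D": [3, 3]}
        preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
        topo = ["A", "B", "C", "D"]       # a topological order of the DAG

        proc_free = [0.0, 0.0]            # next free time of each processor
        finish = {}
        for task in topo:
            ready = max((finish[p] for p in preds[task]), default=0.0)
            # Greedy earliest-finish-time choice over the heterogeneous processors
            best = min(range(len(proc_free)),
                       key=lambda q: max(proc_free[q], ready) + exec_time[task][q])
            start = max(proc_free[best], ready)
            finish[task] = start + exec_time[task][best]
            proc_free[best] = finish[task]
            print(f"{task} -> P{best}: start {start}, finish {finish[task]}")
        print("makespan:", max(finish.values()))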

  13. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of TMSCRO's tuple reaction molecular structure and four elementary reaction operators is more systematic. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems.

  14. Resource planning and scheduling of payload for satellite with particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Li, Jian; Wang, Cheng

    2007-11-01

    Payload resource planning and scheduling is a key technology for automated control of an Earth-observing satellite with limited onboard resources; it arranges the work states of the various payloads to carry out missions by optimizing the use of resources. The scheduling task is a difficult constrained optimization problem with varied and changing requests and constraints. Based on an analysis of the satellite's functions and the payloads' resource constraints, a proactive planning and scheduling strategy based on the availability of consumable and replenishable resources in time order is introduced, along with division of the planning and scheduling horizon into several pieces. A particle swarm optimization algorithm is proposed to address the problem, with adaptive mutation operator selection: the swarm is divided into groups that employ different mutation operators (differential evolution, Gaussian, and random mutation) with different probabilities. The probabilities are adjusted adaptively by comparing the effectiveness of the groups, so that a proper operator is selected. The simulation results show the feasibility and effectiveness of the method.
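
    The distinctive mechanism here is adaptive selection among mutation operators based on observed effectiveness. A compact sketch of that idea grafted onto a bare-bones PSO, with a sphere function standing in for the schedule-cost objective and two illustrative operators (the paper uses differential evolution, Gaussian, and random mutation across swarm groups):

        import numpy as np

        rng = np.random.default_rng(1)
        def cost(x):                        # stand-in objective, not the payload model
            return float(np.sum(x ** 2))

        ops = {
            "gaussian": lambda x: x + rng.normal(0, 0.3, x.size),
            "random":   lambda x: np.where(rng.random(x.size) < 0.2,
                                           rng.uniform(-5, 5, x.size), x),
        }
        credit = {name: 1.0 for name in ops}     # success counts per operator

        dim, n = 5, 20
        pos = rng.uniform(-5, 5, (n, dim)); vel = np.zeros((n, dim))
        pbest = pos.copy(); pbest_f = np.array([cost(p) for p in pos])
        g = pbest[np.argmin(pbest_f)].copy()

        for it in range(100):
            r1, r2 = rng.random((2, n, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
            pos = pos + vel
            prob = np.array([credit[k] for k in ops]); prob /= prob.sum()
            for i in range(n):
                f = cost(pos[i])
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i].copy(), f
                # Pick a mutation operator according to the adaptive probabilities
                name = rng.choice(list(ops), p=prob)
                trial = ops[name](pos[i])
                f = cost(trial)
                if f < pbest_f[i]:               # reward operators that improve particles
                    pbest[i], pbest_f[i] = trial, f
                    credit[name] += 1.0
            g = pbest[np.argmin(pbest_f)].copy()

        probs = {k: round(c / sum(credit.values()), 2) for k, c in credit.items()}
        print("best cost:", round(float(pbest_f.min()), 4), "| operator probs:", probs)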

  15. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

    Despite the availability of high-fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates of energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can handle equality and inequality constraints explicitly throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as required by sequential quadratic programming methods; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile: it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance, and it is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with lunar and solar attraction. Another example considers the optimization of a multiple-asteroid rendezvous problem. In both cases, the ability of the proposed methodology to consider non-standard objective functions and constraints is illustrated. Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.

  16. Expert systems tools for Hubble Space Telescope observation scheduling

    NASA Technical Reports Server (NTRS)

    Miller, Glenn; Rosenthal, Don; Cohen, William; Johnston, Mark

    1987-01-01

    The utility of expert systems techniques for the Hubble Space Telescope (HST) planning and scheduling is discussed and a plan for development of expert system tools which will augment the existing ground system is described. Additional capabilities provided by these tools will include graphics-oriented plan evaluation, long-range analysis of the observation pool, analysis of optimal scheduling time intervals, constructing sequences of spacecraft activities which minimize operational overhead, and optimization of linkages between observations. Initial prototyping of a scheduler used the Automated Reasoning Tool running on a LISP workstation.

  17. A modify ant colony optimization for the grid jobs scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling with customers' quality of service (QoS) requirements is challenging in the grid environment. In this paper, we present a modified ant colony optimization (MACO) algorithm for the job scheduling problem in the grid. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  18. Optimal maintenance of a multi-unit system under dependencies

    NASA Astrophysics Data System (ADS)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide required system availability at minimum possible cost, to imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not admit closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems, aiming to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on criticality and influence are combined with mathematical modeling to obtain optimal maintenance policies. Where this thesis deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on combinations of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem, whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common industry approach that leverages Continuous Time Markov Chains (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more general assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
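
    The proposed loop (evaluate the reliability/cost measure by Monte Carlo at DOE points, regress a response surface, then optimize on the smooth surrogate) can be sketched on a classic age-replacement model. Everything below, including the Weibull parameters, cost ratio, and design points, is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(7)
        SHAPE, SCALE = 2.5, 100.0        # Weibull wear-out lifetimes (IFR)
        CP, CF = 1.0, 8.0                # preventive vs. corrective maintenance cost

        def mc_cost_rate(T, n_cycles=20000):
            """Monte Carlo long-run cost per unit time under age replacement at T."""
            life = SCALE * rng.weibull(SHAPE, n_cycles)
            failed = life < T
            cost = np.where(failed, CF, CP).sum()
            time = np.where(failed, life, T).sum()
            return cost / time

        # DOE: evaluate the simulated cost rate at a few design points
        design = np.linspace(20, 120, 9)
        response = np.array([mc_cost_rate(T) for T in design])

        # Response surface: quadratic fit, then optimize on the smooth surrogate
        coeffs = np.polyfit(design, response, 2)
        grid = np.linspace(design[0], design[-1], 500)
        best = grid[np.argmin(np.polyval(coeffs, grid))]
        print(f"RSE-optimal replacement interval: {best:.1f}")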

  19. Achieving reutilization of scheduling software through abstraction and generalization

    NASA Technical Reports Server (NTRS)

    Wilkinson, George J.; Monteleone, Richard A.; Weinstein, Stuart M.; Mohler, Michael G.; Zoch, David R.; Tong, G. Michael

    1995-01-01

    Reutilization of software is a difficult goal to achieve, particularly in complex environments that require advanced software systems. The Request-Oriented Scheduling Engine (ROSE) was developed to create a reusable scheduling system for the diverse scheduling needs of the National Aeronautics and Space Administration (NASA). ROSE is a data-driven scheduler that accepts inputs such as user activities, available resources, timing constraints, and user-defined events, and then produces a conflict-free schedule. To support reutilization, ROSE is designed to be flexible, extensible, and portable. With these design features, applying ROSE to a new scheduling application does not require changing the core scheduling engine, even if the new application requires significantly larger or smaller data sets, customized scheduling algorithms, or software portability. This paper includes a ROSE scheduling system description emphasizing its general-purpose features, reutilization techniques, and tasks for which ROSE reuse provided a low-risk solution with significant cost savings and reduced software development time.

  20. Managing Reliability in the 21st Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellin, T.A.

    1998-11-23

    The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of the value added to the organization's business objectives.

  1. Automated Job Controller for Clouds and the Earth's Radiant Energy System (CERES) Production Processing

    NASA Astrophysics Data System (ADS)

    Gleason, J. L.; Hillyer, T. N.

    2011-12-01

    Clouds and the Earth's Radiant Energy System (CERES) is one of NASA's highest priority Earth Observing System (EOS) scientific instruments. The CERES science team will integrate data from the CERES Flight Model 5 (FM5) on the NPOESS Preparatory Project (NPP) in addition to the four CERES scanning instruments on Terra and Aqua. The CERES production system consists of over 75 Product Generation Executives (PGEs) maintained by twelve subsystem groups. The processing chain fuses CERES instrument observations with data from 19 other unique sources. The addition of FM5 to the over 22 instrument-years of data to be reprocessed from flight models 1-4 creates a need for an optimized production processing approach. This poster discusses a new approach, using JBoss and Perl to manage job scheduling and interdependencies between PGEs and external data sources. The new optimized approach uses JBoss to serve handler servlets which regulate PGE-level job interdependencies and job completion notifications. Additional servlets are used to regulate all job submissions from the handlers and to interact with the operator. Perl submission scripts are used to build Process Control Files and to interact directly with the operating system and cluster scheduler. The result is a reduced burden on the operator, achieved by algorithmically enforcing a set of rules that determine the optimal time to produce data products with the highest integrity. These rules are designed on a per-PGE basis and change periodically. This design provides the means to dynamically update PGE rules at run time and increases processing throughput by using an event-driven controller. The immediate notification of a PGE's completion (an event) allows successor PGEs to launch at the proper time with minimal start-up latency, thereby increasing computer system utilization.
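
    The core of such an event-driven controller is dependency counting: each PGE-completion event decrements its successors' unmet-predecessor counts, and a successor launches the moment its count reaches zero. A minimal sketch with a hypothetical dependency graph (the PGE names are illustrative, not actual CERES PGEs, and launch is treated as instantaneous):

        from collections import deque

        succs = {"ingest": ["calibrate"], "calibrate": ["cloud_mask", "flux"],
                 "cloud_mask": ["monthly_avg"], "flux": ["monthly_avg"], "monthly_avg": []}
        pending = {pge: 0 for pge in succs}      # unmet-predecessor counts
        for ss in succs.values():
            for s in ss:
                pending[s] += 1

        events = deque(pge for pge, n in pending.items() if n == 0)
        while events:                            # completion events drive launches
            pge = events.popleft()
            print("launch:", pge)                # real controller: submit to the cluster
            for s in succs[pge]:                 # completion notification to successors
                pending[s] -= 1
                if pending[s] == 0:              # all inputs ready: minimal start-up latency
                    events.append(s)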

  2. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The resource-constrained project scheduling problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimization goal, such as the shortest duration, the smallest cost, or resource balance, it is required to arrange the start and finish of all tasks while satisfying the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of the RCPSP, such as job shop scheduling and flow shop scheduling. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm; the parameters are generally chosen empirically, which cannot guarantee that they are optimal. This paper addresses that blind selection of parameters in solving the RCPSP: we perform a sampling analysis, establish a proxy (surrogate) model, and ultimately solve for the optimal parameters.

  3. Dynamic I/O Power Management for Hard Real-Time Systems

    DTIC Science & Technology

    2005-01-01

    ...recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention... In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm, which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage...

  4. Natural Gas Engine-Driven Heat Pump Demonstration at DoD Installations: Performance and Reliability Summary

    DTIC Science & Technology

    2009-06-09

    ERDC/CERL TR-09-10, Natural Gas Engine-Driven Heat Pump Demonstration at DoD Installations: Performance and Reliability Summary, June 2009. Approved for public release; distribution is unlimited. Abstract: Results of field testing natural gas engine-driven heat pumps (GHP) at six southwestern U.S. Department of Defense (DoD)...

  5. The Rathus Assertiveness Schedule: Reliability at the Junior High School Level

    ERIC Educational Resources Information Center

    Vaal, Joseph J.; McCullagh, James

    1977-01-01

    This research was an attempt to determine the usefulness of the Rathus Assertiveness Schedule with pre-adolescent and early adolescent students. Previously, it had been used with outpatients, institutionalized adults, or with college students. The RAS is a thirty-item schedule that was developed for measuring assertiveness. (Author/RK)

  6. Specialist availability in emergencies: contributions of response times and the use of ad hoc coverage in New York State.

    PubMed

    Rabin, Elaine; Patrick, Lisa

    2016-04-01

    Nationwide, hospitals struggle to maintain specialist on-call coverage for emergencies. We seek to further understand the issue by examining reliability of scheduled coverage and the role of ad hoc coverage when none is scheduled. An anonymous electronic survey of all emergency department (ED) directors of a large state. Overall and for 10 specialties, respondents were asked to estimate on-call coverage extent and "reliability" (frequency of emergency response in a clinically useful time frame: 2 hours), and use and effect of ad hoc emergency coverage to fill gaps. Descriptive statistics were performed using Fisher exact and Wilcoxon sign rank tests for significance. Contact information was obtained for 125 of 167 ED directors. Sixty responded (48%), representing 36% of EDs. Forty-six percent reported full on-call coverage scheduled for all specialties. Forty-six percent reported consistent reliability. Coverage and reliability were strongly related (P<.01; 33% reported both), and larger ED volume correlated with both (P<.01). Ninety percent of hospitals that had gaps in either employed ad hoc coverage, significantly improving coverage for 8 of 10 specialties. For all but 1 specialty, more than 20% of hospitals reported that specialists are "Never", "Rarely" or "Sometimes" reliable (more than 50% for cardiovascular surgery, hand surgery and ophthalmology). Significant holes in scheduled on-call specialist coverage are compounded by frequent unreliability of on-call specialists, but partially ameliorated by ad hoc specialist coverage. Regionalization may help because a 2-tiered system may exist: larger hospitals have more complete, reliable coverage. Better understanding of specialists' willingness to treat emergencies ad hoc without taking formal call will suggest additional remedies. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Model-based rational feedback controller design for closed-loop deep brain stimulation of Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Gorzelic, P.; Schiff, S. J.; Sinha, A.

    2013-04-01

    Objective. To explore the use of classical feedback control methods to achieve an improved deep brain stimulation (DBS) algorithm for application to Parkinson's disease (PD). Approach. A computational model of PD dynamics was employed to develop model-based rational feedback controller design. The restoration of thalamocortical relay capabilities to patients suffering from PD is formulated as a feedback control problem with the DBS waveform serving as the control input. Two high-level control strategies are tested: one that is driven by an online estimate of thalamic reliability, and another that acts to eliminate substantial decreases in the inhibition from the globus pallidus interna (GPi) to the thalamus. Control laws inspired by traditional proportional-integral-derivative (PID) methodology are prescribed for each strategy and simulated on this computational model of the basal ganglia network. Main Results. For control based upon thalamic reliability, a strategy of frequency proportional control with proportional bias delivered the optimal control achieved for a given energy expenditure. In comparison, control based upon synaptic inhibitory output from the GPi performed very well in comparison with those of reliability-based control, with considerable further reduction in energy expenditure relative to that of open-loop DBS. The best controller performance was amplitude proportional with derivative control and integral bias, which is full PID control. We demonstrated how optimizing the three components of PID control is feasible in this setting, although the complexity of these optimization functions argues for adaptive methods in implementation. Significance. Our findings point to the potential value of model-based rational design of feedback controllers for Parkinson's disease.
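
    The PID law evaluated in both strategies has a standard discrete-time form. A minimal sketch driving a toy first-order plant (the plant and gains are stand-ins, not the paper's basal-ganglia model):

        class PID:
            """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, setpoint, measurement):
                error = setpoint - measurement
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Toy first-order plant dx/dt = -x + u standing in for the controlled variable
        pid, x, dt = PID(kp=2.0, ki=0.8, kd=0.1, dt=0.01), 0.0, 0.01
        for _ in range(500):
            u = pid.step(setpoint=0.9, measurement=x)   # u: stimulation command
            x += dt * (-x + u)
        print(f"tracked value after 5 s: {x:.3f}")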

  8. Model-based rational feedback controller design for closed-loop deep brain stimulation of Parkinson's disease.

    PubMed

    Gorzelic, P; Schiff, S J; Sinha, A

    2013-04-01

    To explore the use of classical feedback control methods to achieve an improved deep brain stimulation (DBS) algorithm for application to Parkinson's disease (PD). A computational model of PD dynamics was employed to develop model-based rational feedback controller design. The restoration of thalamocortical relay capabilities to patients suffering from PD is formulated as a feedback control problem with the DBS waveform serving as the control input. Two high-level control strategies are tested: one that is driven by an online estimate of thalamic reliability, and another that acts to eliminate substantial decreases in the inhibition from the globus pallidus interna (GPi) to the thalamus. Control laws inspired by traditional proportional-integral-derivative (PID) methodology are prescribed for each strategy and simulated on this computational model of the basal ganglia network. For control based upon thalamic reliability, a strategy of frequency proportional control with proportional bias delivered the optimal control achieved for a given energy expenditure. In comparison, control based upon synaptic inhibitory output from the GPi performed very well in comparison with those of reliability-based control, with considerable further reduction in energy expenditure relative to that of open-loop DBS. The best controller performance was amplitude proportional with derivative control and integral bias, which is full PID control. We demonstrated how optimizing the three components of PID control is feasible in this setting, although the complexity of these optimization functions argues for adaptive methods in implementation. Our findings point to the potential value of model-based rational design of feedback controllers for Parkinson's disease.

  9. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    NASA Astrophysics Data System (ADS)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among users, respectively. However, they are not throughput optimal, and fairness and throughput decrease if user queue lengths differ due to differing channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, the scheduling algorithms have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
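
    For a full-rank channel matrix, zero-forcing beams are simply the columns of the channel pseudo-inverse, which nulls inter-user interference by construction. A small numpy sketch with a random channel (antenna counts and normalization are illustrative):

        import numpy as np

        rng = np.random.default_rng(3)
        n_tx, n_users = 4, 3              # base-station antennas, selected users

        # Flat-fading channel: one row per selected single-antenna user
        H = (rng.normal(size=(n_users, n_tx)) +
             1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

        # Zero-forcing beamforming: pseudo-inverse columns null inter-user interference
        W = np.linalg.pinv(H)
        W = W / np.linalg.norm(W, axis=0)  # unit-power beam per user

        effective = H @ W                  # diagonal: useful gain; off-diagonal: leakage
        leakage = effective - np.diag(np.diag(effective))
        print("off-diagonal leakage is zero:", np.allclose(leakage, 0))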

  10. Simultaneous Scheduling of Jobs, AGVs and Tools Considering Tool Transfer Times in Multi Machine FMS By SOS Algorithm

    NASA Astrophysics Data System (ADS)

    Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.

    2017-08-01

    This article addresses the simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). The performance of an FMS is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a potent alternative for solving optimization problems like scheduling, and it has proven itself. The proposed SOS algorithm is tested on 22 job sets, with makespan as the objective, for scheduling of machines and tools where machines are allowed to share tools without considering transfer times of jobs and tools, and the results are compared with those of existing methods. The results show that SOS outperforms the existing methods. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools, to determine the optimal sequences that minimize makespan.

  11. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
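
    The flavor of a stochastic heuristic for this problem can be conveyed with a toy version: draw a random insertion order, greedily keep each request that fits conflict-free, and repeat over many restarts, retaining the best schedule found. The request set below is invented and uses only pairwise time-window conflicts, far simpler than the constraint model described in the paper:

        import random

        # Hypothetical requests on one instrument: (name, start, end, priority)
        requests = [("A", 0, 3, 5), ("B", 2, 5, 4), ("C", 4, 7, 6),
                    ("D", 1, 4, 3), ("E", 6, 9, 2), ("F", 8, 11, 4)]

        def greedy(order):
            """Insert requests in the given order, keeping the schedule conflict-free."""
            chosen = []
            for name, s, e, p in order:
                if all(e <= cs or s >= ce for _, cs, ce, _ in chosen):
                    chosen.append((name, s, e, p))
            return chosen

        rng = random.Random(42)
        best, best_val = [], -1
        for _ in range(200):               # stochastic restarts over insertion orders
            order = requests[:]
            rng.shuffle(order)
            sched = greedy(order)
            val = sum(p for *_, p in sched)
            if val > best_val:
                best, best_val = sched, val
        print("selected:", sorted(n for n, *_ in best), "| total priority:", best_val)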

  12. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space while simultaneously exploring new regions of that space. This self-organizing scheduling system may offer an effective way to use resources for the off-line data processing jobs of future HEP experiments.

  13. System control of an autonomous planetary mobile spacecraft

    NASA Technical Reports Server (NTRS)

    Dias, William C.; Zimmerman, Barbara A.

    1990-01-01

    The goal is to suggest the scheduling and control functions necessary for accomplishing the mission objectives of a fairly autonomous interplanetary mobile spacecraft while maximizing reliability. The aims are to provide an extensible, reliable system that is conservative in its use of on-board resources, gets full value from subsystem autonomy, and avoids the lure of ground micromanagement. A functional layout consisting of four basic elements is proposed: GROUND and SYSTEM EXECUTIVE system functions and RESOURCE CONTROL and ACTIVITY MANAGER subsystem functions. The system executive includes six subfunctions: SYSTEM MANAGER, SYSTEM FAULT PROTECTION, PLANNER, SCHEDULE ADAPTER, EVENT MONITOR and RESOURCE MONITOR. The full configuration is needed for autonomous operation on the Moon or Mars, whereas a reduced version without the planning, schedule adaptation and event monitoring functions could be appropriate for lower-autonomy use on the Moon. An implementation concept is suggested that is conservative in its use of system resources and consists of modules combined with a network communications fabric. A language concept termed a scheduling calculus, for rapidly performing essential on-board schedule adaptation functions, is introduced.

  14. Optimizing The Scheduling Of Recruitment And Initial Training For Soldiers In The Australian Army

    DTIC Science & Technology

    2016-03-01

    This thesis, by Melissa T. Joy (March 2016), develops a master scheduling program to optimize recruitment into the Australian Army by employment category. The goal of the model...

  15. Short-term bulk energy storage system scheduling for load leveling in unit commitment: modeling, optimization, and sensitivity analysis

    PubMed Central

    Hemmati, Reza; Saboori, Hedayat

    2016-01-01

    Energy storage systems (ESSs) have experienced very rapid growth in recent years and are expected to be a promising tool for improving power system reliability and economic efficiency. ESSs offer many potential benefits across the electric power system. One of the main benefits of an ESS, especially a bulk unit, is smoothing the load pattern by decreasing on-peak and increasing off-peak loads, known as load leveling. These devices require new methods and tools to model and optimize their effects in power system studies. In this respect, this paper models bulk ESSs based on several technical characteristics, introduces the proposed model into the thermal unit commitment (UC) problem, and analyzes it with respect to the various sensitive parameters. The technical limitations of the thermal units and transmission network constraints are also considered in the model. The proposed model is a Mixed Integer Linear Program (MILP) that can be solved easily by strong commercial solvers (for instance, CPLEX) and is appropriate for practical large-scale networks. The results of implementing the proposed model on a test system reveal that proper load leveling through optimum storage scheduling leads to considerable operation-cost reduction, depending on the storage system characteristics. PMID:27222741
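
    As a stylized illustration of this modeling approach, the linear program below schedules a single bulk storage unit to flatten a toy load profile in PuLP. The data, the peak-to-valley objective, and the omission of unit-commitment binaries and network constraints are simplifying assumptions, not the paper's model.

```python
import pulp

load = [60, 55, 50, 52, 70, 90, 100, 95]     # MW per period (toy profile)
T = range(len(load))
cap_e, cap_p, eff = 120.0, 30.0, 0.9         # MWh, MW, charging efficiency

m = pulp.LpProblem("load_leveling", pulp.LpMinimize)
ch = pulp.LpVariable.dicts("charge", T, 0, cap_p)
dis = pulp.LpVariable.dicts("discharge", T, 0, cap_p)
soc = pulp.LpVariable.dicts("soc", T, 0, cap_e)
peak, valley = pulp.LpVariable("peak"), pulp.LpVariable("valley")

m += peak - valley                            # flatten the net-load band
for t in T:
    net = load[t] + ch[t] - dis[t]            # grid-side net load
    m += peak >= net
    m += valley <= net
    prev = soc[t - 1] if t > 0 else 0.5 * cap_e   # start half full
    m += soc[t] == prev + eff * ch[t] - dis[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(pulp.value(load[t] + ch[t] - dis[t]), 1) for t in T])
```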

  17. Mission Data System Java Edition Version 7

    NASA Technical Reports Server (NTRS)

    Reinholtz, William K.; Wagner, David A.

    2013-01-01

    The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.

  18. Identification of minimal parameters for optimal suppression of chaos in dissipative driven systems.

    PubMed

    Martínez, Pedro J; Euzzor, Stefano; Gallas, Jason A C; Meucci, Riccardo; Chacón, Ricardo

    2017-12-21

    Taming chaos arising from dissipative non-autonomous nonlinear systems by applying additional harmonic excitations is a reliable and widely used procedure nowadays. But the suppressory effectiveness of generic non-harmonic periodic excitations continues to be a significant challenge both to our theoretical understanding and in practical applications. Here we show how the effectiveness of generic suppressory excitations is optimally enhanced when the impulse they transmit (the time integral between two consecutive zeros) is judiciously controlled in a non-obvious way. Specifically, the effective amplitude of the suppressory excitation is minimal when the transmitted impulse is maximal. Also, by lowering the transmitted impulse one obtains larger regularization areas in the initial phase difference-amplitude control plane, the price to be paid being the requirement of larger amplitudes. These two remarkable features, which constitute our definition of optimum control, are demonstrated experimentally by means of an analog version of a paradigmatic model, and confirmed numerically by simulations of such a damped, driven system in the presence of noise. Our theoretical analysis shows that the controlling effect of varying the impulse is due to a subsequent variation of the energy transmitted by the suppressory excitation.

  19. A feasibility study: Selection of a personalized radiotherapy fractionation schedule using spatiotemporal optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Minsun, E-mail: mk688@uw.edu; Stewart, Robert D.; Phillips, Mark H.

    2015-11-15

    Purpose: To investigate the impact of using spatiotemporal optimization, i.e., intensity-modulated spatial optimization followed by fractionation-schedule optimization, to select the patient-specific fractionation schedule that maximizes the tumor biologically equivalent dose (BED) under dose constraints for multiple organs-at-risk (OARs). Methods: Spatiotemporal optimization was applied to a variety of lung tumors in a phantom geometry using a range of tumor sizes and locations. The optimal fractionation schedule for a patient using the linear-quadratic cell survival model depends on the tumor and OAR sensitivity to fraction size (α/β), the effective tumor doubling time (T_d), and the size and location of the tumor target relative to one or more OARs (dose distribution). The authors used a spatiotemporal optimization method to identify the optimal number of fractions N that maximizes the 3D tumor BED distribution for 16 lung phantom cases. The selection of the optimal fractionation schedule used equivalent (30-fraction) OAR constraints for the heart (D_mean ≤ 45 Gy), lungs (D_mean ≤ 20 Gy), cord (D_max ≤ 45 Gy), esophagus (D_max ≤ 63 Gy), and unspecified tissues (D_05 ≤ 60 Gy). To assess plan quality, the authors compared the minimum, mean, maximum, and D_95 of tumor BED, as well as the equivalent uniform dose (EUD), for optimized plans against conventional intensity-modulated radiation therapy plans prescribing 60 Gy in 30 fractions. A sensitivity analysis was performed to assess the effects of T_d (3-100 days), tumor lag time (T_k = 0-10 days), and tumor size on the optimal fractionation schedule. Results: Using an α/β ratio of 10 Gy, the average values of tumor max, min, mean BED, and D_95 were up to 19%, 21%, 20%, and 19% larger than those from the conventional prescription, depending on the T_d and T_k used. Tumor EUD was up to 17% larger than the conventional prescription. For fast-proliferating tumors with T_d less than 10 days, there was no significant increase in tumor BED, but the treatment course could be shortened without a loss in tumor BED. The improvement in the tumor mean BED was more pronounced with smaller tumors (p-value = 0.08). Conclusions: Spatiotemporal optimization of patient plans has the potential to significantly improve local tumor control (larger BED/EUD) for patients with a favorable geometry, such as smaller tumors with larger distances between the tumor target and nearby OARs. In patients with a less favorable geometry, and for fast-growing tumors, plans optimized using spatiotemporal optimization and conventional (spatial-only) optimization are equivalent (negligible differences in tumor BED/EUD). However, spatiotemporal optimization yields shorter treatment courses than conventional spatial-only optimization. Personalized, spatiotemporal optimization of treatment schedules can increase patient convenience and help with the efficient allocation of clinical resources. Spatiotemporal optimization can also help identify a subset of patients that might benefit from nonconventional (large dose-per-fraction) treatments that are ineligible for the current practice of stereotactic body radiation therapy.
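
    For reference, the schedule trade-off rests on the standard linear-quadratic BED with a repopulation correction, sketched below; the parameter values and the five-fractions-per-week timing are illustrative textbook defaults, not the paper's fitted values.

```python
from math import log

def bed(n, d, ab=10.0, alpha=0.3, t_d=10.0, t_k=7.0):
    """Linear-quadratic BED with repopulation:
    BED = n*d*(1 + d/(alpha/beta)) - ln(2)*(T - T_k)/(alpha*T_d) for T > T_k,
    with overall treatment time T approximated from 5 fractions per week."""
    t_treat = 1.4 * n                     # n weekday fractions, in days
    repop = log(2) * max(0.0, t_treat - t_k) / (alpha * t_d)
    return n * d * (1.0 + d / ab) - repop

# compare a conventional 30 x 2 Gy schedule with a shorter 10 x 4.5 Gy one
print(round(bed(30, 2.0), 1), round(bed(10, 4.5), 1))
```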

  20. The Spanish Diagnostic Interview Schedule. Reliability and comparison with clinical diagnoses.

    PubMed

    Burnam, M A; Karno, M; Hough, R L; Escobar, J I; Forsythe, A B

    1983-11-01

    The National Institute of Mental Health Diagnostic Interview Schedule (DIS) was translated into Spanish. The reliability of the Spanish instrument, its equivalence to the English version, and its agreement with clinical diagnoses were examined in a study of 90 bilingual (English- and Spanish-speaking) and 61 monolingual (Spanish-speaking only) patients from a community mental health center. The study design involved two independent DIS administrations and one independent clinical evaluation of each subject.

  1. An operation support expert system based on on-line dynamics simulation and fuzzy reasoning for startup schedule optimization in fossil power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, H.; Eki, Y.; Kaji, A.

    1993-12-01

    An expert system that can support operators of fossil power plants in creating the optimum startup schedule and executing it accurately is described. The optimum turbine speed-up and load-up pattern is obtained iteratively through fuzzy reasoning, which combines quantitative calculations from plant dynamics models with qualitative knowledge in the form of fuzzy schedule-optimization rules. The rules represent relationships between stress margins and modification rates of the schedule parameters. Simulation analysis shows that the system provides quick and accurate plant startups.

  2. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  3. Analysis of Salinity Intrusion in the San Francisco Bay-Delta using a GA- Optimized Neural Net, and Application of the Model to Prediction in the Elkhorn Slough Habitat

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Rajkumar, T.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The San Francisco Bay Delta is a large hydrodynamic complex that incorporates the Sacramento and San Joaquin Estuaries, the Burman Marsh, and the San Francisco Bay proper. Competition exists for the use of this extensive water system from the fisheries industry, the agricultural industry, and the marine and estuarine animal species within the Delta. As tidal fluctuations occur, more saline water pushes upstream, allowing fish to migrate beyond the Burman Marsh for breeding and habitat occupation. However, the agriculture industry does not want extensive salinity intrusion to degrade water quality for human and plant consumption. The balance is regulated by pumping stations located along the estuaries and reservoirs, whereby flushing with fresh water keeps the saline intrusion at bay. The pumping schedule is driven by data collected at various locations within the Bay Delta and by numerical models that predict the salinity intrusion as part of a larger model of the system. The Interagency Ecological Program (IEP) for the San Francisco Bay/Sacramento-San Joaquin Estuary collects, monitors, and archives the data, and the Department of Water Resources provides a numerical model simulation (DSM2) from which predictions are made that drive the pumping schedule. A problem with this procedure is that the numerical simulation takes roughly 16 hours to complete a prediction. We have created a neural net, optimized with a genetic algorithm, that takes as input the archived data from multiple stations and predicts stage, salinity, and flow at the Carquinez Straits (at the downstream end of the Burman Marsh). This model seems to be robust in its predictions and operates much faster than the current numerical DSM2 model. Because the system is strongly tidally driven, we used both Principal Component Analysis and Fast Fourier Transforms to discover dominant features within the IEP data. We then filtered out the dominant tidal forcing to discover non-primary tidal effects, and used this to enhance the neural network by mapping input-output relationships in a more efficient manner. Furthermore, the neural network implicitly incorporates both the hydrodynamic and water quality models into a single predictive system. Although our model has not yet been shown to improve pumping schedules, it could support better decision-making procedures that may then be implemented by State agencies if desired. Our intention is now to use this model in the smaller Elkhorn Slough complex near Monterey Bay, where no such hydrodynamic model currently exists. At the Elkhorn Slough, we are fusing the neural net model of tidally driven flow with in situ flow data and airborne and satellite remote sensing data. These further constrain the behavior of the model in predicting the longer-term health and future of this vital estuary.

  4. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

    This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zones having surplus or deficient generation, with proper spinning reserve, using the co-ordination equation. The power exchange required for the deficit zones and the zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas are derived for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the co-ordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
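
    The co-ordination equation referred to here is the classical equal-incremental-cost condition. The sketch below solves it for a single zone by bisection on the system lambda, ignoring losses; the quadratic cost coefficients and generator limits are illustrative data, not the paper's test system.

```python
def dispatch(units, demand, lo=0.0, hi=200.0, iters=60):
    """Equal-incremental-cost dispatch, ignoring losses: find lambda with
    dC_i/dP_i = b_i + 2*c_i*P_i = lambda and sum(P_i) = demand,
    respecting generator limits, by bisection on lambda."""
    def outputs(lam):
        return [min(max((lam - b) / (2 * c), pmin), pmax)
                for b, c, pmin, pmax in units]
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if sum(outputs(lam)) < demand:
            lo = lam
        else:
            hi = lam
    return outputs(0.5 * (lo + hi))

# (b, c, Pmin, Pmax) for three units with cost a + b*P + c*P^2
units = [(8.0, 0.010, 50, 300), (9.0, 0.008, 40, 250), (7.5, 0.012, 30, 200)]
print([round(p, 1) for p in dispatch(units, demand=500.0)])
```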

  5. Unit Specific Crew Rest Strategies: Phase 1 Evaluation of the 1/212th Aviation Battalion during Shiftwork Transitions

    DTIC Science & Technology

    1994-01-01

    but do not provide strategies or specific schedules of crew rest tailored to the unit's specific mission demands, environmental conditions, and...and the impact of mission-driven work schedules and environmental conditions on crew rest quality. Phase II provides rhythms, sleep/wake cycles...shiftwork schedules, and methods for regulating the body's biological clock to prevent sleep loss during characteristic missions. This report contains a...

  6. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI

    PubMed Central

    Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667

  7. Discrete harmony search algorithm for scheduling and rescheduling the reprocessing problems in remanufacturing: a case study

    NASA Astrophysics Data System (ADS)

    Gao, Kaizhou; Wang, Ling; Luo, Jianping; Jiang, Hua; Sadollah, Ali; Pan, Quanke

    2018-06-01

    In this article, scheduling and rescheduling problems with increasing processing times and new job insertion are studied for reprocessing problems in the remanufacturing process. To handle the unpredictability of reprocessing times, an experience-based strategy is used. Rescheduling strategies are applied to account for the effects of increasing reprocessing times and new subassembly insertions. To optimize the scheduling and rescheduling objectives, a discrete harmony search (DHS) algorithm is proposed, and a local search method is designed to speed up the convergence rate. The DHS is applied to two real-life cases to minimize the maximum completion time and the mean earliness/tardiness (E/T); these two objectives are also considered together as a bi-objective problem. Computational results and comparisons show that the proposed DHS solves the scheduling and rescheduling problems effectively and efficiently, achieving satisfactory optimization results for scheduling and rescheduling on a real-life shop floor.
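
    For orientation, the sketch below shows the basic harmony search improvisation loop (memory consideration, pitch adjustment, random selection) on a continuous function; the paper's DHS uses a discrete permutation encoding and an added local search, which this minimal version omits.

```python
import random

def harmony_search(f, dim, iters=2000, hms=10, hmcr=0.9, par=0.3):
    """Basic harmony search minimizing f over a continuous box."""
    hm = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:        # draw from harmony memory
                x = random.choice(hm)[d]
                if random.random() < par:     # pitch adjustment
                    x += random.uniform(-0.1, 0.1)
            else:                             # random re-initialisation
                x = random.uniform(-5, 5)
            new.append(x)
        worst = max(range(hms), key=lambda i: cost[i])
        if (fc := f(new)) < cost[worst]:      # replace the worst harmony
            hm[worst], cost[worst] = new, fc
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]

print(harmony_search(lambda x: sum(v * v for v in x), dim=4)[1])
```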

  8. The Psychiatric Assessment Schedule for Adult with Developmental Disability (PAS-ADD) Checklist: Reliability and Validity of French Version

    ERIC Educational Resources Information Center

    Gerber, F.; Carminati, G. Galli

    2013-01-01

    Background: The lack of psychometric measures of psychopathology in the intellectual disabilities (ID) population was addressed by the creation of the Psychiatric Assessment Schedule for Adults with Developmental Disability (PAS-ADD-10) by Moss et al. This schedule is a structured interview designed for professionals in psychopathology. The…

  9. Scheduling Earth Observing Satellites with Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques, namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate-satellite versus integrated scheduling of a two-satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
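
    Of the compared techniques, simulated annealing is the simplest to sketch: swap two items in the schedule and accept worse schedules with a temperature-dependent probability. The toy cost function below stands in for the mission-specific fitness and is purely illustrative.

```python
import math, random

jobs = [3, 7, 2, 5, 4]                        # toy processing times

def toy_cost(order):                          # total completion time proxy
    t = total = 0
    for j in order:
        t += jobs[j]
        total += t
    return total

def simulated_annealing(order, cost, t0=100.0, cooling=0.995, iters=5000):
    """Swap two schedule entries per step; accept worse schedules with
    probability exp(-delta / temperature) under geometric cooling."""
    cur, cur_cost = order[:], cost(order)
    best, best_cost, temp = cur[:], cur_cost, t0
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur, cur_cost = cand, cur_cost + delta
        if cur_cost < best_cost:
            best, best_cost = cur[:], cur_cost
        temp *= cooling
    return best, best_cost

print(simulated_annealing(list(range(len(jobs))), toy_cost))
```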

  10. Construction schedules slack time minimizing

    NASA Astrophysics Data System (ADS)

    Krzemiński, Michał

    2017-07-01

    The article presents two original models for minimizing the downtime of work brigades. The models were developed for construction schedules executed using the uniform work method. The application of flow-shop models is possible and useful for the implementation of large objects that can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the value of the work on the newly developed models.

  11. Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.

    PubMed

    Yu, Zhang; Yang, Xiaomei

    2013-01-01

    By analyzing the characteristics of the whole-set orders problem and drawing on the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step strategy is integrated into this algorithm. Furthermore, experimental results prove its feasibility and efficiency.

  12. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  13. Nurse Scheduling by Cooperative GA with Effective Mutation Operator

    NASA Astrophysics Data System (ADS)

    Ohki, Makoto

    In this paper, we propose effective mutation operators for a Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem for which many requirements must be considered. In real hospitals, the schedule changes frequently, and such changes to the shift schedule yield various problems, for example, a fall in the nursing level. We describe a technique for re-optimizing the nurse schedule in response to a change. The conventional CGA has strong local search ability by means of its crossover operator, but often stagnates in unfavorable situations because its global search ability is weak. When the optimization stagnates for many generation cycles, the search point (the population, in this case) is likely caught in a wide local-minimum area, and a small change in the population is required to escape it. Based on this consideration, we propose a mutation operator activated depending on the optimization speed: when the optimization stagnates, in other words, when the optimization speed decreases, the mutation introduces small changes into the population, enabling it to escape the local-minimum area. However, this mutation operator requires two well-defined parameters, which means that users have to choose their values carefully. To solve this problem, we propose a periodic mutation operator defined by a single parameter. This simplified mutation operator is effective over a wide range of values of that parameter.
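
    A minimal sketch of such a single-parameter periodic mutation is given below, assuming a nurse-roster encoding of shift identifiers; the period, rate, and encoding are illustrative, not the paper's settings.

```python
import random

def periodic_mutation(population, generation, period=50, rate=0.05):
    """Every `period` generations, flip each gene to a random shift with a
    small probability, nudging the population out of a local-minimum area."""
    if generation % period != 0:
        return population
    shifts = (0, 1, 2, 3)                 # e.g. day, evening, night, off
    for chrom in population:
        for g in range(len(chrom)):
            if random.random() < rate:
                chrom[g] = random.choice(shifts)
    return population

# four chromosomes, each a flat nurse-by-day list of shift identifiers
pop = [[random.choice((0, 1, 2, 3)) for _ in range(14)] for _ in range(4)]
pop = periodic_mutation(pop, generation=50)
```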

  14. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  15. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most research has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured by considering only a single optimization objective; considering the multi-objective IPPS (MOIPPS) problem is therefore inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives such as makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding-distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric-based method is used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
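
    The crowding-distance mechanism used to maintain a fixed-size archive can be sketched as follows; the objective tuples in the example (makespan, total machine load) are illustrative values, not instances from the paper.

```python
def crowding_distance(front):
    """Per objective: sort the front, give the extremes infinite distance,
    and add each interior solution's normalised neighbour gap."""
    n, n_obj = len(front), len(front[0])
    dist = [0.0] * n
    for m in range(n_obj):
        idx = sorted(range(n), key=lambda i: front[i][m])
        dist[idx[0]] = dist[idx[-1]] = float("inf")
        span = front[idx[-1]][m] - front[idx[0]][m] or 1.0
        for k in range(1, n - 1):
            dist[idx[k]] += (front[idx[k + 1]][m] - front[idx[k - 1]][m]) / span
    return dist

# objectives: (makespan, total machine load) for four non-dominated plans
print(crowding_distance([(90, 300), (95, 280), (100, 260), (120, 240)]))
```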

  16. An Expert System for Aviation Squadron Flight Scheduling

    DTIC Science & Technology

    1991-09-01

    A flight schedule is an organization's plan to accomplish specific missions with its available resources. It details the mission...schedule for every 24-hour period, and will occasionally include a weekly flight schedule for long-range planning purposes. The flight schedule is approved...requirements, and aircraft, trainer, and aircrew availability to formulate the flight schedule. It is basically a plan to optimize the squadron's resources...

  17. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions that directly image exoplanets, in particular the Wide Field Infrared Survey Telescope (WFIRST), which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the summed completeness for a mission of fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum summed completeness, the mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
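
    The greedy selection idea can be sketched as repeatedly taking the target with the highest completeness per unit of charged time; the target fields and data below are illustrative, and the actual AYO-based algorithm in EXOSIMS is considerably more involved.

```python
def greedy_target_list(targets, mission_time):
    """Pick stars in order of completeness per unit of charged time
    (integration + overhead) while the mission clock allows."""
    remaining, plan = mission_time, []
    pool = sorted(targets, key=lambda s: s[1] / (s[2] + s[3]), reverse=True)
    for name, comp, t_int, t_ovh in pool:
        if t_int + t_ovh <= remaining:
            plan.append(name)
            remaining -= t_int + t_ovh
    return plan

# (name, completeness, integration time, overhead) -- illustrative values
stars = [("HIP 1", 0.12, 5.0, 1.0), ("HIP 2", 0.08, 2.0, 1.0),
         ("HIP 3", 0.15, 9.0, 1.0)]
print(greedy_target_list(stars, mission_time=12.0))   # ['HIP 2', 'HIP 1']
```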

  18. Optimizing energy for a 'green' vaccine supply chain.

    PubMed

    Lloyd, John; McCarney, Steve; Ouhichi, Ramzi; Lydon, Patrick; Zaffran, Michel

    2015-02-11

    This paper describes an approach piloted in the Kasserine region of Tunisia to increase the energy efficiency of the distribution of vaccines and temperature-sensitive drugs. The objectives of the approach, known as the 'net zero energy' (NZE) supply chain, were demonstrated within the first year of operation. The existing distribution system was modified to store vaccines and medicines in the same buildings and to transport them according to pre-scheduled and optimized delivery circuits. Electric utility vehicles, dedicated to the integrated delivery of vaccines and medicines, improved the regularity and reliability of the supply chains. Solar energy, linked to the electricity grid at regional and district stores, supplied over 100% of consumption, meeting all energy needs for storage, cooling and transportation. Significant benefits to the quality and costs of distribution were demonstrated: supply trips were scheduled, integrated and reliable; energy consumption was reduced; the recurrent cost of electricity was eliminated; and the release of carbon to the atmosphere was reduced. Although the initial capital costs of scaling up NZE implementation remain high today, commercial forecasts predict cost reductions for solar energy and electric vehicles that may permit a step-wise implementation over the next 7-10 years. Efficiency in the use of energy and in the deployment of transport is already a critical component of distribution logistics in both the private and public sectors of industrialized countries. The NZE approach has an intensified rationale in countries where energy costs threaten the maintenance of public health services in areas of low population density. In these countries, where the mobility of health personnel and the timely arrival of supplies are at risk, NZE has the potential to reduce energy costs and release recurrent budget to other needs of service delivery while also improving the supply chain. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Analysis of electric power industry restructuring

    NASA Astrophysics Data System (ADS)

    Al-Agtash, Salem Yahya

    1998-10-01

    This thesis evaluates alternative structures of the electric power industry in a competitive environment. One structure is based on the principle of creating a mandatory power pool to foster competition and manage system economics: PoolCo (pool coordination). A second structure is based on the principle of allowing independent multilateral trading and decentralized market coordination: DecCo (decentralized coordination). The criteria I use to evaluate these two structures are economic efficiency, system reliability, and freedom of choice. The economic efficiency evaluation considers the strategic behavior of individual generators as well as behavioral variations across different classes of consumers. A supply-function equilibria model is characterized for deriving bidding strategies of competing generators under PoolCo; it is shown that asymmetric equilibria can exist within the capacities of generators. An augmented Lagrangian approach is introduced to solve iteratively for globally optimal operations schedules. Under DecCo, the process involves solving iteratively for system operations schedules that reflect generators' strategic behavior and brokers' interactions in arranging profitable trades, allocating losses, and managing network congestion. In determining PoolCo and DecCo operations schedules, the overall costs of power generation (start-up and shut-down costs and the availability of hydroelectric power) as well as the losses and costs of the transmission network are considered. For the system reliability evaluation, I examine the effect of PoolCo and DecCo operating conditions on system security: random component-failure perturbations are generated to simulate actual system behavior using Monte Carlo simulation. The freedom of choice evaluation accounts for each scheme's beneficial opportunities and its capability to respond to consumers' expressed preferences. An IEEE 24-bus test system is used to illustrate the concepts developed for the economic efficiency evaluation. The system was tested over a two-year period. The results indicate efficiency losses of 2.6684 and 2.7269 percent on average for PoolCo and DecCo, respectively. These values, however, do not represent forecasts of efficiency losses for PoolCo- and DecCo-based competitive industries; rather, they illustrate the efficiency losses for the given IEEE test system under the modeling assumptions of the framework.

  20. Multi-objective Optimization of Solar-driven Hollow-fiber Membrane Distillation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nenoff, Tina M.; Moore, Sarah E.; Mirchandani, Sera

    Securing additional water sources remains a primary concern for arid regions in both the developed and developing world. Climate change is causing fluctuations in the frequency and duration of precipitation, which can be seen as prolonged droughts in some arid areas. Droughts decrease the reliability of surface water supplies, which forces communities to find alternate primary water sources. In many cases, ground water can supplement the use of surface supplies during periods of drought, reducing the need for above-ground storage without sacrificing reliability objectives. Unfortunately, accessible ground waters are often brackish, requiring desalination prior to use, and underdeveloped infrastructure and inconsistent electrical grid access can create obstacles to groundwater desalination in developing regions. The objectives of the proposed project are to (i) mathematically simulate the operation of hollow-fiber membrane distillation systems and (ii) optimize system design for off-grid treatment of brackish water. It is anticipated that methods developed here can be used to supply potable water at many off-grid locations in semi-arid regions, including parts of the Navajo Reservation. This research is a collaborative project between Sandia and the University of Arizona.

  1. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule that optimizes a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem of minimizing the total weighted quadratic completion time. Under a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, in which a new crossover method with multiple-point insertion improves the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
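
    The WSPT ordering at the heart of WSPT-CC is easy to sketch: sequence jobs by non-decreasing ratio of processing time to weight. The consistency condition and the quadratic objective of the paper are not reproduced in this illustration, and the job data are made up.

```python
def wspt_order(jobs):
    """Sequence jobs by non-decreasing processing-time-to-weight ratio."""
    return sorted(jobs, key=lambda j: j["p"] / j["w"])

jobs = [{"id": 1, "p": 4.0, "w": 2.0},
        {"id": 2, "p": 3.0, "w": 3.0},
        {"id": 3, "p": 6.0, "w": 1.0}]
print([j["id"] for j in wspt_order(jobs)])   # -> [2, 1, 3]
```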

  2. Aspects of job scheduling

    NASA Technical Reports Server (NTRS)

    Phillips, K.

    1976-01-01

    A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.

  3. Dynamic Scheduling for Web Monitoring Crawler

    DTIC Science & Technology

    2009-02-27

    There is research on static scheduling methods, but it is not included in this project, because this project mainly focuses on event-driven...pages from public search engines. This research aims to propose various query generation methods using the MCRDR knowledge base and evaluates them to...

  4. Baseline Preferences for Daily, Event-Driven, or Periodic HIV Pre-Exposure Prophylaxis among Gay and Bisexual Men in the PRELUDE Demonstration Project.

    PubMed

    Vaccher, Stefanie J; Gianacas, Christopher; Templeton, David J; Poynten, Isobel M; Haire, Bridget G; Ooi, Catriona; Foster, Rosalind; McNulty, Anna; Grulich, Andrew E; Zablotska, Iryna B

    2017-01-01

    The effectiveness of daily pre-exposure prophylaxis (PrEP) is well established. However, there has been increasing interest in non-daily dosing schedules among gay and bisexual men (GBM). This paper explores preferences for PrEP dosing schedules among GBM at baseline in the PRELUDE demonstration project. Individuals at high-risk of HIV were enrolled in a free PrEP demonstration project in New South Wales, Australia, between November 2014 and April 2016. At baseline, they completed an online survey containing detailed behavioural, demographic, and attitudinal questions, including their ideal way to take PrEP: daily (one pill taken every day), event-driven (pills taken only around specific risk events), or periodic (daily dosing during periods of increased risk). Overall, 315 GBM (98% of study sample) provided a preferred PrEP dosing schedule at baseline. One-third of GBM expressed a preference for non-daily PrEP dosing: 20% for event-driven PrEP, and 14% for periodic PrEP. Individuals with a trade/vocational qualification were more likely to prefer periodic to daily PrEP [adjusted odds ratio (aOR) = 4.58, 95% confidence intervals (95% CI): (1.68, 12.49)], compared to individuals whose highest level of education was high school. Having an HIV-positive main regular partner was associated with strong preference for daily, compared to event-driven PrEP [aOR = 0.20, 95% CI: (0.04, 0.87)]. Participants who rated themselves better at taking medications were more likely to prefer daily over periodic PrEP [aOR = 0.39, 95% CI: (0.20, 0.76)]. Individuals' preferences for PrEP schedules are associated with demographic and behavioural factors that may impact on their ability to access health services and information about PrEP and patterns of HIV risk. At the time of data collection, there were limited data available about the efficacy of non-daily PrEP schedules, and clinicians only recommended daily PrEP to study participants. Further research investigating how behaviours and PrEP preferences change correspondingly over time is needed. ClinicalTrials.gov NCT02206555. Registered 28 July 2014.

  5. The LSST operations simulator

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Saha, Abhijit; Chandrasekharan, Srinivasan; Cook, Kem; Petry, Catherine; Ridgway, Stephen

    2014-08-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://www.lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions as well as additional scheduled and unscheduled downtime. It has a detailed model to simulate the external conditions with real weather history data from the site, a fully parameterized kinematic model for the internal conditions of the telescope, camera and dome, and serves as a prototype for an automatic scheduler for the real time survey operations with LSST. The Simulator is a critical tool that has been key since very early in the project, to help validate the design parameters of the observatory against the science requirements and the goals from specific science programs. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. Software to efficiently compare the efficacy of different survey strategies for a wide variety of science applications using such a growing set of metrics is under development. A recent restructuring of the code allows us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator is being used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities and assist with performance margin investigations of the LSST system.

  6. A comprehensive approach for diagnosing opportunities for improving the performance of a WWTP.

    PubMed

    Silva, C; Matos, J Saldanha; Rosa, M J

    2016-12-01

    High-quality wastewater treatment services require continuous assessment and improvement of technical, environmental and economic performance. This paper demonstrates a comprehensive approach for benchmarking wastewater treatment plants (WWTPs), using performance indicators (PIs) and indices (PXs), in a 'plan-do-check-act' cycle driven by objectives. The performance objectives illustrated here were to diagnose the effectiveness and energy performance of an oxidation ditch WWTP. The PI and PX results demonstrated an effective and reliable oxidation ditch (good-excellent performance) and a non-reliable UV disinfection stage (unsatisfactory-excellent performance) related to influent transmittance and total suspended solids. The energy performance increased with the treated wastewater volume and was unsatisfactory below 50% of plant capacity utilization. The oxidation ditch aeration performed unsatisfactorily and represented 38% of the plant's energy consumption. The results allowed diagnosing opportunities for improving the energy and economic performance considering the influent flows, temperature and concentrations, and for raising the WWTP performance to acceptable-good effectiveness, reliability and energy efficiency. Regarding plant reliability for fecal coliforms, improved UV lamp maintenance, optimization of the applied UV dose, and microscreen recommissioning were suggested.

  7. Nuclear electric propulsion operational reliability and crew safety study: NEP systems/modeling report

    NASA Technical Reports Server (NTRS)

    Karns, James

    1993-01-01

    The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.

  8. Machine Maintenance Scheduling with Reliability Engineering Method and Maintenance Value Stream Mapping

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Nasution, A. H.

    2018-02-01

    Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is always performed in a manufacturing company. It forces the production process to stop: production time decreases because the maintenance team must replace or repair the damaged component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company, in order to increase maintenance efficiency. Reliability Engineering and Maintenance Value Stream Mapping are used as the method and tool to analyze the reliability of the component and to reduce waste in any process by segregating value-added and non-value-added activities.

  9. Scheduling optimization of design stream line for production research and development projects

    NASA Astrophysics Data System (ADS)

    Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming

    2017-05-01

    In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of the design stream line scheduling problem is proposed with the constraints of multiple execution modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. A two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.

  10. Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Cosntraints

    NASA Astrophysics Data System (ADS)

    Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.

    2018-01-01

    The Vehicle Routing Problem (VRP) is an important element of many logistic systems involving the routing and scheduling of vehicles from a depot to a set of customer nodes. It is a hard combinatorial optimization problem whose objective is to find an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers, with the requirement that the vehicles return to the depot after serving those demands. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery over the planning horizon. The goal is to determine the fleet and driver schedules and the routing policies of the vehicles; the objective is to minimize the overall cost of all routes over the planning horizon. We model the problem as a linear mixed-integer program and develop a combination of heuristics and an exact method for solving the model.

  11. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and total normal processing time of jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
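
    For intuition, the sketch below enumerates a five-job instance under one common model from this literature (actual processing time p·(1 + prior normal work)^a plus a deterioration term b·t; the functional form and the values a = -0.5, b = 0.1 are assumptions, not taken from the paper) and confirms that the minimizer is V-shaped in the normal processing times.

    ```python
    # Illustrative check of the V-shaped property on a tiny instance.
    from itertools import permutations

    def total_completion_time(seq, a=-0.5, b=0.1):
        """Total completion time under an assumed model: actual time of a
        job = p * (1 + sum of prior normal times)**a + b * start time."""
        t, total, prior = 0.0, 0.0, 0.0
        for p in seq:
            t += p * (1.0 + prior) ** a + b * t   # old t = start time
            total += t
            prior += p
        return total

    def is_v_shaped(seq):
        """Non-increasing, then non-decreasing, in normal processing times."""
        k = seq.index(min(seq))
        return all(x >= y for x, y in zip(seq[:k], seq[1:k + 1])) and \
               all(x <= y for x, y in zip(seq[k:], seq[k + 1:]))

    jobs = [5.0, 2.0, 8.0, 3.0, 6.0]
    best = min(permutations(jobs), key=total_completion_time)
    print(best, is_v_shaped(list(best)))      # the optimum is V-shaped
    ```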

  12. 75 FR 15371 - Time Error Correction Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... Electric Reliability Council of Texas (ERCOT) manages the flow of electric power to 22 million Texas customers. As the independent system operator for the region, ERCOT schedules power on an electric grid that... Coordinating Council (WECC) is responsible for coordinating and promoting bulk electric system reliability in...

  13. Cloud computing task scheduling strategy based on differential evolution and ant colony optimization

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; Cai, Yu; Fang, Yiqiu

    2018-05-01

    This paper proposes DEACO, a task scheduling strategy based on the combination of Differential Evolution (DE) and Ant Colony Optimization (ACO). Addressing the fact that cloud computing task scheduling is usually driven by a single optimization objective, it jointly considers the shortest task completion time, cost, and load balancing. DEACO uses the solution of the DE to initialize the pheromone of the ACO, which reduces the time spent accumulating pheromone in the early ACO iterations, and improves the pheromone updating rule through a load factor. The proposed algorithm is simulated on CloudSim and compared with min-min and plain ACO. The experimental results show that DEACO is superior in terms of time, cost, and load.
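
    The hand-off from DE to ACO is the heart of DEACO. The sketch below is not the authors' code: the task lengths, VM speeds, and a makespan-only objective are assumptions, and the DE is bare-bones; it only shows a DE seed assignment being converted into an extra pheromone deposit that biases subsequent ants.

    ```python
    # Minimal sketch of seeding ACO pheromone from a DE solution.
    import numpy as np

    rng = np.random.default_rng(0)
    n_tasks, n_vms = 20, 4
    length = rng.uniform(10, 100, n_tasks)    # task lengths (assumed units)
    speed = rng.uniform(1, 4, n_vms)          # VM speeds (assumed units)

    def makespan(assign):
        loads = np.zeros(n_vms)
        for t, v in enumerate(assign):
            loads[v] += length[t] / speed[v]
        return loads.max()

    # Stage 1: bare-bones DE over continuous keys, decoded by truncation.
    pop = rng.uniform(0, n_vms, (30, n_tasks))
    for _ in range(200):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), 0, n_vms - 1e-9)
            if makespan(trial.astype(int)) < makespan(pop[i].astype(int)):
                pop[i] = trial
    de_best = min(pop, key=lambda x: makespan(x.astype(int))).astype(int)

    # Stage 2: initialize pheromone biased toward the DE solution.
    tau = np.ones((n_tasks, n_vms))
    tau[np.arange(n_tasks), de_best] += 5.0   # extra deposit on DE edges

    # Ants sample assignments in proportion to tau (full ACO loop omitted).
    probs = tau / tau.sum(axis=1, keepdims=True)
    ant = np.array([rng.choice(n_vms, p=probs[t]) for t in range(n_tasks)])
    print(makespan(de_best), makespan(ant))
    ```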

  14. Optimal Scheduling and Fair Service Policy for STDMA in Underwater Networks with Acoustic Communications

    PubMed Central

    2018-01-01

    In this work, a multi-hop string network with a single sink node is analyzed. A periodic optimal scheduling for TDMA operation that considers the characteristic long propagation delay of the underwater acoustic channel is presented. This planning of transmissions is obtained with the help of a new geometrical method based on a 2D lattice in the space-time domain. In order to evaluate the performance of this optimal scheduling, two service policies have been compared: FIFO and Round-Robin. Simulation results, including achievable throughput, packet delay, and queue length, are shown. The network fairness has also been quantified with the Gini index. PMID:29462966
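
    The Gini index used to quantify fairness has a simple closed form over the sorted per-node values (0 = perfectly fair, 1 = maximally unfair); the throughput figures below are illustrative, not from the paper.

    ```python
    # Gini index of per-node throughput as a fairness measure.
    def gini(x):
        xs = sorted(x)
        n = len(xs)
        cum = sum((i + 1) * v for i, v in enumerate(xs))   # rank-weighted sum
        return (2.0 * cum) / (n * sum(xs)) - (n + 1.0) / n

    throughput = [0.9, 0.8, 0.75, 0.5, 0.3]   # packets/s per node (made up)
    print(round(gini(throughput), 3))
    ```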

  15. Scheduling Software for Complex Scenarios

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Preparing a vehicle and its payload for a single launch is a complex process that involves thousands of operations. Because the equipment and facilities required to carry out these operations are extremely expensive and limited in number, optimal assignment and efficient use are critically important. Overlapping missions that compete for the same resources, ground rules, safety requirements, and the unique needs of processing vehicles and payloads destined for space impose numerous constraints that, when combined, require advanced scheduling. Traditional scheduling systems use simple algorithms and criteria when selecting activities and assigning resources and times to each activity. Schedules generated by these simple decision rules are, however, frequently far from optimal. To resolve mission-critical scheduling issues and predict possible problem areas, NASA historically relied upon expert human schedulers who used their judgment and experience to determine where things should happen, whether they will happen on time, and whether the requested resources are truly necessary.

  16. Two-machine flow shop scheduling integrated with preventive maintenance planning

    NASA Astrophysics Data System (ADS)

    Wang, Shijin; Liu, Ming

    2016-02-01

    This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop, with the time to failure of each machine subject to a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results with several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and also show that the proposed GA-based heuristics are efficient for the integrated problem.
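
    As a concrete illustration of why PM decisions interact with the job sequence, the following sketch (not the paper's model: it assumes a single machine, resumable jobs, a renewing repair, and made-up durations and Weibull parameters) estimates the expected completion time of a sequence under a given PM plan by Monte Carlo. A total-enumeration wrapper over sequences and PM vectors would reproduce the small-scale experiments in spirit.

    ```python
    # Monte Carlo estimate of expected completion time with Weibull failures.
    import math, random

    def weibull_ttf(k=2.0, lam=50.0):
        """Sample time to failure by inverse transform of Weibull(k, lam)."""
        return lam * (-math.log(1.0 - random.random())) ** (1.0 / k)

    def expected_time(jobs, pm_before, pm_dur=2.0, rep_dur=8.0, runs=5000):
        total = 0.0
        for _ in range(runs):
            t, age_left = 0.0, weibull_ttf()
            for j, p in enumerate(jobs):
                if pm_before[j]:            # preventive maintenance renews
                    t += pm_dur
                    age_left = weibull_ttf()
                while p > age_left:         # failure interrupts the job
                    t += age_left + rep_dur
                    p -= age_left           # assume the job can resume
                    age_left = weibull_ttf()
                t += p
                age_left -= p
            total += t
        return total / runs

    jobs = [12.0, 7.0, 20.0, 9.0]
    print(expected_time(jobs, [False, True, False, True]))
    print(expected_time(jobs, [False] * 4))   # no PM, more failure rework
    ```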

  17. Scheduling IT Staff at a Bank: A Mathematical Programming Approach

    PubMed Central

    Labidi, M.; Mrad, M.; Gharbi, A.; Louly, M. A.

    2014-01-01

    We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff member, considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high quality solutions can be obtained within a few seconds compared to the manually prepared schedules. PMID:24772032

  18. Scheduling IT staff at a bank: a mathematical programming approach.

    PubMed

    Labidi, M; Mrad, M; Gharbi, A; Louly, M A

    2014-01-01

    We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff member, considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high quality solutions can be obtained within a few seconds compared to the manually prepared schedules.
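
    For readers who want to see the shape of such a model, here is a minimal sketch in PuLP rather than Lingo; the staff counts, demands, weights, and preference set are invented for illustration, and the seniority rules are reduced to a single workload-balancing term.

    ```python
    # Minimal multiobjective-style shift roster as a weighted MIP in PuLP.
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

    staff, days, shifts = range(6), range(7), ("day", "night")
    demand = {"day": 2, "night": 1}           # required staff per shift
    prefer_off = {(0, 5), (1, 6), (3, 0)}     # (staff, day) days-off wishes

    m = LpProblem("it_staff_roster", LpMinimize)
    x = {(s, d, h): LpVariable(f"x_{s}_{d}_{h}", cat=LpBinary)
         for s in staff for d in days for h in shifts}

    for d in days:                            # shift coverage requirements
        for h in shifts:
            m += lpSum(x[s, d, h] for s in staff) >= demand[h]
    for s in staff:                           # at most one shift per day
        for d in days:
            m += lpSum(x[s, d, h] for h in shifts) <= 1

    # Weighted objectives: balance workloads, then respect preferences.
    load = {s: lpSum(x[s, d, h] for d in days for h in shifts) for s in staff}
    zmax = LpVariable("zmax")
    for s in staff:
        m += load[s] <= zmax
    m += 10 * zmax + lpSum(x[s, d, h] for (s, d) in prefer_off for h in shifts)

    m.solve()
    print({s: int(sum(x[s, d, h].value() for d in days for h in shifts))
           for s in staff})                   # shifts assigned per person
    ```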

  19. Towards optimization of ACRT schedules applied to the gradient freeze growth of cadmium zinc telluride

    NASA Astrophysics Data System (ADS)

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-12-01

    Historically, the melt growth of II-VI crystals has benefitted from the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The "flow maximizing" ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. These counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  20. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  1. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  2. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
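
    The surrogate-plus-GA pattern described in the three records above can be sketched in a few lines. In this toy version a made-up analytic function stands in for CE-QUAL-W2, an MLP is trained on sampled release schedules, and a bare-bones GA maximizes the surrogate under an illustrative DO-style penalty; none of the numbers are from the study.

    ```python
    # Sketch: train an ANN surrogate of an "expensive" model, then GA-search it.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    def expensive_model(q):          # stand-in: power value minus DO behavior
        power = (q * (1.0 - 0.002 * q)).sum()
        do = 8.0 - 0.05 * q.mean()   # pretend DO falls as release rises
        return power, do

    X = rng.uniform(0, 40, (300, 24))             # hourly release schedules
    y = np.array([expensive_model(x)[0] for x in X])
    surrogate = MLPRegressor((32, 32), max_iter=2000, random_state=0).fit(X, y)

    def fitness(q):                               # penalize DO below 5 mg/L
        do = 8.0 - 0.05 * q.mean()
        return surrogate.predict(q[None])[0] - 1e3 * max(0.0, 5.0 - do)

    pop = rng.uniform(0, 40, (40, 24))            # basic GA: tournament+blend
    for _ in range(100):
        f = np.array([fitness(p) for p in pop])
        idx = np.array([max(rng.choice(40, 2), key=lambda i: f[i])
                        for _ in range(40)])      # tournament selection
        parents = pop[idx]
        children = 0.5 * (parents + parents[::-1])        # blend crossover
        children += rng.normal(0, 1.0, children.shape)    # mutation
        pop = np.clip(children, 0, 40)
    best = max(pop, key=fitness)
    print(expensive_model(best))                  # check against true model
    ```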

  3. Online stochastic optimization of radiotherapy patient scheduling.

    PubMed

    Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin

    2015-06-01

    The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.

  4. Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability

    DOE PAGES

    Liu, Guodong; Starke, Michael R.; Xiao, B.; ...

    2017-01-13

    To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, the probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation, taking into account the probability distributions of the forecast errors of wind, PV, and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed-integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
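
    A quick way to build intuition for PSI is a Monte Carlo version (the paper instead uses a multi-interval MILP approximation; the error distributions and reserve levels below are assumptions):

    ```python
    # Monte Carlo illustration of the probability of successful islanding.
    import numpy as np

    rng = np.random.default_rng(2)

    def psi(up_reserve, dn_reserve, n=100_000):
        """P(reserves cover the net forecast error at the islanding instant)."""
        wind_err = rng.normal(0, 3.0, n)    # MW forecast errors (assumed)
        pv_err = rng.normal(0, 2.0, n)
        load_err = rng.normal(0, 4.0, n)
        net = load_err - wind_err - pv_err  # extra power needed if positive
        return np.mean((net <= up_reserve) & (-net <= dn_reserve))

    for r in (5, 10, 15, 20):               # pick the cheapest r with PSI >= 0.95
        print(r, round(psi(r, r), 3))
    ```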

  5. An improved ant colony optimization algorithm with fault tolerance for job scheduling in grid computing systems

    PubMed Central

    Idris, Hajara; Junaidu, Sahalu B.; Adewumi, Aderemi O.

    2017-01-01

    The Grid scheduler schedules user jobs on the best available resource in terms of resource characteristics by optimizing job execution time. Resource failure in Grid is no longer an exception but a regularly occurring event, as resources are increasingly being used by the scientific community to solve computationally intensive problems which typically run for days or even months. It is therefore absolutely essential that these long-running applications are able to tolerate failures and avoid re-computations from scratch after a resource failure has occurred, to satisfy the user's Quality of Service (QoS) requirement. Job scheduling with fault tolerance in Grid computing using Ant Colony Optimization is proposed to ensure that jobs are executed successfully even when resource failure has occurred. The technique employed in this paper is the use of the resource failure rate, as well as a checkpoint-based rollback recovery strategy. Checkpointing aims at reducing the amount of work that is lost upon failure of the system by periodically saving the state of the system. A comparison of the proposed approach with an existing Ant Colony Optimization (ACO) algorithm is discussed. The experimental results of the implemented fault-tolerant scheduling algorithm show that there is an improvement in the user's QoS requirement over the existing ACO algorithm, which has no fault tolerance integrated in it. The performance evaluation of the two algorithms was measured in terms of the three main scheduling performance metrics: makespan, throughput and average turnaround time. PMID:28545075
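
    The value of checkpointing can be sketched with a standard first-order estimate for work-with-restart under exponential failures (a Daly-style approximation with the restart cost folded into the checkpoint cost; all numbers are illustrative, not from the paper):

    ```python
    # Expected wall time with periodic checkpoints under failure rate lam.
    import math

    def expected_runtime(work, lam, ckpt_interval, ckpt_cost):
        """Expected hours to finish `work` hours of computation; on failure,
        only the work since the last checkpoint is redone."""
        segments = math.ceil(work / ckpt_interval)
        seg = work / segments
        # Expected time per segment under exponential failures, restarting
        # the segment from its checkpoint after each failure.
        e_seg = (math.exp(lam * (seg + ckpt_cost)) - 1.0) / lam
        return segments * e_seg

    for T in (0.5, 1, 2, 4, 8):             # checkpoint interval sweep
        print(T, round(expected_runtime(24, 0.1, T, 0.05), 2))
    ```

    Running the sweep shows the classic trade-off: very frequent checkpoints waste time on overhead, very rare ones waste time on rework, and an interior interval minimizes the expected runtime.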

  6. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization.

    PubMed

    Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm's flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software.

  7. Multi-objective AGV scheduling in an FMS using a hybrid of genetic algorithm and particle swarm optimization

    PubMed Central

    Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah

    2017-01-01

    Flexible manufacturing system (FMS) enhances the firm’s flexibility and responsiveness to the ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV as a mobile robot provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single objective practices, is a complex and combinatorial process. In the main draw of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and number of AGVs while considering the AGVs’ battery charge. Assessment of the numerical examples’ scheduling before and after the optimization proved the applicability of all the three algorithms in decreasing the makespan and AGV numbers. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, in which the mean of AGVs operation efficiency was found to be 69.4, 74, and 79.8 percent in PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model was performed by simulation via Flexsim software. PMID:28263994

  8. Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization

    NASA Astrophysics Data System (ADS)

    Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.

    2018-06-01

    The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks at appropriate times. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy allows defining the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat energy harvesting capability by 4.48% in a three-orbit experiment and by up to 8.46% in a single orbit cycle, in comparison with the CubeSat operating without the scheduling algorithm.
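
    The software-based Perturb and Observe strategy can be sketched as follows; the panel model and step size are toy stand-ins, and "perturbing" here abstracts the paper's mechanism of triggering tasks that shift the electrical operating point.

    ```python
    # Minimal perturb-and-observe loop converging to the maximum power point.
    def simulated_panel_power(v):
        """Toy I-V curve with open-circuit voltage near 21 V (assumed)."""
        i = 3.0 * (1.0 - (v / 21.0) ** 8)
        return v * max(i, 0.0)

    def perturb_and_observe(v0=12.0, step=0.2, iters=60):
        v, p = v0, simulated_panel_power(v0)
        direction = +1.0
        for _ in range(iters):
            v_new = v + direction * step    # perturb the operating point
            p_new = simulated_panel_power(v_new)
            if p_new < p:                   # power dropped: reverse direction
                direction = -direction
            v, p = v_new, p_new
        return v, p

    print(perturb_and_observe())            # settles near the power peak
    ```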

  9. Revisiting Bevacizumab + Cytotoxics Scheduling Using Mathematical Modeling: Proof of Concept Study in Experimental Non-Small Cell Lung Carcinoma.

    PubMed

    Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien

    2018-01-01

    Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy when using sequential vs. concomitant scheduling of bevacizumab and chemotherapy. Combining these data with a mathematical model of tumor growth under therapy accounting for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with tumor growth reduced by 38% compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). An alternate sequencing of 8 days failed to achieve a similar increase in efficacy, emphasizing the utility of modeling support to identify optimal scheduling. The model could also be a useful tool in the clinic to personally tailor regimen sequences. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  10. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generator's rated power, several other practicalities, such as limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics in such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, operation scheduling is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs and then the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  11. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    The schedule of urban road network recovery caused by rainstorms, snow, and other bad weather conditions, traffic incidents, and other daily events is essential. However, limited studies have been conducted to investigate this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Critical links are given priority in repair according to the basic concept of the greedy algorithm. In this study, we consider the link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst network; we define such a link as the critical link for the current network. We re-evaluate the importance of damaged links after each repair process is completed. That is, the critical link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can still quickly obtain an optimal schedule even if the scale of the road network is large, because the greedy algorithm reduces computational complexity. We prove that the problem can obtain the optimal solution using the greedy algorithm in theory. The algorithm is also demonstrated in the Sioux Falls network. The problem discussed in this paper is highly significant in dealing with urban road network restoration. PMID:27768732
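
    The greedy loop is easy to state in code. The sketch below uses hypothetical link data and a simple additive stand-in for the traffic-assignment evaluation; the paper's ratio-to-worst-network criterion is simplified to minimizing the current system-wide travel time, which selects the same link when the worst-network denominator is fixed.

    ```python
    # Greedy repair schedule: always restore the currently most critical link.
    PENALTY = {"a": 40.0, "b": 25.0, "c": 10.0, "d": 5.0}   # illustrative

    def travel_time(working_links):
        """Stand-in for traffic assignment: each still-broken link adds
        its detour penalty to a base system-wide travel time."""
        return 100.0 + sum(p for l, p in PENALTY.items()
                           if l not in working_links)

    def greedy_repair_schedule(damaged):
        schedule, working = [], set()
        while damaged:
            # Critical link: the one whose restoration helps most right now.
            best = min(damaged, key=lambda l: travel_time(working | {l}))
            schedule.append(best)
            working.add(best)
            damaged.remove(best)
        return schedule

    print(greedy_repair_schedule(set(PENALTY)))   # repairs 'a' first
    ```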

  12. Measuring the effects of heterogeneity on distributed systems

    NASA Technical Reports Server (NTRS)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been mainly driven by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results, which indicate that random scheduling may be as good as a more complex scheduler, the algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  13. Capacitated vehicle-routing problem model for scheduled solid waste collection and route optimization using PSO algorithm.

    PubMed

    Hannan, M A; Akhtar, Mahmuda; Begum, R A; Basri, H; Hussain, A; Scavino, Edgar

    2018-01-01

    Waste collection widely depends on the route optimization problem, which involves a large amount of expenditure in terms of capital, labor, and variable operational costs. Thus, the more the waste collection routes are optimized, the greater the reduction in costs and environmental effects will be. This study proposes a modified particle swarm optimization (PSO) algorithm in a capacitated vehicle-routing problem (CVRP) model to determine the best waste collection and route optimization solutions. In this study, threshold waste level (TWL) and scheduling concepts are applied in the PSO-based CVRP model under different datasets. The obtained results from different datasets show that the proposed algorithmic CVRP model provides the best waste collection and route optimization in terms of travel distance, total waste, waste collection efficiency, and tightness at 70-75% of TWL. The obtained results for one-week scheduling show that 70% of TWL performs better than considering all nodes in terms of collected waste, distance, tightness, efficiency, fuel consumption, and cost. The proposed optimized model can serve as a valuable tool for waste collection and route optimization toward reducing socioeconomic and environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Simultaneously optimizing dose and schedule of a new cytotoxic agent.

    PubMed

    Braun, Thomas M; Thall, Peter F; Nguyen, Hoang; de Lima, Marcos

    2007-01-01

    Traditionally, phase I clinical trial designs are based upon one predefined course of treatment while varying among patients the dose given at each administration. In actual medical practice, patients receive a schedule comprised of several courses of treatment, and some patients may receive one or more dose reductions or delays during treatment. Consequently, the overall risk of toxicity for each patient is a function of both the actual schedule of treatment and the differing doses used at each administration. Our goal is to provide a practical phase I clinical trial design that more accurately reflects actual medical practice by accounting for both dose per administration and schedule. We propose an outcome-adaptive Bayesian design that simultaneously optimizes both dose and schedule in terms of the overall risk of toxicity, based on time-to-toxicity outcomes. We use computer simulation as a tool to calibrate design parameters. We describe a phase I trial in allogeneic bone marrow transplantation that was designed and is currently being conducted using our new method. Our computer simulations demonstrate that our method outperforms any method that searches for an optimal dose but does not allow schedule to vary, both in terms of the probability of identifying optimal (dose, schedule) combinations, and the numbers of patients assigned to those combinations in the trial. Our design requires greater sample sizes than those seen in traditional phase I studies due to the larger number of treatment combinations examined. Our design also assumes that the effects of multiple administrations are independent of each other and that the hazard of toxicity is the same for all administrations. Our design is the first for phase I clinical trials that is sufficiently flexible and practical to truly reflect clinical practice by varying both dose and the timing and number of administrations given to each patient.

  15. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with an over 20-fold reduction in the run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  16. Organizational Factors and the Cancer Screening Process

    PubMed Central

    Zapka, Jane; Edwards, Heather; Taplin, Stephen H.

    2010-01-01

    Cancer screening is a process of care consisting of several steps and interfaces. This article reviews what is known about the association between organizational factors and cancer screening rates and examines how organizational strategies can address the steps and interfaces of cancer screening in the context of both intraorganizational and interorganizational processes. We reviewed 79 studies assessing the relationship between organizational factors and cancer screening. Screening rates are largely driven by strategies to 1) limit the number of interfaces across organizational boundaries; 2) recruit patients, promote referrals, and facilitate appointment scheduling; and 3) promote continuous patient care. Optimal screening rates can be achieved when health-care organizations tailor strategies to the steps and interfaces in the cancer screening process that are most critical for their organizations, the providers who work within them, and the patients they serve. PMID:20386053

  17. Organizational factors and the cancer screening process.

    PubMed

    Anhang Price, Rebecca; Zapka, Jane; Edwards, Heather; Taplin, Stephen H

    2010-01-01

    Cancer screening is a process of care consisting of several steps and interfaces. This article reviews what is known about the association between organizational factors and cancer screening rates and examines how organizational strategies can address the steps and interfaces of cancer screening in the context of both intraorganizational and interorganizational processes. We reviewed 79 studies assessing the relationship between organizational factors and cancer screening. Screening rates are largely driven by strategies to 1) limit the number of interfaces across organizational boundaries; 2) recruit patients, promote referrals, and facilitate appointment scheduling; and 3) promote continuous patient care. Optimal screening rates can be achieved when health-care organizations tailor strategies to the steps and interfaces in the cancer screening process that are most critical for their organizations, the providers who work within them, and the patients they serve.

  18. Metal Matrix Superconductor Composites for SMES-Driven, Ultra High Power BEP Applications: Part 2

    NASA Astrophysics Data System (ADS)

    Gross, Dan A.; Myrabo, Leik N.

    2006-05-01

    A 2.5 TJ superconducting magnetic energy storage (SMES) design presentation is continued from the preceding paper (Part 1) with electromagnetic and associated stress analysis. The application of interest is a rechargeable power-beaming infrastructure for manned microwave Lightcraft operations. It is demonstrated that while operational performance is within manageable parameter bounds, quench (loss of superconducting state) imposes enormous electrical stresses. Therefore, alternative multiple toroid modular configurations are identified, alleviating simultaneously all excessive stress conditions, operational and quench, in the structural, thermal and electromagnetic sense — at some reduction in specific energy, but presenting programmatic advantages for a lengthy technology development, demonstration and operation schedule. To this end several natural units, based on material properties and operating parameters are developed, in order to identify functional relationships and optimization paths more effectively.

  19. Design criteria and candidate electrical power systems for a reusable Space Shuttle booster.

    NASA Technical Reports Server (NTRS)

    Merrifield, D. V.

    1972-01-01

    This paper presents the results of a preliminary study to establish electrical power requirements, investigate candidate power sources, and select a representative power generation concept for the NASA Space Shuttle booster stage. Design guidelines and system performance requirements are established. Candidate power sources and combinations thereof are defined and weight estimates made. The selected power source concept utilizes secondary silver-zinc batteries, engine-driven alternators with constant speed drive, and an airbreathing gas turbine. The need for cost optimization, within safety, reliability, and performance constraints, is emphasized as the most important criterion in the design of the final system.

  20. A Risk Radar driven by Internet of intelligences serving for emergency management in community.

    PubMed

    Huang, Chongfu; Wu, Tong; Renn, Ortwin

    2016-07-01

    Today, most commercial risk radars only display risks, much like a set of risk matrices. In this paper, we develop the Internet of intelligences (IOI) to drive a risk radar that monitors dynamic risks for emergency management in a community. An IOI scans risks in a community in four stages: collecting information and experience about risks; evaluating risk incidents; verifying; and showing risks. Employing the information diffusion method, we optimize the processing of the collected information for calculating risk values. A specific case demonstrates the reliability and practicability of the risk radar. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A framework for using ant colony optimization to schedule environmental flow management alternatives for rivers, wetlands, and floodplains

    NASA Astrophysics Data System (ADS)

    Szemis, J. M.; Maier, H. R.; Dandy, G. C.

    2012-08-01

    Rivers, wetlands, and floodplains are in need of management as they have been altered from natural conditions and are at risk of vanishing because of river development. One method to mitigate these impacts involves the scheduling of environmental flow management alternatives (EFMA); however, this is a complex task as there are generally a large number of ecological assets (e.g., wetlands) that need to be considered, each with species with competing flow requirements. Hence, this problem evolves into an optimization problem to maximize an ecological benefit within constraints imposed by human needs and the physical layout of the system. This paper presents a novel optimization framework which uses ant colony optimization to enable optimal scheduling of EFMAs, given constraints on the environmental water that is available. This optimization algorithm is selected because, unlike other currently popular algorithms, it is able to account for all aspects of the problem. The approach is validated by comparing it to a heuristic approach, and its utility is demonstrated using a case study based on the Murray River in South Australia to investigate (1) the trade-off between plant recruitment (i.e., promoting germination) and maintenance (i.e., maintaining habitat) flow requirements, (2) the trade-off between flora and fauna flow requirements, and (3) a hydrograph inversion case. The results demonstrate the usefulness and flexibility of the proposed framework as it is able to determine EFMA schedules that provide optimal or near-optimal trade-offs between the competing needs of species under a range of operating conditions and valuable insight for managers.

  2. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

    In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of exact schedule synthesis and compare it with approximate, close-to-optimal solutions. We also explain how the algorithm works for an example of two unrelated parallel machines and five jobs with release dates. Performance results that show the efficiency of the proposed approach are given.

  3. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  4. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.

    2012-01-01

    We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system that is much smaller, in search-space size, when compared to the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so that problems with a larger number of constraints and larger networks can be easily adapted and solved.
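
    A toy version of the two-stage pipeline is sketched below with invented problem data; the penalty term sin²(πx) vanishes exactly at integer schedules, mimicking the idea of relaxing an integer program into a continuous search, which a bare-bones PSO explores before SQP polishing.

    ```python
    # Two-stage optimization: penalty-relaxed integers, PSO seed, SQP polish.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    cost = np.array([3.0, 1.5, 2.5, 4.0])          # per-slot link cost (assumed)

    def objective(x, mu=5.0):
        quality = -4.0 * np.sum(np.sqrt(x + 1e-9)) # diminishing returns per slot
        integrality = np.sum(np.sin(np.pi * x) ** 2)  # 0 iff x is integral
        return cost @ x + quality + mu * integrality

    # Stage 1: bare-bones PSO over the box [0, 4]^4.
    pos = rng.uniform(0, 4, (20, 4)); vel = np.zeros_like(pos)
    pbest = pos.copy(); pval = np.array([objective(p) for p in pos])
    for _ in range(200):
        g = pbest[pval.argmin()]                   # global best particle
        vel = 0.7 * vel + 1.5 * rng.random((20, 4)) * (pbest - pos) \
                        + 1.5 * rng.random((20, 4)) * (g - pos)
        pos = np.clip(pos + vel, 0, 4)
        val = np.array([objective(p) for p in pos])
        improved = val < pval
        pbest[improved], pval[improved] = pos[improved], val[improved]

    # Stage 2: SQP from the PSO solution.
    res = minimize(objective, pbest[pval.argmin()], method="SLSQP",
                   bounds=[(0, 4)] * 4)
    print(np.round(res.x), res.fun)                # near-integer schedule
    ```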

  5. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  6. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
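
    The Monte Carlo side of such an analysis can be sketched as follows; the response-surface coefficients and random-variable distributions below are illustrative only, not the report's values.

    ```python
    # Monte Carlo reliability estimate via a fitted response surface.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(4)
    n = 200_000
    E1 = rng.normal(20.0e6, 1.0e6, n)      # fiber-direction modulus, psi
    t = rng.normal(0.08, 0.004, n)         # wall thickness, in
    P = rng.normal(50_000.0, 7_500.0, n)   # applied axial load, lb

    def buckling_load(E1, t):
        """Stand-in quadratic response surface fitted offline (made-up)."""
        return 0.55 * E1 * t ** 2

    g = buckling_load(E1, t) - P           # limit state: g < 0 means failure
    pf = np.mean(g < 0.0)
    beta = -norm.ppf(pf)                   # reliability index implied by pf
    print(f"pf = {pf:.4f}, beta = {beta:.2f}")
    ```

    Re-running such a sketch with perturbed means or standard deviations of each variable gives finite-difference sensitivities of the reliability index, which is the kind of comparison the report performs with FORM.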

  7. An integrated modeling framework for real-time irrigation scheduling: the benefit of spectroscopy and weather forecasts

    NASA Astrophysics Data System (ADS)

    Brook, Anna; Polinova, Maria; Housh, Mashor

    2016-04-01

    Agriculture and agricultural landscapes are increasingly under pressure to meet the demands of a constantly increasing human population and globally changing food patterns. At the same time, there is rising concern that climate change will harm agriculture and food security in many regions of the world (Nelson et al., 2009). Facing these threats, the majority of Mediterranean countries have chosen irrigated agriculture. For crop plants, water is one of the most important inputs: it drives crop growth and production and ensures the efficiency of other inputs (e.g., seeds, fertilizers, and pesticides), but its use competes with other local sectors (e.g., industry and urban use). Thus, well-timed availability of water is vital to agriculture for ensured yields. The increasing demand for irrigation has necessitated optimal irrigation scheduling techniques that coordinate the timing and amount of irrigation to optimally manage water use in agricultural systems. The irrigation scheduling problem can be challenging, as farmers must deal with the conflicting objectives of maximizing yield while minimizing irrigation water use. A further challenge is the uncertainty in the plant growth process during the growing season, most notably climatic factors such as evapotranspiration and rainfall; these uncertain factors add a third objective from the farmer's perspective, namely, minimizing the associated risk. Nevertheless, advances in weather forecasting have reduced the uncertainty associated with future climatic data, so climatic forecasts can reliably guide optimal irrigation schedules when coupled with stochastic optimization models (Housh et al., 2012). Many studies have concluded that optimal irrigation decisions can provide substantial economic value over conventional irrigation decisions (Wang and Cai, 2009), but these studies have incorporated only short-term (weekly) forecasts, missing the potential benefit of mid-term (seasonal) climate forecasts. The latest progress in data acquisition technologies (mainly Earth observation by remote sensing and imaging spectroscopy systems), together with state-of-the-art achievements in geographical information systems (GIS), computer science, and climate and climate impact modelling, enables the development of both integrated modelling and realistic spatial simulations.

    The present method uses field spectroscopy technology to keep the field under constant monitoring. The majority of previously developed decision support systems use satellite remote sensing data, which provide very limited capabilities (conventional and basic parameters). The alternative, hyperspectral airborne or ground-based imagery, provides an exhaustive description of the field but is known to be costly and complex. We therefore present a low-cost imaging spectroscopy technology, supported by detailed, fine-resolution field spectroscopy, as a cost-effective option for near-field real-time monitoring. To solve the soil water balance and to predict irrigation water volumes, a pedological survey is carried out in the evaluation study areas. Remote sensing and field spectroscopy are applied to integrate continuous feedback from the field (e.g., soil moisture, organic/inorganic carbon, nitrogen, salinity, fertilizers, sulphur, texture; crop water stress, plant stage, LAI, chlorophyll, biomass, and yield prediction applying PROSPECT+SAIL; fraction of absorbed photosynthetically active radiation, FAPAR), estimated from remote sensing information to minimize the errors associated with the crop simulation process. A stochastic optimization model is formulated that takes into account both mid-term seasonal probabilistic climate predictions and short-term weekly forecasts. To optimize water resource use, the irrigation schedule is defined using a simulation model of the soil-plant-atmosphere system (e.g., the SWAP model; Van Dam et al., 2008). This tool is necessary to: (i) take into account soil spatial variability; (ii) predict the system behaviour under the forecasted climate; and (iii) define the optimized irrigation water volumes. Building on this knowledge in the three domains of optimization under uncertainty, spectroscopy/remote sensing, and climate forecasting, an integrated framework for deriving optimal irrigation decisions is presented.

    References: Nelson, G. C., et al. (2009). Climate change: Impact on agriculture and costs of adaptation. Vol. 21. International Food Policy Research Institute. Housh, M., Ostfeld, A., and Shamir, U. (2012). Seasonal multi-year optimal management of quantities and salinities in regional water supply systems. Environmental Modelling & Software, 37, 55-67. Wang, D., and Cai, X. (2009). Irrigation scheduling - Role of weather forecasting and farmers' behavior. Journal of Water Resources Planning and Management, 135(5), 364-372. Van Dam, J. C., et al. (2008). SWAP version 3.2: Theory description and user manual. No. 1649. Wageningen, The Netherlands: Alterra.

  8. Big Software for SmallSats: Adapting cFS to CubeSat Missions

    NASA Technical Reports Server (NTRS)

    Cudmore, Alan P.; Crum, Gary Alex; Sheikh, Salman; Marshall, James

    2015-01-01

    Expanding capabilities and mission objectives for SmallSats and CubeSats are driving the need for reliable, reusable, and robust flight software. While missions are becoming more complicated and the scientific goals more ambitious, the level of acceptable risk has decreased. Design challenges are further compounded by budget and schedule constraints that have not kept pace. NASA's Core Flight Software System (cFS) is an open source solution which enables teams to build flagship-satellite-level flight software within a CubeSat schedule and budget. NASA originally developed cFS to reduce mission and schedule risk for flagship satellite missions by increasing code reuse and reliability. The Lunar Reconnaissance Orbiter, which launched in 2009, was the first of a growing list of Class B rated missions to use cFS.

  9. The Joint Confidence Level Paradox: A History of Denial

    NASA Technical Reports Server (NTRS)

    Butts, Glenn; Linton, Kent

    2009-01-01

    This paper is intended to provide a reliable methodology for those tasked with generating price tags on construction (CoF) and research and development (R&D) activities in the NASA performance world. This document consists of a collection of cost-related engineering detail and project fulfillment information from early agency days to the present. Accurate historical detail is the first place to start when determining improved methodologies for future cost and schedule estimating. This paper contains a proposed cost estimating method for arriving at more reliable numbers for future submissions. When comparing current cost and schedule methods with earlier cost and schedule approaches, it became apparent that NASA's organizational performance paradigm has morphed: mission fulfillment speed has slowed and cost calculating factors have increased in 21st-century space exploration.

  10. Life cycle environmental implications of residential swimming pools.

    PubMed

    Forrest, Nigel; Williams, Eric

    2010-07-15

    Ownership of private swimming pools in the U.S. grew 2 to 4% per annum from 1997 to 2007. The environmental implications of pool ownership are analyzed by hybrid life cycle assessment (LCA) for nine U.S. cities. An operational model is constructed estimating consumption of chemicals, water, and energy for a typical residential pool. The model incorporates geographical climatic variations and upstream water and energy use from electricity and water supply networks. Results vary considerably by city: a factor of 5-6 for both water and energy use. Water use is driven by aridness and length of the swimming season, while energy use is mainly driven by length of the swimming season. Water and energy impacts of pools are significant, particularly in arid climates. In Phoenix, for example, pools account for 22% and 13% of a household's electricity and water use, respectively. Measures to reduce water and energy use in pools such as optimizing the pump schedule and covering the pool in winter can realize greater savings than many common household efficiency improvements. Private versus community pools are also compared. Community pools in Phoenix use 60% less swimming pool water and energy per household than subdivisions without community pools.

  11. Evolutionarily stable learning schedules and cumulative culture in discrete generation models.

    PubMed

    Aoki, Kenichi; Wakano, Joe Yuichiro; Lehmann, Laurent

    2012-06-01

    Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.

  12. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known in the job-shop manufacturing environment. It intends to evaluate and compare the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random-keys representation, and a random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB software. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
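
    As an illustration of the random-key idea described above (a sketch of one common decoding convention, not the authors' MATLAB implementation), a continuous particle position can be turned into a valid job-repetition operation sequence by sorting its keys:

```python
import numpy as np

def decode_random_keys(position, n_jobs, n_machines):
    """Sort the continuous keys; each sorted index, taken modulo n_jobs,
    names a job, so every job appears exactly n_machines times and the
    list reads as the order in which successive operations are scheduled."""
    order = np.argsort(position)          # smallest key is scheduled first
    return [int(i % n_jobs) for i in order]

# 3 jobs x 2 machines: a particle of 6 keys decodes to a valid sequence
rng = np.random.default_rng(1)
print(decode_random_keys(rng.random(6), n_jobs=3, n_machines=2))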

  13. Towards Optimization of ACRT Schedules Applied to the Gradient Freeze Growth of Cadmium Zinc Telluride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divecha, Mia S.; Derby, Jeffrey J.

    Historically, the melt growth of II-VI crystals has benefitted by the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The “flow maximizing” ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. Furthermore, these counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  14. Towards Optimization of ACRT Schedules Applied to the Gradient Freeze Growth of Cadmium Zinc Telluride

    DOE PAGES

    Divecha, Mia S.; Derby, Jeffrey J.

    2017-10-03

    Historically, the melt growth of II-VI crystals has benefitted by the application of the accelerated crucible rotation technique (ACRT). Here, we employ a comprehensive numerical model to assess the impact of two ACRT schedules designed for a cadmium zinc telluride growth system per the classical recommendations of Capper and co-workers. The “flow maximizing” ACRT schedule, with higher rotation, effectively mixes the solutal field in the melt but does not reduce supercooling adjacent to the growth interface. The ACRT schedule derived for stable Ekman flow, with lower rotation, proves more effective in reducing supercooling and promoting stable growth. Furthermore, these counterintuitive results highlight the need for more comprehensive studies on the optimization of ACRT schedules for specific growth systems and for desired growth outcomes.

  15. Bi-Objective Modelling for Hazardous Materials Road–Rail Multimodal Routing Problem with Railway Schedule-Based Space–Time Constraints

    PubMed Central

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-01-01

    The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environment security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes are set as the optimization objectives. Meanwhile, the following formulation characteristics will be comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294

  16. Scheduling and calibration strategy for continuous radio monitoring of 1700 sources every three days

    NASA Astrophysics Data System (ADS)

    Max-Moerbeck, Walter

    2014-08-01

    The Owens Valley Radio Observatory 40 meter telescope is currently monitoring a sample of about 1700 blazars every three days at 15 GHz, with the main scientific goal of determining the relation between the variability of blazars at radio and gamma-ray energies as observed with the Fermi Gamma-ray Space Telescope. The time domain relation between radio and gamma-ray emission, in particular its correlation and time lag, can help us determine the location of the high-energy emission site in blazars, a current open question in blazar research. To achieve this goal, continuous observation of a large sample of blazars on a time scale of less than a week is indispensable. Since we only look at bright targets, the time available for target observations is mostly limited by source observability, calibration requirements and slewing of the telescope. Here I describe the implementation of a practical solution to this scheduling, calibration, and slewing time minimization problem. This solution combines ideas from optimization, in particular the traveling salesman problem, with astronomical and instrumental constraints. A heuristic solution, using well-established optimization techniques and astronomical insights particular to this situation, allows us to observe all the sources at the required three-day cadence while obtaining reliable calibration of the radio flux densities. Problems of this nature will only become more common in the future, and the ideas presented here can be relevant for other observing programs.
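
    A minimal sketch of the slew-minimization flavor of this problem (not the observatory's actual scheduler; the cost matrix and greedy rule below are illustrative) is a nearest-neighbour tour over the sources, the standard entry-level heuristic for traveling-salesman-like orderings:

```python
import numpy as np

def nearest_neighbour_order(slew_cost, start=0):
    """Visit every source once, always moving to the cheapest unvisited
    source; slew_cost[i, j] is the slew time from source i to source j."""
    n = slew_cost.shape[0]
    unvisited, tour = set(range(n)) - {start}, [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: slew_cost[tour[-1], j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

rng = np.random.default_rng(2)
pts = rng.random((8, 2))                        # mock source positions
cost = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(nearest_neighbour_order(cost))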

  17. Supervision strategies for improved reliability of bus routes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1991-09-01

    The synthesis will be of interest to transit agency managers and supervisors, as well as to operating and planning personnel who are concerned with the reliability and scheduling of buses. Information is provided on service monitoring, service supervision and control, and supervision strategies. Reliability of transit service is critical to bus transit ridership. The extent of service supervision has an important bearing on reliability. The report describes the various procedures that are used by transit agencies to monitor and maintain bus service reliability. Most transit systems conduct checks of the number of riders at maximum load points and monitor schedule adherence at these locations. Other supervisory actions include service restoration techniques, and strategies such as schedule control, headway control, load control, extraboard management, and personnel selection and training. More sophisticated technologies, such as automatic passenger counting (APC) systems and automatic vehicle location and control (AVLC), have been employed by some transit agencies and are described in the synthesis.

  18. Imaging Tasks Scheduling for High-Altitude Airship in Emergency Condition Based on Energy-Aware Strategy

    PubMed Central

    Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma

    2013-01-01

    Aiming at the imaging task scheduling problem for a high-altitude airship in emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as the two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems, namely task ranking, valuable-task detection, and energy-conservation optimization. Then, algorithms are designed for the subproblems, and their solutions correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper introduces in detail the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising activities. Finally, the application results and comparative analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822

  19. Optimization of Catheter Based rtPA Thrombolysis in a Novel In Vitro Clot Model for Intracerebral Hemorrhage

    PubMed Central

    Masomi-Bornwasser, Julia; Müller-Werkmeister, Hendrik; Kantelhardt, Sven Rainer; König, Jochem; Kempski, Oliver; Giese, Alf

    2017-01-01

    Hematoma lysis with recombinant tissue plasminogen activator (rtPA) has emerged as an alternative therapy for spontaneous intracerebral hemorrhage (ICH). Optimal dose and schedule are still unclear. The aim of this study was to create a reliable in vitro blood clot model for investigation of optimal drug dose and timing. An in vitro clot model was established, using 25 mL and 50 mL of human blood. Catheters were placed into the clots and three groups, using intraclot application of rtPA, placebo, and catheter alone, were analyzed. Dose-response relationship, repetition, and duration of rtPA treatment and its effectiveness in aged clots were investigated. A significant relative end weight difference was found in rtPA treated clots compared to catheter alone (p = 0.002) and placebo treated clots (p < 0.001). Dose-response analysis revealed 95% effective dose around 1 mg rtPA in 25 and 50 mL clots. Approximately 80% of relative clot lysis could be achieved after 15 min incubation. Lysis of aged clots was less effective. A new clot model for in vitro investigation was established. Our data suggest that current protocols for rtPA based ICH therapy may be optimized by using less rtPA at shorter incubation times. PMID:28459065

  20. Optimization of Catheter Based rtPA Thrombolysis in a Novel In Vitro Clot Model for Intracerebral Hemorrhage.

    PubMed

    Keric, Naureen; Masomi-Bornwasser, Julia; Müller-Werkmeister, Hendrik; Kantelhardt, Sven Rainer; König, Jochem; Kempski, Oliver; Giese, Alf

    2017-01-01

    Hematoma lysis with recombinant tissue plasminogen activator (rtPA) has emerged as an alternative therapy for spontaneous intracerebral hemorrhage (ICH). Optimal dose and schedule are still unclear. The aim of this study was to create a reliable in vitro blood clot model for investigation of optimal drug dose and timing. An in vitro clot model was established, using 25 mL and 50 mL of human blood. Catheters were placed into the clots and three groups, using intraclot application of rtPA, placebo, and catheter alone, were analyzed. Dose-response relationship, repetition, and duration of rtPA treatment and its effectiveness in aged clots were investigated. A significant relative end weight difference was found in rtPA treated clots compared to catheter alone (p = 0.002) and placebo treated clots (p < 0.001). Dose-response analysis revealed 95% effective dose around 1 mg rtPA in 25 and 50 mL clots. Approximately 80% of relative clot lysis could be achieved after 15 min incubation. Lysis of aged clots was less effective. A new clot model for in vitro investigation was established. Our data suggest that current protocols for rtPA based ICH therapy may be optimized by using less rtPA at shorter incubation times.

  1. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics of a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method contributes to the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and to the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
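
    For orientation, the plain Monte Carlo baseline that such reliability-based methods are designed to outperform can be sketched as follows (the constraint function and uncertainty model here are hypothetical toys, far simpler than the paper's entry dynamics):

```python
import numpy as np

def constraint_reliability(g, design, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(g(design, xi) <= 0), i.e. the probability
    that an uncertain trajectory constraint is satisfied at a given design."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(size=n_samples)      # standardized uncertain parameter
    return float(np.mean(g(design, xi) <= 0.0))

# toy heat-load-style constraint: the margin shrinks as dispersion grows
g = lambda d, xi: (1.0 + 0.3 * xi) * d - 1.2
print(constraint_reliability(g, design=1.0))   # ~ Phi(2/3) ~ 0.75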

  2. Short forms of the Schedule for Nonadaptive and Adaptive Personality (SNAP) for self- and collateral ratings: development, reliability, and validity.

    PubMed

    Harlan, E; Clark, L A

    1999-06-01

    Researchers and clinicians alike increasingly seek brief, reliable, and valid measures to obtain personality trait ratings from both selves and peers. We report the development of a paragraph-descriptor short form of a full-length personality assessment instrument, the Schedule for Nonadaptive and Adaptive Personality (SNAP) with both self- and other versions. Reliability and validity data were collected on a sample of 294 college students, from 90 of whom we also obtained parental ratings of their personality. Internal consistency reliability was good in both self- and parent data. The factorial structures of the self-report short and long forms were very similar. Convergence between parental ratings was moderately high. Self-parent convergence was variable, with lower agreement on scales assessing subjective distress than those assessing more observable behaviors; it also was stronger for higher order factors than for scales.

  3. Utilization Bound of Non-preemptive Fixed Priority Schedulers

    NASA Astrophysics Data System (ADS)

    Park, Moonju; Chae, Jinseok

    It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
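
    For reference, the classic Liu and Layland bound mentioned above (a sufficient utilization test for preemptive rate-monotonic scheduling, which is exactly why its direct reuse in the non-preemptive case is limited) is easy to state in code; the task set is a made-up example:

```python
def liu_layland_bound(n):
    """Sufficient utilization bound for n tasks under preemptive RM."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) pairs.
    True means the sufficient test passes; False means 'inconclusive',
    not 'unschedulable'."""
    u = sum(c / t for c, t in tasks)
    return u <= liu_layland_bound(len(tasks))

print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))   # U = 0.65 <= ~0.7798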

  4. Energy management and cooperation in microgrids

    NASA Astrophysics Data System (ADS)

    Rahbar, Katayoun

    Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time, so as to achieve reliable and cost-effective operation. The thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost the microgrid incurs from drawing conventional energy from the main grid, under both off-line and online setups, where the renewable energy generation and load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation and load demand. It is therefore more practically applicable than solutions based on conventional techniques such as dynamic programming and stochastic programming, which require prior knowledge of renewable energy generation and load demand realizations or distributions. Next, for a group of microgrids that cooperate in energy management, we study efficient methods for sharing energy among them in both fully and partially cooperative scenarios, where the microgrids are of common interest and self-interested, respectively. For fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, and a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation and load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management problem is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids are reduced simultaneously relative to the case without energy cooperation, while limited information is shared among the microgrids and the central controller.

  5. Optimal updating magnitude in adaptive flat-distribution sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Drake, Justin A.; Ma, Jianpeng; Pettitt, B. Montgomery

    2017-11-01

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
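
    A minimal sketch of the single-bin updating scheme with the inverse-time schedule discussed above (a toy flat sampler over discrete bins, not the paper's implementation; the flatness threshold and switch rule follow the commonly used Belardinelli-Pereyra 1/t recipe, and all numbers are illustrative):

```python
import numpy as np

def wang_landau_1t(n_bins=16, n_steps=200_000, seed=0):
    """Flat-distribution sampling over discrete bins: standard WL halving
    of the log update f until it crosses the 1/t envelope, after which f
    follows n_bins / t (time measured in single-bin moves)."""
    rng = np.random.default_rng(seed)
    lng = np.zeros(n_bins)          # running log density-of-states estimate
    hist = np.zeros(n_bins)
    f, switched = 1.0, False
    state = int(rng.integers(n_bins))
    for t in range(1, n_steps + 1):
        prop = int(rng.integers(n_bins))
        # accept with min(1, g[state]/g[prop]) so bin visits flatten out
        if np.log(rng.random()) < lng[state] - lng[prop]:
            state = prop
        lng[state] += f
        hist[state] += 1
        if not switched:
            if hist.min() > 0 and hist.min() > 0.8 * hist.mean():
                f /= 2.0            # WL stage: halve f on a flat histogram
                hist[:] = 0
                switched = f <= n_bins / t
        else:
            f = n_bins / t          # inverse-time stage: f decays as 1/t
    return lng - lng.min()          # ~flat here, since the true DOS is uniform

print(wang_landau_1t().round(2))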

  6. Optimal updating magnitude in adaptive flat-distribution sampling.

    PubMed

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.

  7. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    PubMed

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.

  8. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE PAGES

    Luo, Na; Hong, Tianzhen; Li, Hui; ...

    2017-07-25

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.
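
    The flavor of the charge/discharge scheduling subproblem can be sketched with SciPy's SLSQP solver (the same Sequential Quadratic Programming class of algorithm the study names); the load profile, tariff, tank size, and lossless-storage assumption below are all hypothetical simplifications:

```python
import numpy as np
from scipy.optimize import minimize

H = 24
t = np.arange(H)
load = 200.0 + 150.0 * np.sin(2 * np.pi * t / H)     # kW cooling load (mock)
price = np.where((t >= 12) & (t < 18), 0.30, 0.10)   # $/kWh time-of-use tariff
cap, rate = 600.0, 150.0                             # kWh tank, kW charge rate

def cost(u):
    # u[h] > 0 charges the ice tank, u[h] < 0 discharges; grid power is billed
    return float(price @ (load + u))

cons = [
    {"type": "ineq", "fun": lambda u: np.cumsum(u)},          # SOC >= 0
    {"type": "ineq", "fun": lambda u: cap - np.cumsum(u)},    # SOC <= capacity
    {"type": "ineq", "fun": lambda u: load + u},              # no back-feed
]
res = minimize(cost, np.zeros(H), method="SLSQP",
               bounds=[(-rate, rate)] * H, constraints=cons)
print(round(cost(np.zeros(H)), 2), "->", round(res.fun, 2))   # baseline vs. optimized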

  9. Data analytics and optimization of an ice-based energy storage system for commercial buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Na; Hong, Tianzhen; Li, Hui

    Ice-based thermal energy storage (TES) systems can shift peak cooling demand and reduce operational energy costs (with time-of-use rates) in commercial buildings. The accurate prediction of the cooling load, and the optimal control strategy for managing the charging and discharging of a TES system, are two critical elements to improving system performance and achieving energy cost savings. This study utilizes data-driven analytics and modeling to holistically understand the operation of an ice-based TES system in a shopping mall, calculating the system’s performance using actual measured data from installed meters and sensors. Results show that there is significant savings potential when the current operating strategy is improved by appropriately scheduling the operation of each piece of equipment of the TES system, as well as by determining the amount of charging and discharging for each day. A novel optimal control strategy, determined by an optimization algorithm of Sequential Quadratic Programming, was developed to minimize the TES system’s operating costs. Three heuristic strategies were also investigated for comparison with our proposed strategy, and the results demonstrate the superiority of our method to the heuristic strategies in terms of total energy cost savings. Specifically, the optimal strategy yields energy cost savings of up to 11.3% per day and 9.3% per month compared with current operational strategies. A one-day-ahead hourly load prediction was also developed using machine learning algorithms, which facilitates the adoption of the developed data analytics and optimization of the control strategy in a real TES system operation.

  10. Expert consensus document: Consensus statement on best practice management regarding the use of intravesical immunotherapy with BCG for bladder cancer.

    PubMed

    Kamat, Ashish M; Flaig, Thomas W; Grossman, H Barton; Konety, Badrinath; Lamm, Donald; O'Donnell, Michael A; Uchio, Edward; Efstathiou, Jason A; Taylor, John A

    2015-04-01

    Multiple clinical trials have demonstrated that intravesical Bacillus Calmette-Guérin (BCG) treatment reduces recurrences and progression in patients with non-muscle-invasive bladder cancer (NMIBC). However, although BCG has been in use for almost 40 years, this agent is often underutilized and practice patterns of administration vary. This neglect is most likely caused by uncertainties about the optimal use of BCG, including unawareness of optimal treatment schedules and of the patient populations that benefit most from BCG treatment. To address this deficit, a focus group of specialized urologic oncologists (urologists, medical oncologists and radiation oncologists) reviewed the current guidelines and clinical evidence, discussed their experiences and formed a consensus regarding the optimal use of BCG in the management of patients with NMIBC. The experts concluded that continuing therapy with 3-week BCG maintenance is superior to induction treatment only and is the single most important factor in improving outcomes in patients with NMIBC. They also concluded that a reliable alternative to radical cystectomy in truly BCG-refractory disease remains the subject of clinical trials. In addition, definitions for common terms of BCG failure, such as BCG-refractory and BCG-intolerant, have been formulated.

  11. Application of multiobjective optimization to scheduling capacity expansion of urban water resource systems

    NASA Astrophysics Data System (ADS)

    Mortazavi-Naeini, Mohammad; Kuczera, George; Cui, Lijie

    2014-06-01

    Significant population increase in urban areas is likely to result in a deterioration of drought security and level of service provided by urban water resource systems. One way to cope with this is to optimally schedule the expansion of system resources. However, the high capital costs and environmental impacts associated with expanding or building major water infrastructure warrant the investigation of scheduling system operational options such as reservoir operating rules, demand reduction policies, and drought contingency plans, as a way of delaying or avoiding the expansion of water supply infrastructure. Traditionally, minimizing cost has been considered the primary objective in scheduling capacity expansion problems. In this paper, we consider some of the drawbacks of this approach. It is shown that there is no guarantee that the social burden of coping with drought emergencies is shared equitably across planning stages. In addition, it is shown that previous approaches do not adequately exploit the benefits of joint optimization of operational and infrastructure options and do not adequately address the need for the high level of drought security expected for urban systems. To address these shortcomings, a new multiobjective optimization approach to scheduling capacity expansion in an urban water resource system is presented and illustrated in a case study involving the bulk water supply system for Canberra. The results show that the multiobjective approach can address the temporal equity issue of sharing the burden of drought emergencies and that joint optimization of operational and infrastructure options can provide solutions superior to those just involving infrastructure options.

  12. Application of the Software as a Service Model to the Control of Complex Building Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Donadee, Jonathan; Marnay, Chris

    2011-03-17

    In an effort to create broad access to its optimization software, Lawrence Berkeley National Laboratory (LBNL), in collaboration with the University of California at Davis (UC Davis) and OSISoft, has recently developed a Software as a Service (SaaS) Model for reducing energy costs, cutting peak power demand, and reducing carbon emissions for multipurpose buildings. UC Davis currently collects and stores energy usage data from buildings on its campus. Researchers at LBNL sought to demonstrate that a SaaS application architecture could be built on top of this data system to optimize the scheduling of electricity and heat delivery in the building. The SaaS interface, known as WebOpt, consists of two major parts: a) the investment & planning module and b) the operations module, which builds on the investment & planning module. The operational scheduling and load shifting optimization models within the operations module use data from load prediction and electrical grid emissions models to create an optimal operating schedule for the next week, reducing peak electricity consumption while maintaining quality of energy services. LBNL's application also provides facility managers with suggested energy infrastructure investments for achieving their energy cost and emission goals based on historical data collected with OSISoft's system. This paper describes these models as well as the SaaS architecture employed by LBNL researchers to provide asset scheduling services to UC Davis. The peak demand, emissions, and cost implications of the asset operation schedule and investments suggested by this optimization model are analysed.

  13. Application of the Software as a Service Model to the Control of Complex Building Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Donadee, Jon; Marnay, Chris

    2011-03-18

    In an effort to create broad access to its optimization software, Lawrence Berkeley National Laboratory (LBNL), in collaboration with the University of California at Davis (UC Davis) and OSISoft, has recently developed a Software as a Service (SaaS) Model for reducing energy costs, cutting peak power demand, and reducing carbon emissions for multipurpose buildings. UC Davis currently collects and stores energy usage data from buildings on its campus. Researchers at LBNL sought to demonstrate that a SaaS application architecture could be built on top of this data system to optimize the scheduling of electricity and heat delivery in the building. The SaaS interface, known as WebOpt, consists of two major parts: a) the investment & planning module and b) the operations module, which builds on the investment & planning module. The operational scheduling and load shifting optimization models within the operations module use data from load prediction and electrical grid emissions models to create an optimal operating schedule for the next week, reducing peak electricity consumption while maintaining quality of energy services. LBNL's application also provides facility managers with suggested energy infrastructure investments for achieving their energy cost and emission goals based on historical data collected with OSISoft's system. This paper describes these models as well as the SaaS architecture employed by LBNL researchers to provide asset scheduling services to UC Davis. The peak demand, emissions, and cost implications of the asset operation schedule and investments suggested by this optimization model are analyzed.

  14. OMEGA FY13 HED requests - LANL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Workman, Jonathan B; Loomis, Eric N

    2012-06-25

    This is a summary of scientific work to be performed on the OMEGA laser system located at the Laboratory for Laser Energetics in Rochester, New York. The work is funded through the Science and ICF Campaigns and falls under the category of laser-driven High-Energy Density Physics experiments. This summary is presented to the Rochester scheduling committee on an annual basis for scheduling and planning purposes.

  15. Tactical Satellite (TacSat) Feasibility Study: A Scenario Driven Approach

    DTIC Science & Technology

    2006-09-01

    Mobile User Objective System; NAFCOM (NASA/Air Force Cost Model); NAVNETWARCOM (Naval Network Warfare Command); NGA (National Geospatial Intelligence ...) ... by providing frequent imagery updates as they search for disaster survivors and trek into regions where all terrain has been destroyed and altered ... Kwajalein Atoll; Wallops Island; NASA. Assets will be located adjacent to launch sites. 4) Launch schedule: the launch schedule will enable full ...

  16. Incentive-compatible guaranteed renewable health insurance premiums.

    PubMed

    Herring, Bradley; Pauly, Mark V

    2006-05-01

    Theoretical models of guaranteed renewable insurance display front-loaded premium schedules. Such schedules both cover the lifetime total claims of low-risk and high-risk individuals and provide an incentive for those who remain low-risk to continue to purchase the policy. Questions have been raised about whether actual individual insurance markets in the US approximate the behavior predicted by these models, both because young consumers may not be able to "afford" front-loading and because insurers may behave strategically in ways that erode the value of protection against risk reclassification. In this paper, the optimal competitive age-based premium schedule for a benchmark guaranteed renewable health insurance policy is estimated using medical expenditure data. Several factors are shown to reduce the amount of front-loading necessary. Indeed, the resulting optimal premium path increases with age. Actual premium paths exhibited by purchasers of individual insurance are close to the optimal renewable schedule we estimate. Finally, consumer utility associated with this feature is examined.

  17. Applications of colored petri net and genetic algorithms to cluster tool scheduling

    NASA Astrophysics Data System (ADS)

    Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng

    2005-12-01

    In this paper, we propose a method that uses Coloured Petri Nets (CPN) and a genetic algorithm (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of a cluster tool. The process of the cluster tool for producing a wafer can usually be classified into three types: 1) the sequential process, 2) the parallel process, and 3) the sequential-parallel process. However, these processes are not economical enough to produce a variety of wafers in small volumes. Therefore, this paper proposes the flexible process, in which the operations of fabricating wafers are arranged in arbitrary order to achieve the best utilization of the cluster tool. However, the flexible process may have deadlock and re-entrant problems, which can be detected by CPN. On the other hand, GAs have been applied to find the optimal schedule for many types of manufacturing processes. Therefore, we integrate CPN and GAs to obtain an optimal schedule under the deadlock and re-entrant problems for the flexible process of the cluster tool.

  18. Metroplex Optimization Model Expansion and Analysis: The Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM)

    NASA Technical Reports Server (NTRS)

    Sherry, Lance; Ferguson, John; Hoffman, Karla; Donohue, George; Beradino, Frank

    2012-01-01

    This report describes the Airline Fleet, Route, and Schedule Optimization Model (AFRS-OM), which is designed to provide insights into airline decision-making with regard to markets served, the schedule of flights on these markets, the type of aircraft assigned to each scheduled flight, load factors, airfares, and airline profits. The main inputs to the model are hedged fuel prices, airport capacity limits, and candidate markets. Embedded in the model are aircraft performance and associated cost factors, and willingness-to-pay (i.e. demand vs. airfare) curves. Case studies demonstrate the application of the model for analysis of the effects of increased capacity and changes in operating costs (e.g. fuel prices). Although there are differences between airports (due to differences in the magnitude of travel demand and sensitivity to airfare), the system is more sensitive to changes in fuel prices than to capacity. Further, the benefits of modernization in the form of increased capacity could be undermined by increases in hedged fuel prices.

  19. Reliable fuzzy H∞ control for active suspension of in-wheel motor driven electric vehicles with dynamic damping

    NASA Astrophysics Data System (ADS)

    Shao, Xinxin; Naghdy, Fazel; Du, Haiping

    2017-03-01

    A fault-tolerant fuzzy H∞ control design approach for active suspension of in-wheel motor driven electric vehicles in the presence of sprung mass variation, actuator faults and control input constraints is proposed. The controller is designed based on the quarter-car active suspension model with a dynamic-damping-in-wheel-motor-driven-system, in which the suspended motor is operated as a dynamic absorber. The Takagi-Sugeno (T-S) fuzzy model is used to model this suspension with possible sprung mass variation. The parallel-distributed compensation (PDC) scheme is deployed to derive a fault-tolerant fuzzy controller for the T-S fuzzy suspension model. In order to reduce the motor wear caused by the dynamic force transmitted to the in-wheel motor, the dynamic force is taken as an additional controlled output besides the traditional optimization objectives such as sprung mass acceleration, suspension deflection and actuator saturation. The H∞ performance of the proposed controller is derived as linear matrix inequalities (LMIs) comprising three equality constraints which are solved efficiently by means of MATLAB LMI Toolbox. The proposed controller is applied to an electric vehicle suspension and its effectiveness is demonstrated through computer simulation.

  20. Within-trial contrast: when you see it and when you don't.

    PubMed

    Zentall, Thomas R

    2008-02-01

    Within-trial contrast occurs when a discriminative stimulus that is preceded by a relatively aversive event is preferred over another that is preceded by a less aversive event. Recent failures to replicate (Arantes & Grace, 2008; Vasconcelos, Urcuioli, & Lionello-DeNolf, 2007) may allow us to identify the factors responsible. In the case of Vasconcelos et al., it is likely that insufficient training was provided (often 35-65 sessions are required). In the case of Arantes and Grace (Experiment 2), the pigeons had been involved in prior experiments involving lean schedules of reinforcement, and we find that prior experience with lean (relatively aversive) schedules appears to reduce the presumed aversiveness of the many-peck requirement, thus obviating the contrast effect. Finally, in the case of Vasconcelos and Urcuioli (2008), although the contrast effect with a simultaneous discrimination was not reliable, it was not reliably smaller than with a successive discrimination that did show a reliable effect, and the contrast effect was also similar in magnitude to a reliable effect reported by Kacelnik and Marsh (2002). Thus, although there have been several failures to replicate the original effects reported by Clement, Feltus, Kaiser, and Zentall (2000), insufficient training, prior history with lean schedules of reinforcement, and low statistical power may have been responsible for those failures.

  1. VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1

    DTIC Science & Technology

    1987-11-01

    synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73] ... schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability ... Guide. Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time ...

  2. Strategies GeoCape Intelligent Observation Studies @ GSFC

    NASA Technical Reports Server (NTRS)

    Cappelaere, Pat; Frye, Stu; Moe, Karen; Mandl, Dan; LeMoigne, Jacqueline; Flatley, Tom; Geist, Alessandro

    2015-01-01

    This presentation provides a summary of the tradeoff studies conducted for GeoCape by the GSFC team on how to optimize GeoCape observation efficiency. Tradeoffs include total ground scheduling with simple priorities, ground scheduling with cloud forecast, ground scheduling with sub-area forecast, onboard scheduling with onboard cloud detection, and smart onboard scheduling and onboard image processing. The tradeoffs considered optimizing cost, downlink bandwidth, and the total number of images acquired.

  3. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.

  4. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.

  5. Smart Irrigation From Soil Moisture Forecast Using Satellite And Hydro -Meteorological Modelling

    NASA Astrophysics Data System (ADS)

    Corbari, Chiara; Mancini, Marco; Ravazzani, Giovanni; Ceppi, Alessandro; Salerno, Raffaele; Sobrino, Josè

    2017-04-01

    Increased water demand and climate change impacts have recently enhanced the need to improve water resources management, even in areas which traditionally have an abundant supply of water. The highest consumption of water is devoted to irrigation for agricultural production, and so it is in this area that efforts have to be focused to study possible interventions. The SIM project, funded by the EU in the framework of the WaterWorks2014 - Water Joint Programming Initiative, aims at developing an operational tool for real-time forecasting of crop irrigation water requirements, to support parsimonious water management and to optimize irrigation scheduling by providing real-time and forecasted soil moisture behaviour at high spatial and temporal resolutions, with forecast horizons from a few days up to thirty days. This study discusses advances in coupling a satellite-driven soil water balance model with meteorological forecasts as support for precision irrigation, comparing case studies in Italy, the Netherlands, China and Spain, characterized by different climatic conditions, water availability, crop types, irrigation techniques and water distribution rules. Herein, we present the applications in two operational farms producing vegetables in the South of Italy, where semi-arid climatic conditions hold, and in two maize fields in Northern Italy, a more water-rich environment with flood irrigation. The system combines state-of-the-art mathematical models and new technologies for environmental monitoring, merging ground-observed data with Earth observations. The methodological approach is discussed by comparing, for a reanalysis period, the forecast system outputs with observed soil moisture and crop water needs, proving the reliability of the forecasting system and its benefits. The real-time visualization of the implemented system is also presented through web dashboards.

  6. Mesopic and Photopic Rod and Cone Photoreceptor-Driven Visual Processes in Mice With Long-Wavelength-Shifted Cone Pigments.

    PubMed

    Tsai, Tina I; Joachimsthaler, Anneka; Kremers, Jan

    2017-10-01

    The clearer divergence in spectral sensitivity between native rod and human L-cone (L*-cone) opsins in the transgenic Opn1lwLIAIS mouse (LIAIS) allows normal visual processes mediated by these photoreceptor subtypes to be isolated effectively using the silent substitution technique. The objective of this study was to further characterize the influence of mean luminance and temporal frequency on the functional properties of signals originating in each photoreceptor separately and independently of adaptation state in LIAIS mice. Electroretinographic (ERG) recordings to sine-wave rod and L*-cone modulation at different mean luminances (0.1-130.0 cd/m2) and temporal frequencies (6-26 Hz) were examined in anesthetized LIAIS (N = 17) and C57Bl/6 mice (N = 8). We report maximum rod-driven response with 8-Hz modulation at 0.1 to 0.5 cd/m2, which was almost four times larger than maximum cone-driven response at 8 Hz, 21.5 to 130 cd/m2. Over these optimal luminances, both rod- and cone-driven response amplitudes exhibited low-pass functions with similar frequency resolution limits, albeit their distinct luminance sensitivities. There were, however, two distinguishing features: (1) the frequency-dependent amplitude decrease of rod-driven responses was more profound, and (2) linear relationships describing rod-driven response phases as a function of stimulus frequency were steeper. Employing the silent substitution method with stimuli of appropriate luminance on the LIAIS mouse (as on human observers) increases the specificity, robustness, and scope to which photoreceptor-driven responses can be reliably assayed compared to the standard photoreceptor isolation methods.

  7. Dose Schedule Optimization and the Pharmacokinetic Driver of Neutropenia

    PubMed Central

    Patel, Mayankbhai; Palani, Santhosh; Chakravarty, Arijit; Yang, Johnny; Shyu, Wen Chyi; Mettetal, Jerome T.

    2014-01-01

    Toxicity often limits the utility of oncology drugs, and optimization of dose schedule represents one option for mitigation of this toxicity. Here we explore the schedule-dependency of neutropenia, a common dose-limiting toxicity. To this end, we analyze previously published mathematical models of neutropenia to identify a pharmacokinetic (PK) predictor of the neutrophil nadir, and confirm this PK predictor in an in vivo experimental system. Specifically, we find total AUC and Cmax are poor predictors of the neutrophil nadir, while a PK measure based on the moving average of the drug concentration correlates highly with neutropenia. Further, we confirm this PK parameter for its ability to predict neutropenia in vivo following treatment with different doses and schedules. This work represents an attempt at mechanistically deriving a fundamental understanding of the underlying pharmacokinetic drivers of neutropenia, and provides insights that can be leveraged in a translational setting during schedule selection. PMID:25360756
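
    The kind of PK summary described, a moving average of the drug concentration profile, is simple to compute; the one-compartment profile and window length below are hypothetical stand-ins for the paper's models:

```python
import numpy as np

def max_moving_average(t, conc, window):
    """Peak of the trailing moving average of drug concentration over a
    fixed window, the class of metric contrasted with total AUC and Cmax."""
    dt = t[1] - t[0]
    k = max(int(round(window / dt)), 1)
    return float(np.convolve(conc, np.ones(k) / k, mode="valid").max())

# one-compartment model with first-order absorption (illustrative numbers)
t = np.linspace(0.0, 72.0, 721)                    # hours
ka, ke, dose = 1.0, 0.1, 100.0
conc = dose * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))
print(max_moving_average(t, conc, window=24.0))    # compare with conc.max()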

  8. Optimizing Chemotherapy Dose and Schedule by Norton-Simon Mathematical Modeling

    PubMed Central

    Traina, Tiffany A.; Dugan, Ute; Higgins, Brian; Kolinsky, Kenneth; Theodoulou, Maria; Hudis, Clifford A.; Norton, Larry

    2011-01-01

    Background To hasten and improve anticancer drug development, we created a novel approach to generating and analyzing preclinical dose-scheduling data so as to optimize benefit-to-toxicity ratios. Methods We applied mathematical methods based upon Norton-Simon growth kinetic modeling to tumor-volume data from breast cancer xenografts treated with capecitabine (Xeloda®, Roche) at the conventional schedule of 14 days of treatment followed by a 7-day rest (14 - 7). Results The model predicted that 7 days of treatment followed by a 7-day rest (7 - 7) would be superior. Subsequent preclinical studies demonstrated that this biweekly capecitabine schedule allowed for safe delivery of higher daily doses, improved tumor response, and prolonged animal survival. Conclusions We demonstrated that the application of Norton-Simon modeling to the design and analysis of preclinical data predicts an improved capecitabine dosing schedule in xenograft models. This method warrants further investigation and application in clinical drug development. PMID:20519801
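
    A toy version of the growth-kinetic reasoning can make the schedule comparison tangible (Gompertzian growth with a Norton-Simon-style kill term proportional to the unperturbed growth rate; the rates, the treatment effect sizes, and the assumption that the biweekly schedule tolerates a higher daily effect are all hypothetical):

```python
import numpy as np

def tumor_after_treatment(on, days=84, n0=1e8, k=1e12, r=0.1, effect=1.5):
    """Euler-integrate dN/dt = r N ln(K/N) (1 - effect * on(day)):
    cell kill scales with the tumor's unperturbed Gompertzian growth rate."""
    n, dt = n0, 0.05
    for step in range(int(days / dt)):
        on_now = on(int(step * dt))
        n += dt * r * n * np.log(k / n) * (1.0 - effect * on_now)
        n = max(n, 1.0)
    return n

conventional = lambda d: 1.0 if d % 21 < 14 else 0.0   # 14 days on / 7 off
biweekly = lambda d: 1.0 if d % 14 < 7 else 0.0        # 7 days on / 7 off
# assumed: the 7-7 schedule tolerates a higher daily dose (effect 2.2 vs 1.5)
print(tumor_after_treatment(conventional, effect=1.5),
      tumor_after_treatment(biweekly, effect=2.2))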

  9. A Three-Stage Enhanced Reactive Power and Voltage Optimization Method for High Penetration of Solar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Huang, Renke; Vallem, Mallikarjuna R.

    This paper presents a three-stage enhanced volt/var optimization method to stabilize voltage fluctuations in transmission networks by optimizing the usage of reactive power control devices. In contrast with existing volt/var optimization algorithms, the proposed method optimizes the voltage profiles of the system, while keeping the voltage and real power output of the generators as close to the original scheduling values as possible. This allows the method to accommodate realistic power system operation and market scenarios, in which the original generation dispatch schedule will not be affected. The proposed method was tested and validated on a modified IEEE 118-bus system with photovoltaic data.

  10. EOS Operations Systems: EDOS Implemented Changes to Reduce Operations Costs

    NASA Technical Reports Server (NTRS)

    Cordier, Guy R.; Gomez-Rosa, Carlos; McLemore, Bruce D.

    2007-01-01

    The authors describe in this paper the progress achieved to date with the reengineering of the Earth Observing System (EOS) Data and Operations System (EDOS), the experience gained in the process, and the ensuing reduction of ground systems operations costs. The reengineering effort included a major methodology change: applying a data-driven system approach to an existing schedule-driven system.

  11. System and method for optimal load and source scheduling in context aware homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shetty, Pradeep; Foslien Graber, Wendy; Mangsuli, Purnaprajna R.

    A controller for controlling energy consumption in a home includes a constraints engine to define variables for multiple appliances in the home corresponding to various home modes and persona of an occupant of the home. A modeling engine models multiple paths of energy utilization of the multiple appliances to place the home into a desired state from a current context. An optimal scheduler receives the multiple paths of energy utilization and generates a schedule as a function of the multiple paths and a selected persona to place the home in a desired state.

  12. Scheduling algorithms for rapid imaging using agile Cubesat constellations

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Li, Alan S.; Merrick, James H.

    2018-02-01

    Distributed Space Missions, such as formation flight and constellations, are being recognized as important Earth Observation solutions to increase measurement samples over space and time. Cubesats are increasing in size (27U, ∼40 kg in development) with increasing capabilities to host imager payloads. Given the precise attitude control systems emerging in the commercial market, Cubesats now have the ability to slew and capture images on short notice. We propose a modular framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile Cubesats in a constellation such that they maximize the number of observed images and observation time, within the constraints of Cubesat hardware specifications. The attitude control strategy combines bang-bang and PD control, with constraints such as power consumption, response time, and stability factored into the optimality computations and a possible extension to PID control to account for disturbances. Schedule optimization is performed using dynamic programming with two levels of heuristics, verified and improved upon using mixed integer linear programming. The automated scheduler is expected to run on ground station resources and the resultant schedules uplinked to the satellites for execution; however, it can be adapted for onboard scheduling, contingent on Cubesat hardware and software upgrades. The framework is generalizable over small steerable spacecraft, sensor specifications, imaging objectives and regions of interest, and is demonstrated using multiple 20 kg satellites in Low Earth Orbit for two case studies - rapid imaging of Landsat's land and coastal images and extended imaging of global, warm water coral reefs. The proposed algorithm captures up to 161% more Landsat images than nadir-pointing sensors with the same field of view, on a 2-satellite constellation over a 12-h simulation. Integer programming was able to verify that the dynamic programming solution for single satellites was within 10% of optimal, and to find solutions up to 5% better. The optimality gap for constellations was found to be 22% at worst, but the dynamic programming schedules were found at nearly four orders of magnitude better computational speed than integer programming. The algorithm can include cloud cover predictions, ground downlink windows or any other spatial, temporal or angular constraints in the orbital module and be integrated into planning tools for agile constellations.
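
    The single-satellite core of such a scheduler can be read as a dynamic program over time-ordered imaging opportunities, where a transition is feasible only if the spacecraft can slew between targets in the time available. The sketch below is that recursion in its simplest form; the opportunity list, the slew_time callable, and the count-only objective are illustrative stand-ins for the paper's attitude-dependent formulation.

        def max_images(opportunities, slew_time):
            # opportunities: time-sorted list of (time_s, target_id) chances.
            # slew_time(a, b): minimum maneuver time between targets a and b
            # (a hypothetical callable standing in for the attitude model).
            # DP over "schedule ends at opportunity i", keeping slew feasibility.
            n = len(opportunities)
            best = [1] * n
            for i in range(n):
                t_i, tgt_i = opportunities[i]
                for j in range(i):
                    t_j, tgt_j = opportunities[j]
                    if t_j + slew_time(tgt_j, tgt_i) <= t_i:
                        best[i] = max(best[i], best[j] + 1)
            return max(best, default=0)

        opps = [(0, "A"), (30, "B"), (45, "C"), (90, "D")]
        print(max_images(opps, lambda a, b: 40.0))   # -> 3 (e.g. A -> C -> D)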

  13. Transmission overhaul estimates for partial and full replacement at repair

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1991-01-01

    Timely transmission overhauls increase in-flight service reliability beyond the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predictions of a transmission's maintenance needs at the design stage should enable the development of more cost-effective and reliable transmissions in the future. The frequency of overhaul is estimated, along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two-parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules which repair the transmission with a complete system replacement or repair only failed components of the transmission. An example illustrates the methods.
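
    If each critical component's life follows a two-parameter Weibull distribution, the transmission can be treated as a series system whose reliability is the product of the component reliabilities. The sketch below picks the longest overhaul interval that keeps that product above a target; the shape/scale values and the series-system reading are illustrative assumptions, not the paper's data.

        import math

        def overhaul_interval(components, target_r):
            # components: (beta, eta) Weibull shape/scale pairs in flight hours
            # (illustrative values). Series-system reliability:
            # R(t) = prod exp(-(t/eta)**beta) = exp(-sum((t/eta)**beta)).
            def reliability(t):
                return math.exp(-sum((t / eta) ** beta for beta, eta in components))
            lo, hi = 0.0, 1.0e6
            for _ in range(100):               # bisection for R(t) = target_r
                mid = 0.5 * (lo + hi)
                if reliability(mid) >= target_r:
                    lo = mid
                else:
                    hi = mid
            return lo

        # Three gears/bearings with illustrative Weibull parameters, 90% target:
        print(overhaul_interval([(2.5, 9000.0), (1.8, 15000.0), (3.0, 12000.0)], 0.90))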

  14. Large area space qualified thermoelectrically (TE) cooled HgCdTe MW photovoltaic detectors for the Halogen Occultation Experiment (HALOE)

    NASA Technical Reports Server (NTRS)

    Norton, P. W.; Zimmermann, P. H.; Briggs, R. J.; Hartle, N. M.

    1986-01-01

    Large-area, HgCdTe MW photovoltaic detectors have been developed for the NASA-HALOE instrument scheduled for operation on the Upper Atmospheric Research Satellite. The photodiodes will be TE-cooled and were designed to operate in the 5.1-5.4 micron band at 185 K to measure nitric oxide concentrations in the atmosphere. The active area required 15 micron thick devices and a full backside common contact. Reflections from the backside contact doubled the effective thickness of the detectors. Optical interference from reflections was eliminated with a dual layer front surface A/R coating. Bakeout reliability was optimized by having Au metallization for both n and p interconnects. Detailed performance data and a model for the optical stack are presented.

  15. Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.

    PubMed

    Westgard, James O; Bayat, Hassan; Westgard, Sten A

    2018-02-01

    To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
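
    The Sigma-metric that indexes the run-size nomogram is computed from the allowable total error, bias, and imprecision of the measurement procedure. A minimal sketch follows; turning sigma and run size into a specific bracketed QC schedule additionally requires Parvin's patient-risk model, which is not reproduced here.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # Sigma-metric of an analytic process: (TEa - |bias|) / CV, with
            # allowable total error TEa, observed bias, and imprecision CV,
            # all expressed in percent.
            return (tea_pct - abs(bias_pct)) / cv_pct

        # Illustrative values: TEa 10%, bias 1%, CV 1.5% -> 6-sigma performance,
        # a candidate for a "startup" QC event plus sparse "monitor" designs.
        print(sigma_metric(10.0, 1.0, 1.5))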

  16. Systematic Evaluation of Stochastic Methods in Power System Scheduling and Dispatch with Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yishen; Zhou, Zhi; Liu, Cong

    2016-08-01

    As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.

  17. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
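
    Sample entropy itself is computable in a few lines: it is the negative log of the conditional probability that sequences matching for m points also match for m + 1 points. The sketch below implements the standard definition (it is not the DASim code) and shows the regular-versus-irregular contrast the tuning exploits.

        import numpy as np

        def sample_entropy(series, m=2, r=0.2):
            # SampEn(m, r): -ln(A/B), where B counts pairs of length-m templates
            # and A pairs of length-(m+1) templates within tolerance r*std
            # (Chebyshev distance), self-matches excluded.
            x = np.asarray(series, dtype=float)
            tol = r * x.std()
            n = len(x)

            def pair_count(mm):
                templ = np.array([x[i:i + mm] for i in range(n - m)])
                count = 0
                for i in range(len(templ)):
                    dist = np.max(np.abs(templ - templ[i]), axis=1)
                    count += int(np.sum(dist <= tol)) - 1   # drop the self-match
                return count

            b, a = pair_count(m), pair_count(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else float("inf")

        rng = np.random.default_rng(0)
        print(sample_entropy(np.sin(np.arange(300) / 5.0)))    # regular: low
        print(sample_entropy(rng.standard_normal(300)))        # irregular: high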

  18. Integrating Reservations and Queuing in Remote Laboratory Scheduling

    ERIC Educational Resources Information Center

    Lowe, D.

    2013-01-01

    Remote laboratories (RLs) have become increasingly seen as a useful tool in supporting flexible shared access to scarce laboratory resources. An important element in supporting shared access is coordinating the scheduling of the laboratory usage. Optimized scheduling can significantly decrease access waiting times and improve the utilization level…

  19. Irrigation scheduling by ET and soil water sensing

    USDA-ARS?s Scientific Manuscript database

    Irrigation scheduling is the process of deciding when, where and how much to irrigate, usually with the goal of optimizing economic return on investment in land, equipment, inputs and personnel. This hour-long seminar presents methods of irrigation scheduling based, on the one hand on estimates of t...

  20. Scheduling work zones in multi-modal networks phase 1: scheduling work zones in transportation service networks.

    DOT National Transportation Integrated Search

    2016-06-01

    The purpose of this project is to study the optimal scheduling of work zones so that they have minimum negative impact (e.g., travel delay, gas consumption, accidents, etc.) on transport service vehicle flows. In this project, a mixed integer linear ...

  1. Naval Postgraduate School Scheduling Support System (NPS4)

    DTIC Science & Technology

    1992-03-01

    (No abstract available: the source record contains only table-of-contents fragments, referencing the NPSS final exam scheduler, class schedulers, presentation system, user interface, optimization process, and performance assessment.)

  2. Evolutionary Scheduler for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seungwon; Wang, Yeou-Fang; Zheng, Hua; Chau, Savio; Tung, Yu-Wen; Terrile, Richard J.; Hovden, Robert

    2010-01-01

    A computer program assists human schedulers in satisfying, to the maximum extent possible, competing demands from multiple spacecraft missions for utilization of the transmitting/receiving Earth stations of NASA's Deep Space Network. The program embodies a concept of optimal scheduling to attain multiple objectives in the presence of multiple constraints.

  3. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
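
    Since the method requires only the service-time mean and variance together with show probabilities, a candidate schedule's expected cost can be cross-checked by simulation with a Lindley-type waiting-time recursion. The sketch below is such a Monte-Carlo check, not the paper's analytical computation; the gamma service-time distribution and the unit costs are assumptions.

        import random

        def expected_schedule_cost(gaps, mean_s, var_s, p_show,
                                   c_wait=1.0, c_idle=2.0, reps=20000):
            # Monte-Carlo estimate of a schedule's expected cost. gaps[k] is
            # the time between customer k's and k+1's appointments (length =
            # customers - 1). Service times are gamma-distributed with the
            # given mean/variance -- a distributional assumption; the paper's
            # method needs only the mean and variance. p_show[k] is the
            # (time-dependent) show probability of customer k.
            shape, scale = mean_s**2 / var_s, var_s / mean_s
            total = 0.0
            for _ in range(reps):
                wait = idle = carry = 0.0          # carry = current waiting time
                for k, p in enumerate(p_show):
                    s = random.gammavariate(shape, scale) if random.random() < p else 0.0
                    wait += carry
                    if k < len(gaps):              # Lindley-type recursion
                        nxt = carry + s - gaps[k]
                        idle += max(0.0, -nxt)
                        carry = max(0.0, nxt)
                total += c_wait * wait + c_idle * idle
            return total / reps

        # Four statistically identical customers, 20-minute slots, decaying show rate:
        print(expected_schedule_cost([20, 20, 20], 18.0, 36.0, [0.95, 0.9, 0.85, 0.8]))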

  4. 2 kWe Solar Dynamic Ground Test Demonstration Project. Volume 1; Executive Summary

    NASA Technical Reports Server (NTRS)

    Alexander, Dennis

    1997-01-01

    The Solar Dynamic Ground Test Demonstration (SDGTD) successfully demonstrated a solar-powered closed Brayton cycle system in a relevant space thermal environment. In addition to meeting technical requirements, the project was completed 4 months ahead of schedule and under budget. The following conclusions can be supported: 1. The component technology for solar dynamic closed Brayton cycle technology has clearly been demonstrated. 2. The thermal, optical, control, and electrical integration aspects of systems integration have also been successfully demonstrated. Physical integration aspects were not attempted, as these tend to be driven primarily by mission-specific requirements. 3. System efficiency of greater than 15 percent (all losses fully accounted for) was demonstrated using equipment and designs which were not optimized. Some preexisting hardware was used to minimize cost and schedule. 4. Power generation of 2 kWe was achieved. 5. A NASA/industry team was developed that successfully worked together to accomplish project goals. The material presented in this report will show that the technology necessary to design and fabricate solar dynamic electrical power systems for space has been successfully developed and demonstrated. The data will further show that achieved results compare well with pretest predictions. The next step in the development of solar dynamic space power will be a flight test.

  5. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    The Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes an objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. It enables rapid solution of problems up to 10 variables in size. Integer programming is required for accuracy in modeling systems containing a small number of components, the distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.

  6. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate together in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called Bees Life Algorithm (BLA) aimed at addressing the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between CPU execution time and allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms the traditional particle swarm optimization and genetic algorithm in terms of CPU execution time and allocated memory.
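
    The decision problem underneath is an assignment of tasks to fog nodes scored on CPU execution time and allocated memory. The sketch below frames it with a generic population-style mutate-and-select search; it is a simplified stand-in in the spirit of swarm methods, not the authors' Bees Life Algorithm, and the task/node tuples and weights are hypothetical.

        import random

        def fog_assign(tasks, nodes, w_cpu=0.5, w_mem=0.5, pop=30, iters=300, seed=7):
            # tasks: (cpu_demand, mem_demand) pairs; nodes: (cpu_speed,
            # mem_capacity) pairs; all values hypothetical. Lower fitness is
            # better: a weighted mix of worst-node execution time and memory.
            rng = random.Random(seed)

            def fitness(assign):
                cpu = [0.0] * len(nodes)
                mem = [0.0] * len(nodes)
                for t, n in enumerate(assign):
                    cpu[n] += tasks[t][0] / nodes[n][0]   # run time on node n
                    mem[n] += tasks[t][1]
                if any(mem[i] > nodes[i][1] for i in range(len(nodes))):
                    return float("inf")                   # capacity violated
                return w_cpu * max(cpu) + w_mem * max(mem)

            best = min((tuple(rng.randrange(len(nodes)) for _ in tasks)
                        for _ in range(pop)), key=fitness)
            for _ in range(iters):                        # mutate-and-select
                cand = list(best)
                cand[rng.randrange(len(tasks))] = rng.randrange(len(nodes))
                if fitness(tuple(cand)) <= fitness(best):
                    best = tuple(cand)
            return best, fitness(best)

        tasks = [(4, 2), (3, 1), (6, 3), (2, 2), (5, 1)]
        nodes = [(2.0, 5.0), (1.0, 6.0), (3.0, 4.0)]
        print(fog_assign(tasks, nodes))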

  7. Optimal non-linear health insurance.

    PubMed

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  8. Bandwidth reduction for video-on-demand broadcasting using secondary content insertion

    NASA Astrophysics Data System (ADS)

    Golynski, Alexander; Lopez-Ortiz, Alejandro; Poirier, Guillaume; Quimper, Claude-Guy

    2005-01-01

    An optimal broadcasting scheme under the presence of secondary content (i.e. advertisements) is proposed. The proposed scheme works both for movies encoded in a Constant Bit Rate (CBR) or a Variable Bit Rate (VBR) format. It is shown experimentally that secondary content in movies can make Video-on-Demand (VoD) broadcasting systems more efficient. An efficient algorithm is given to compute the optimal broadcasting schedule with secondary content, which in particular significantly improves over the best previously known algorithm for computing the optimal broadcasting schedule without secondary content.

  9. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    PubMed Central

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0.7-12.9) hours. Schedule A1 consistently performed the best, and schedule A4 the worst, both for the individual patient estimates and for the populations generated with the bootstrapping algorithm. In both cases, the differences between the reference and alternative schedules decreased as half-life increased. In the simulation study, 24-hourly sampling performed the worst, and six-hourly sampling the best. The simulation study confirmed that more dense parasite sampling schedules are required to accurately estimate half-life for profiles with short half-life (≤ three hours) and/or low initial parasite density (≤ 10,000 per μL). Among schedules in the simulation study with six or fewer measurements in the first 48 hours, a schedule with measurements at times (time windows) of 0 (0–2), 6 (4–8), 12 (10–14), 24 (22–26), 36 (34–36) and 48 (46–50) hours, or at times 6, 7 (two samples in time window 5–8), 24, 25 (two samples during time 23–26), and 48, 49 (two samples during time 47–50) hours, until negative most accurately estimated the “true” half-life. For a given schedule, continuing sampling after two days had little effect on the estimation of half-life, provided that adequate sampling was performed in the first two days and the half-life was less than three hours. If the measured parasitaemia at two days exceeded 1,000 per μL, continued sampling at least once a day was needed for accurate half-life estimates.
Conclusions This study has revealed important insights on sampling schedules for accurate and reliable estimation of Plasmodium falciparum half-life following treatment with an artemisinin derivative (alone or in combination with a partner drug). Accurate measurement of short half-lives (rapid clearance) requires more dense sampling schedules (with more than twice daily sampling). A more intensive sampling schedule is, therefore, recommended in locations where P. falciparum susceptibility to artemisinins is not known and the necessary resources are available. Counting parasite density at six hours is important, and less frequent sampling is satisfactory for estimating long parasite half-lives in areas where artemisinin resistance is present. PMID:24225303
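
    The estimand throughout is the slope of the log-linear decline in parasite density: half-life = ln(2)/|slope|. A compact version of that core fit is sketched below; the WWARN PCE additionally detects lag and tail phases before fitting, which this simplification omits.

        import numpy as np

        def clearance_half_life(times_h, parasites_per_ul):
            # Parasite clearance half-life from the log-linear decline:
            # fit ln(density) = a + b*t and return ln(2)/|b|. A simplified
            # approximation to the PCE estimate (no lag/tail detection).
            t = np.asarray(times_h, dtype=float)
            y = np.log(np.asarray(parasites_per_ul, dtype=float))
            slope, _ = np.polyfit(t, y, 1)
            return np.log(2) / abs(slope)

        # Six-hourly sampling of an illustrative profile (~3.1 h half-life):
        print(clearance_half_life([0, 6, 12, 18, 24], [80000, 21000, 5500, 1400, 380]))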

  10. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives.

    PubMed

    Flegg, Jennifer A; Guérin, Philippe J; Nosten, Francois; Ashley, Elizabeth A; Phyo, Aung Pyae; Dondorp, Arjen M; Fairhurst, Rick M; Socheat, Duong; Borrmann, Steffen; Björkman, Anders; Mårtensson, Andreas; Mayxay, Mayfong; Newton, Paul N; Bethell, Delia; Se, Youry; Noedl, Harald; Diakite, Mahamadou; Djimde, Abdoulaye A; Hien, Tran T; White, Nicholas J; Stepniewska, Kasia

    2013-11-13

    The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate "reference" half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and half-life estimates generated by each of the schedules were compared to the "true" half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. The median (range) parasite half-life for all clinical studies combined was 3.1 (0.7-12.9) hours. Schedule A1 consistently performed the best, and schedule A4 the worst, both for the individual patient estimates and for the populations generated with the bootstrapping algorithm. In both cases, the differences between the reference and alternative schedules decreased as half-life increased. In the simulation study, 24-hourly sampling performed the worst, and six-hourly sampling the best. The simulation study confirmed that more dense parasite sampling schedules are required to accurately estimate half-life for profiles with short half-life (≤ three hours) and/or low initial parasite density (≤ 10,000 per μL). Among schedules in the simulation study with six or fewer measurements in the first 48 hours, a schedule with measurements at times (time windows) of 0 (0-2), 6 (4-8), 12 (10-14), 24 (22-26), 36 (34-36) and 48 (46-50) hours, or at times 6, 7 (two samples in time window 5-8), 24, 25 (two samples during time 23-26), and 48, 49 (two samples during time 47-50) hours, until negative most accurately estimated the "true" half-life. For a given schedule, continuing sampling after two days had little effect on the estimation of half-life, provided that adequate sampling was performed in the first two days and the half-life was less than three hours. If the measured parasitaemia at two days exceeded 1,000 per μL, continued sampling at least once a day was needed for accurate half-life estimates.
This study has revealed important insights on sampling schedules for accurate and reliable estimation of Plasmodium falciparum half-life following treatment with an artemisinin derivative (alone or in combination with a partner drug). Accurate measurement of short half-lives (rapid clearance) requires more dense sampling schedules (with more than twice daily sampling). A more intensive sampling schedule is, therefore, recommended in locations where P. falciparum susceptibility to artemisinins is not known and the necessary resources are available. Counting parasite density at six hours is important, and less frequent sampling is satisfactory for estimating long parasite half-lives in areas where artemisinin resistance is present.

  11. Modelling Temporal Schedule of Urban Trains Using Agent-Based Simulation and NSGA2-BASED Multiobjective Optimization Approaches

    NASA Astrophysics Data System (ADS)

    Sahelgozin, M.; Alimohammadi, A.

    2015-12-01

    Increasing distances between locations of residence and services lead to a large number of daily commutes in urban areas. Developing subway systems has been taken up by transportation managers as a response to this huge travel demand. In the development of subway infrastructure, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost, and the energy consumption of trains. Since these variables are not positively correlated, subway scheduling is considered a multi-criteria optimization problem, and proposing a proper solution for subway scheduling has always been a controversial issue. On the other hand, research on a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal schedule of urban trains for use in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. At first, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform the sensitivity analysis (SA) used to extract the interrelations between the framework components. These interrelations are then taken into account in constructing the proposed model. To evaluate the performance of the model on MCSSO problems, Tehran subway line no. 1 is considered as the case study. Results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions, applicable in real situations where solving an MCSSO problem is the goal, and that its accuracy in representing the operation of subway systems was significant.

  12. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation, such as wind and solar power, bring difficulties to power system dispatch. In this paper, a multi-time scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction bias on the power grid by operating over multiple time scales (day-ahead, intraday, real time) and coordinating the dispatched output of various power supplies such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme: by costing wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  13. Dataflow Design Tool: User's Manual

    NASA Technical Reports Server (NTRS)

    Jones, Robert L., III

    1996-01-01

    The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.

  14. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
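
    The ordering rule at the heart of the WSPTB heuristic can be stated in a few lines: rank jobs by total processing time divided by weight. The sketch below shows just that ranking on hypothetical data; the block construction and the open-shop machine-sequencing details of WSPTB are not reproduced here.

        def wspt_order(jobs):
            # Weighted-shortest-processing-time ordering underlying WSPTB:
            # ascending total processing time across machines divided by
            # job weight. jobs: (job_id, per-machine times, weight) tuples.
            return sorted(jobs, key=lambda j: sum(j[1]) / j[2])

        # Hypothetical open-shop data with two machines:
        jobs = [("J1", [3, 2], 2.0), ("J2", [1, 1], 1.0), ("J3", [4, 4], 4.0)]
        print([j[0] for j in wspt_order(jobs)])   # J2 and J3 tie at 2.0, then J1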

  15. IceProd 2 Usage Experience

    NASA Astrophysics Data System (ADS)

    Delventhal, D.; Schultz, D.; Diaz Velez, J. C.

    2017-10-01

    IceProd is a data processing and management framework developed by the IceCube Neutrino Observatory for processing of Monte Carlo simulations, detector data, and data driven analysis. It runs as a separate layer on top of grid and batch systems. This is accomplished by a set of daemons which process job workflow, maintaining configuration and status information on the job before, during, and after processing. IceProd can also manage complex workflow DAGs across distributed computing grids in order to optimize usage of resources. IceProd has recently been rewritten to increase its scaling capabilities, handle user analysis workflows together with simulation production, and facilitate the integration with 3rd party scheduling tools. IceProd 2, the second generation of IceProd, has been running in production for several months now. We share our experience setting up the system and things we’ve learned along the way.

  16. Optimal route scheduling-driven study of a photovoltaic charging-station (parking lot) for electric mini-buses

    NASA Astrophysics Data System (ADS)

    Zacharaki, V.; Papanikolaou, S.; Voulgaraki, Ch; Karantinos, A.; Sioumpouras, D.; Tsiamitros, D.; Stimoniaris, D.; Maropoulos, S.; Stephanedes, Y.

    2016-11-01

    The objective of the present study is threefold: to highlight how electro-mobility can (a) contribute to the environmental conservation of rural areas (through an integrated solution for reducing the carbon footprint of road facilities and transport), (b) enhance tourism-based economic development, and (c) facilitate students in their daily transport and residents (elderly, disabled, distant residents) in their daily on-demand transport. The overall goal is to design an energy-efficient, regional intelligent transportation system with innovative solar-energy charging stations for e-vehicles in municipalities with many geographically scattered small villages and small cities. The innovative character of the study is that it tackles all three specific objectives simultaneously and with the same means, since it utilizes Intelligent Transportation Systems (ITS). The study is adapted and applied to an area with the above characteristics, in order to demonstrate the proof of concept.

  17. Optimal scheduling and its Lyapunov stability for advanced load-following energy plants with CO 2 capture

    DOE PAGES

    Bankole, Temitayo; Jones, Dustin; Bhattacharyya, Debangsu; ...

    2017-11-03

    In this study, a two-level control methodology consisting of an upper-level scheduler and a lower-level supervisory controller is proposed for an advanced load-following energy plant with CO 2 capture. With the use of an economic objective function that considers fluctuation in electricity demand and price at the upper level, optimal scheduling of energy plant electricity production and carbon capture with respect to several carbon tax scenarios is implemented. The optimal operational profiles are then passed down to corresponding lower-level supervisory controllers designed using a methodological approach that balances control complexity with performance. Finally, it is shown how optimal carbon capture and electricity production rate profiles for an energy plant such as the integrated gasification combined cycle (IGCC) plant are affected by electricity demand and price fluctuations under different carbon tax scenarios. As a result, the paper also presents a Lyapunov stability analysis of the proposed scheme.

  18. Optimal scheduling and its Lyapunov stability for advanced load-following energy plants with CO 2 capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bankole, Temitayo; Jones, Dustin; Bhattacharyya, Debangsu

    In this study, a two-level control methodology consisting of an upper-level scheduler and a lower-level supervisory controller is proposed for an advanced load-following energy plant with CO 2 capture. With the use of an economic objective function that considers fluctuation in electricity demand and price at the upper level, optimal scheduling of energy plant electricity production and carbon capture with respect to several carbon tax scenarios is implemented. The optimal operational profiles are then passed down to corresponding lower-level supervisory controllers designed using a methodological approach that balances control complexity with performance. Finally, it is shown how optimal carbon capture and electricity production rate profiles for an energy plant such as the integrated gasification combined cycle (IGCC) plant are affected by electricity demand and price fluctuations under different carbon tax scenarios. As a result, the paper also presents a Lyapunov stability analysis of the proposed scheme.

  19. A COTS-Based Attitude Dependent Contact Scheduling System

    NASA Technical Reports Server (NTRS)

    DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark

    2006-01-01

    The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the GLAST satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead-time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science-gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission-specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.

  20. Autism detection in early childhood (ADEC): reliability and validity data for a Level 2 screening tool for autistic disorder.

    PubMed

    Nah, Yong-Hwee; Young, Robyn L; Brewer, Neil; Berlingeri, Genna

    2014-03-01

    The Autism Detection in Early Childhood (ADEC; Young, 2007) was developed as a Level 2 clinician-administered autistic disorder (AD) screening tool that was time-efficient, suitable for children under 3 years, easy to administer, and suitable for persons with minimal training and experience with AD. A best estimate clinical Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) diagnosis of AD was made for 70 children using all available information and assessment results, except for the ADEC data. A screening study compared these children on the ADEC with 57 children with other developmental disorders and 64 typically developing children. Results indicated high internal consistency (α = .91). Interrater reliability and test-retest reliability of the ADEC were also adequate. ADEC scores reliably discriminated different diagnostic groups after controlling for nonverbal IQ and Vineland Adaptive Behavior Composite scores. Construct validity (using exploratory factor analysis) and concurrent validity using performance on the Autism Diagnostic Observation Schedule (Lord et al., 2000), the Autism Diagnostic Interview-Revised (Le Couteur, Lord, & Rutter, 2003), and DSM-IV-TR criteria were also demonstrated. Signal detection analysis identified the optimal ADEC cutoff score, with the ADEC identifying all children who had an AD (N = 70, sensitivity = 1.0) but overincluding children with other disabilities (N = 13, specificity ranging from .74 to .90). Together, the reliability and validity data indicate that the ADEC has potential to be established as a suitable and efficient screening tool for infants with AD. © 2014 APA.

  1. Sub-30 nm patterning of molecular resists based on crosslinking through tip based oxidation

    NASA Astrophysics Data System (ADS)

    Lorenzoni, Matteo; Wagner, Daniel; Neuber, Christian; Schmidt, Hans-Werner; Perez-Murano, Francesc

    2018-06-01

    Oxidation Scanning Probe Lithography (o-SPL) is an established method employed for device patterning at the nanometer scale. It represents a feasible and inexpensive alternative to standard lithographic techniques such as electron beam lithography (EBL) and nanoimprint lithography (NIL). In this work we applied non-contact o-SPL to an engineered class of molecular resists in order to obtain crosslinking by electrochemically driven oxidation. By patterning and developing various resist formulas we were able to obtain reliable negative-tone resist behavior based on local oxidation. Under optimal conditions, directly written patterns can routinely reach sub-30 nm lateral resolution, while the final developed features are wider, approaching 50 nm in width.

  2. Beam Design and User Scheduling for Nonorthogonal Multiple Access With Multiple Antennas Based on Pareto Optimality

    NASA Astrophysics Data System (ADS)

    Seo, Junyeong; Sung, Youngchul

    2018-06-01

    In this paper, an efficient transmit beam design and user scheduling method is proposed for multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto-optimality. The proposed method groups simultaneously served users into multiple clusters of a practical two users each, and then applies spatial zero-forcing (ZF) across clusters to control inter-cluster interference (ICI) and Pareto-optimal beam design with successive interference cancellation (SIC) to the two users in each cluster, removing interference to the strong users and improving the signal-to-interference-plus-noise ratios (SINRs) of the interference-experiencing weak users. The proposed method has the flexibility to control the rates of strong and weak users, and numerical results show that it yields good performance.
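
    The inter-cluster ZF step has a compact linear-algebra form: each beam is chosen in the null space of the other clusters' channels, which a pseudo-inverse provides directly. A sketch under that reading is below (numpy, random channels); the intra-cluster Pareto-optimal NOMA design with SIC is not reproduced.

        import numpy as np

        def zf_beams(H):
            # Zero-forcing beams for the ICI-control step: rows of H are
            # cluster-representative channels; the columns of the returned
            # matrix satisfy h_i . w_j = 0 for i != j, each normalized.
            W = np.linalg.pinv(H)                # H @ pinv(H) = I (full row rank)
            return W / np.linalg.norm(W, axis=0)

        rng = np.random.default_rng(1)
        H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
        W = zf_beams(H)
        print(np.round(np.abs(H @ W), 8))        # ~diagonal: ICI suppressed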

  3. Optimal scheduling of micro grids based on single objective programming

    NASA Astrophysics Data System (ADS)

    Chen, Yue

    2018-04-01

    Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule the micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. This paper considers the role of the battery under the precondition that the power exchanged between the micro-grid and the main grid may not exceed 150 kW. Taking economic operation against the load as the objective, namely minimizing electricity cost (including wind curtailment), an optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of battery participation in regulation are analyzed.

  4. Research on logistics scheduling based on PSO

    NASA Astrophysics Data System (ADS)

    Bao, Huifang; Zhou, Linli; Liu, Lei

    2017-08-01

    With the rapid development of network-based e-commerce, the importance of logistics distribution support for e-commerce is becoming more and more obvious. Optimizing vehicle distribution routing can improve economic benefit and make logistics scientific [1]. Therefore, the study of the logistics distribution vehicle routing optimization problem is not only of great theoretical significance, but also of considerable practical value. Particle swarm optimization is an evolutionary algorithm that starts from random solutions and approaches the optimal solution by iteration, evaluating the quality of solutions through fitness. In order to obtain a more suitable logistics scheduling scheme, this paper proposes a logistics model based on the particle swarm optimization algorithm.

  5. A Survey of Electronics Obsolescence and Reliability

    DTIC Science & Technology

    2010-07-01

    properties but there are many minor and major variations (e.g. curing schedule) affecting their usage in packaging processes and in reworking. Curing...within them. Electronic obsolescence is increasingly associated with physical characteristics that reduce component and system reliability, both in usage ...semiconductor technologies and of electronic systems, both in usage and in storage. By design, electronics technologies include few reliability margins

  6. Feasibility, reliability, and validity of the Japanese version of the 12-item World Health Organization Disability Assessment Schedule-2 in preoperative patients.

    PubMed

    Ida, Mitsuru; Naito, Yusuke; Tanaka, Yuu; Matsunari, Yasunori; Inoue, Satoki; Kawaguchi, Masahiko

    2017-08-01

    The avoidance of postoperative functional disability is one of the most important concerns of patients facing surgery, but methods to evaluate disability have not been definitively established. The aim of our study was to evaluate the feasibility, reliability, and validity of the Japanese version of the 12-item World Health Organization Disability Assessment Schedule-2 (WHODAS 2.0-J) in preoperative patients. Individuals aged ≥55 years who were scheduled to undergo surgery in a tertiary-care hospital in Japan between April 2016 and September 2016 were eligible for enrolment in the study. All patients were assessed preoperatively using the WHODAS 2.0-J, the 8-Item Short Form (SF-8) questionnaire, and the Tokyo Metropolitan Institute of Gerontology Index (TMIG Index). The feasibility, reliability, and validity of the WHODAS 2.0-J were evaluated using the response rate, Cronbach's alpha (a measure of reliability), and the correlation between the WHODAS 2.0-J and the SF-8 questionnaire and TMIG Index, respectively. A total of 934 patients were enrolled in the study during the study period, of whom 930 completed the WHODAS 2.0-J (response rate 99.5%) preoperatively. Reliability and validity were assessed in the 898 patients who completed all three assessment tools (WHODAS 2.0-J, SF-8 questionnaire, and TMIG Index) and for whom all demographic data were available. Cronbach's alpha was 0.92. The total score of the WHODAS 2.0-J showed a mild or moderate correlation with the SF-8 questionnaire and TMIG Index (r = -0.63 to -0.34). The WHODAS 2.0-J is a feasible, reliable, and valid instrument for evaluating preoperative functional disability in surgical patients.

  7. The nurse scheduling problem: a goal programming and nonlinear optimization approaches

    NASA Astrophysics Data System (ADS)

    Hakim, L.; Bakhtiar, T.; Jaharuddin

    2017-01-01

    Nurse scheduling is the activity of allocating nurses to conduct a set of tasks in certain rooms at a hospital or health centre within a certain period. One of the obstacles in nurse scheduling is the lack of resources to fulfil the needs of the hospital. Nurse scheduling undertaken manually risks not fulfilling some of the nursing rules set by the hospital. Therefore, this study aimed to develop scheduling models that satisfy all the specific rules set by the management of Bogor State Hospital. We have developed three models to meet the scheduling needs. Model 1 is designed to schedule nurses who are solely assigned to a certain inpatient unit, and Model 2 is constructed to manage nurses who are assigned to an inpatient room as well as to the polyclinic room as conjunct nurses. As the assignment of nurses on each shift is uneven, we propose Model 3 to minimize the variance of the workload in order to achieve an equitable assignment on every shift. The first two models are formulated in a goal programming framework, while the last model is in nonlinear optimization form.
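
    The goal-programming structure of Models 1 and 2 can be illustrated with a toy instance: binary assignment variables, hard rules as constraints, and soft coverage goals whose deviation variables are minimized. The sketch below (PuLP; hypothetical shifts, demands, and rules, far smaller than the hospital's models) shows the pattern only.

        # Toy goal-programming model; not the Bogor State Hospital formulation.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum

        nurses, days, shifts = range(4), range(7), ("morning", "evening", "night")
        demand = {"morning": 2, "evening": 1, "night": 1}      # hypothetical

        prob = LpProblem("nurse_goal_program", LpMinimize)
        x = {(n, d, s): LpVariable(f"x_{n}_{d}_{s}", cat="Binary")
             for n in nurses for d in days for s in shifts}
        short = {(d, s): LpVariable(f"short_{d}_{s}", lowBound=0)   # deviations
                 for d in days for s in shifts}

        prob += lpSum(short.values())                  # minimize unmet coverage
        for d in days:
            for s in shifts:                           # soft coverage goal
                prob += lpSum(x[n, d, s] for n in nurses) + short[d, s] >= demand[s]
            for n in nurses:                           # hard rule: one shift/day
                prob += lpSum(x[n, d, s] for s in shifts) <= 1

        prob.solve()
        print(sum(v.value() for v in short.values()))  # 0.0 here: goals met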

  8. The Modified-Classroom Observation Schedule to Measure Intentional Communication (M-COSMIC): Evaluation of Reliability and Validity

    ERIC Educational Resources Information Center

    Clifford, Sally; Hudry, Kristelle; Brown, Laura; Pasco, Greg; Charman, Tony

    2010-01-01

    The Modified-Classroom Observation Schedule to Measure Intentional Communication (M-COSMIC) was developed as an ecologically valid measure of social-communication behaviour, delineating forms, functions, and intended partners of children's spontaneous communication acts. Forty-one children with autism spectrum disorder (ASD) aged 48-73 months were…

  9. Uncertainty analysis of an irrigation scheduling model for water management in crop production

    USDA-ARS?s Scientific Manuscript database

    Irrigation scheduling tools are critical to allow producers to manage water resources for crop production in an accurate and timely manner. To be useful, these tools need to be accurate, complete, and relatively reliable. The current work presents the uncertainty analysis and its results for the Mis...

  10. Operational ET remote sensing (RS) program for irrigation scheduling and management: challenges and opportunities

    Treesearch

    Prasanna Gowda

    2016-01-01

    Evapotranspiration (ET) is an essential component of the water balance and a major consumptive use of irrigation water and precipitation on cropland. Any attempt to improve water use efficiency must be based on reliable estimates of ET for irrigation scheduling purposes.

  11. Environmental damage schedules: community judgments of importance and assessments of losses

    Treesearch

    Ratana Chuenpagdee; Jack L. Knetsch; Thomas C. Brown

    2001-01-01

    Available methods of valuing environmental changes are often limited in their applicability to current issues such as damage assessment and implementing regulatory controls, or may otherwise not provide reliable readings of community preferences. An alternative is to base decisions on predetermined fixed schedules of sanctions, restrictions, damage awards, and other...

  12. Calibration of resistance factors needed in the LRFD design of driven piles.

    DOT National Transportation Integrated Search

    2009-05-01

    This research project presents the calibration of resistance factors for the Load and Resistance Factor Design (LRFD) method of driven : piles driven into Louisiana soils based on reliability theory. Fifty-three square Precast-Prestressed-Concrete (P...

  13. Calibration of Resistance Factors Needed in the LRFD Design of Driven Piles

    DOT National Transportation Integrated Search

    2009-05-01

    This research project presents the calibration of resistance factors for the Load and Resistance Factor Design (LRFD) method of driven : piles driven into Louisiana soils based on reliability theory. Fifty-three square Precast-Prestressed-Concrete (P...

  14. Optimizing an F-16 Squadron Weekly Pilot Schedule for the Turkish Air Force

    DTIC Science & Technology

    2010-03-01

    disrupted schedules are rescheduled, minimizing the total number of changes with respect to the previous schedule’s objective function. Output...producing rosters for a nursing staff in a large general hospital (Dowsland, 1998) and afterwards Aickelin and Dowsland use an Indirect Genetic...algorithm to improve the solutions of the nurse scheduling problem which is similar to the fighter squadron pilot scheduling problem (Aickelin and

  15. Optimization of nas lemoore scheduling to support a growing aircraft population

    DTIC Science & Technology

    2017-03-01

    requirements, and, without knowing the other squadrons’ flight plans, creates his or her squadron’s flight schedule. Figure 2 illustrates the process each...Lemoore, they do not communicate their flight schedules among themselves; hence, the daily flight plan generated by each squadron is independently...manual process for aircraft flight scheduling at Naval Air Station (NAS) Lemoore accommodates the independent needs of 16 fighter resident squadrons as

  16. Stochastic flow shop scheduling of overlapping jobs on tandem machines in application to optimizing the US Army's deliberate nuclear, biological, and chemical decontamination process, (final report). Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novikov, V.

    1991-05-01

    The U.S. Army's detailed equipment decontamination process is a stochastic flow shop which has N independent non-identical jobs (vehicles) which have overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Keywords: decontamination, stochastic flow shop, scheduling, stochastic scheduling, minimization of the makespan, SEPT-LEPT sequences, flow shop line balancing, ANBACIS.

  17. A hybrid online scheduling mechanism with revision and progressive techniques for autonomous Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Li, Guoliang; Xing, Lining; Chen, Yingwu

    2017-11-01

The autonomy of self-scheduling Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers in recent decades. In practice, the limited onboard computational resource presents a challenge for online scheduling algorithms. This study considers the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment, specifically addressing urgent tasks that arrive stochastically during the scheduling horizon. We describe the problem and propose a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling triggered by a critical cumulative number of urgent tasks, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task, normal and urgent. On this basis, we developed two heuristic (re)scheduling algorithms and compared them with other commonly used techniques. Computational experiments indicated that the percentage of urgent tasks admitted into the schedule is much higher under the proposed mechanism than under a purely periodic one, and that the specific performance depends strongly on several mechanism-relevant and task-relevant factors. For the online scheduling itself, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.
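
    The when-to-schedule policy described above reduces, in essence, to a single predicate. The sketch below is an assumed paraphrase of that trigger (a periodic tick OR a critical cumulative count of urgent arrivals); the parameter names are hypothetical, not the paper's notation.

```python
def should_reschedule(elapsed, period, urgent_accumulated, critical_count):
    """Hybrid trigger sketch: fire on the periodic tick, or early
    (event-driven) once the cumulative number of urgent tasks since the
    last run crosses a critical threshold."""
    return elapsed >= period or urgent_accumulated >= critical_count
```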

  18. User’s guide to SNAP for ArcGIS®: ArcGIS interface for the scheduling and network analysis program

    Treesearch

Woodam Chung; Dennis Dykstra; Fred Bower; Stephen O’Brien; Richard Abt; John Sessions

    2012-01-01

This document introduces SNAP for ArcGIS®, a software package developed to streamline scheduling and transportation planning for timber harvest areas. Using modern optimization techniques, it can be used to spatially schedule timber harvests with consideration of harvesting costs, multiple products, alternative...

  19. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
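
    As background, set covering asks for a minimum-cost family of subsets whose union equals a given universe. The Lemke, Salkin and Spielberg SCA is an exact implicit-enumeration method; the Python sketch below shows only the textbook greedy approximation, for intuition about the problem SCA solves, and is not the SAMPLE submodule itself.

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy approximation to set covering: repeatedly pick the subset
    with the lowest cost per newly covered element (ln(n)-approximate)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        i = min(
            (k for k in range(len(subsets)) if subsets[k] & uncovered),
            key=lambda k: costs[k] / len(subsets[k] & uncovered),
        )
        chosen.append(i)
        uncovered -= subsets[i]
    return chosen

# Example: cover {1..5} from three candidate subsets with given costs.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {3, 4}, {4, 5}],
                       [3.0, 1.0, 2.0]))
```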

  20. Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment

    PubMed Central

    Wan, Long

    2014-01-01

    We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties on an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm followed by a numerical example. PMID:25147861

  1. Model-based optimization of G-CSF treatment during cytotoxic chemotherapy.

    PubMed

    Schirm, Sibylle; Engel, Christoph; Loibl, Sibylle; Loeffler, Markus; Scholz, Markus

    2018-02-01

Although G-CSF is widely used to prevent or ameliorate leukopenia during cytotoxic chemotherapies, its optimal use is still under debate and depends on many therapy parameters such as dosing and timing of cytotoxic drugs and G-CSF, the G-CSF pharmaceuticals used, and individual risk factors of patients. We integrate available biological knowledge and clinical data regarding cell kinetics of bone marrow granulopoiesis, the cytotoxic effects of chemotherapy, and the pharmacokinetics and pharmacodynamics of G-CSF applications (filgrastim or pegfilgrastim) into a comprehensive model. The model explains leukocyte time courses of more than 70 therapy scenarios comprising 10 different cytotoxic drugs. It is applied to develop optimized G-CSF schedules for a variety of clinical scenarios. Clinical trial results confirmed the validity of model predictions regarding alternative G-CSF schedules. We propose modifications of G-CSF treatment for the chemotherapies 'BEACOPP escalated' (Hodgkin's disease) and 'ETC' (breast cancer), and risk-adapted schedules for 'CHOP-14' (aggressive non-Hodgkin's lymphoma in elderly patients). We conclude that we have established a model of human granulopoiesis under chemotherapy that allows prediction of yet-untested G-CSF schedules, comparison between them, and optimization of filgrastim and pegfilgrastim treatment. As a general rule of thumb, G-CSF treatment should not be started too early, and patients could profit from filgrastim treatment continued until the end of the chemotherapy cycle.

  2. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

Scheduling problems on m unrelated parallel machines with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial time algorithm to solve the problem. PMID:24982933

  3. Hubble Systems Optimize Hospital Schedules

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Don Rosenthal, a former Ames Research Center computer scientist who helped design the Hubble Space Telescope's scheduling software, co-founded Allocade Inc. of Menlo Park, California, in 2004. Allocade's OnCue software helps hospitals reclaim unused capacity and optimize constantly changing schedules for imaging procedures. After starting to use the software, one medical center soon reported noticeable improvements in efficiency, including a 12 percent increase in procedure volume, 35 percent reduction in staff overtime, and significant reductions in backlog and technician phone time. Allocade now offers versions for outpatient and inpatient magnetic resonance imaging (MRI), ultrasound, interventional radiology, nuclear medicine, Positron Emission Tomography (PET), radiography, radiography-fluoroscopy, and mammography.

  4. Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft

    NASA Astrophysics Data System (ADS)

    Carfang, Anthony

This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on the candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios and assessed analytically and through Monte Carlo simulation, showing better throughput at computation times of the same order of magnitude as other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of the results to trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving its RF models, the cascaded planner can continually improve the ferrying system's overall performance.

  5. Genetic algorithm to solve the problems of lectures and practicums scheduling

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Apriani, R.; Sawaluddin; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.

    2018-02-01

Generally, the scheduling process is done manually. However, this approach is error-prone, with the possibility that one scheduled session collides with another. When constructing theory class and practicum timetables, numerous problems arise, such as collisions in lecturer teaching schedules, collisions between schedules, practicum sessions that collide with theory classes, and the limited number of classrooms available. In this research, a genetic algorithm is implemented to perform the theory class and practicum timetable scheduling process. The algorithm processes data containing lists of lecturers, courses, and classrooms, obtained from the information technology department at the University of Sumatera Utara. The result of the scheduling process using the genetic algorithm is the most optimal timetable that conforms to the available time slots, classrooms, courses, and lecturer schedules.
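
    A minimal sketch of two GA ingredients implied above, a hard-constraint fitness term and a mutation operator, is given below in Python. The (course, lecturer, room, slot) gene encoding is an assumption for illustration, not the paper's actual representation.

```python
import random

def hard_conflicts(timetable):
    """Count hard-constraint violations in a candidate timetable, where
    each gene is a (course, lecturer, room, slot) tuple: a lecturer or a
    room may not be booked twice in the same slot."""
    seen, clashes = set(), 0
    for _course, lecturer, room, slot in timetable:
        for key in (("L", lecturer, slot), ("R", room, slot)):
            if key in seen:
                clashes += 1
            seen.add(key)
    return clashes

def mutate(timetable, rooms, slots):
    """Mutation operator: move one random course to a new room and slot."""
    i = random.randrange(len(timetable))
    course, lecturer, _, _ = timetable[i]
    child = list(timetable)
    child[i] = (course, lecturer, random.choice(rooms), random.choice(slots))
    return child
```

    A GA built on these pieces would evolve a population toward zero conflicts, with selection pressure supplied by `hard_conflicts` (plus soft-preference terms in a fuller fitness function).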

  6. Minimizing metastatic risk in radiotherapy fractionation schedules

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Ramakrishnan, Jagdish; Leder, Kevin

    2015-11-01

Metastasis is the process by which cells from a primary tumor disperse and form new tumors at distant anatomical locations. The treatment and prevention of metastatic cancer remains an extremely challenging problem. This work introduces a novel biologically motivated objective function to the radiation optimization community that takes into account metastatic risk instead of the status of the primary tumor. In this work, we consider the problem of developing fractionated irradiation schedules that minimize production of metastatic cancer cells while keeping normal tissue damage below an acceptable level. A dynamic programming framework is utilized to determine the optimal fractionation scheme. We evaluated our approach on a breast cancer case using the heart and the lung as organs-at-risk (OAR). For small tumor α/β values, hypo-fractionated schedules were optimal, which is consistent with standard models. However, for relatively larger α/β values, we found the type of schedule depended on various parameters such as the time when metastatic risk was evaluated, the α/β values of the OARs, and the normal tissue sparing factors. Interestingly, in contrast to standard models, hypo-fractionated and semi-hypo-fractionated schedules (large initial doses with doses tapering off with time) were suggested even with large tumor α/β values. Numerical results indicate the potential for significant reduction in metastatic risk.
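
    For background, the α/β values mentioned above come from the standard linear-quadratic (LQ) model; the usual dose metrics built from it are sketched below. This is textbook material included for context, not the paper's metastasis-based objective.

```latex
% LQ surviving fraction after n fractions of dose d, and the
% biologically effective dose (BED) commonly used for OAR constraints:
S = \exp\!\left[-n\,(\alpha d + \beta d^{2})\right],
\qquad
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right).
```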

  7. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
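
    As background, the MPP search and the probability approximation referenced above take the following standard first-order form (u: the vector of standard normal variables after transformation, g: the limit-state function). The paper's sensitivity derivations build on quantities of this kind; the formulas below are the generic FORM relations, not the paper's own equations.

```latex
% Reliability index from the MPP search, and the FORM estimate:
\beta = \min_{\mathbf{u}} \lVert \mathbf{u} \rVert
\quad \text{subject to} \quad g(\mathbf{u}) = 0,
\qquad
P_f \approx \Phi(-\beta).
```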

  8. 78 FR 15722 - Federal Advisory Committee Act; Communications Security, Reliability, and Interoperability Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... Alerting, E9-1-1 Location Accuracy, Network Security Best Practices, DNSSEC Implementation Practices for... FEDERAL COMMUNICATIONS COMMISSION Federal Advisory Committee Act; Communications Security... Security, Reliability, and Interoperability Council (CSRIC) meeting that was scheduled for March 6, 2013 is...

  9. Application of Hybrid Optimization-Expert System for Optimal Power Management on Board Space Power Station

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL

    1996-01-01

The space power system has two sources of energy: photovoltaic blankets and batteries. The optimal power management problem on board comprises two broad operations. The first is off-line power scheduling, which determines the load allocation schedule for the next several hours based on forecasts of load and solar power availability; this stage puts less emphasis on computational speed and more importance on the optimality of the solution. The second, on-line power rescheduling, is needed in the event of a contingency to optimally reschedule the loads so as to minimize the 'unused' or 'wasted' energy while keeping the priority of certain types of load and causing minimum disturbance to the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization/expert-system-based power management program. The pre-scheduler has been developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented using the General Algebraic Modeling System (GAMS). The optimization model can handle multiple conflicting objectives, e.g., maximizing energy utilization and minimizing the variation of load over a day, and incorporates several complex interactions between the loads in a space system. The rescheduling is performed using an expert system developed in PROLOG, which utilizes a rule base for reallocation of the loads under emergency conditions, e.g., shortage of power due to solar array failure, increase of base load, addition of a new activity, or repetition of an old activity. Both modules handle decision making on battery charging and discharging and allocation of loads over a time horizon of a day divided into intervals of 10 minutes. The models have been extensively tested using a case study for the Space Station Freedom, and the results for the case study will be presented. Several future enhancements of the pre-scheduler and the 'rescheduler' have been outlined, including a graphic analyzer for the on-line module, incorporation of probabilistic considerations, and inclusion of the spatial location of the loads and their connectivity using a direct current (DC) load flow model.

  10. Multiresource allocation and scheduling for periodic soft real-time applications

    NASA Astrophysics Data System (ADS)

    Gopalan, Kartik; Chiueh, Tzi-cker

    2001-12-01

Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run-time, a global scheduler dispatches the tasks of each soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of the overall timing guarantees is ultimately determined by the properties of the individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across multiple tasks in a soft real-time application.

  11. Do we perform surgical programming well? How can we improve it?

    PubMed

    Albareda, J; Clavel, D; Mahulea, C; Blanco, N; Ezquerra, L; Gómez, J; Silva, J M

The objective is to establish the duration of our interventions, the intermediate times, and the surgical performance, and to create a virtual waiting list to which a mathematical program is applied to produce schedules with maximum performance. Retrospective review of 49 surgical sessions, obtaining the delay in start time, the intermediate time, and the surgical performance. Retrospective review of 4,045 interventions performed in the last 3 years to obtain the average duration of each type of surgery. Creation of a virtual waiting list of 700 patients in order to perform virtual programming through the MIQCP-P until achieving optimal performance. Our surgical performance with manual programming was 75.9%, with 22.4% of sessions ending later than 3pm. The performance on days without suspensions was 78.4%. The delay at start time was 9.7 min. The optimum performance was 77.5%, with an 80.6% confidence of finishing before 3pm. The waiting list was scheduled in 254 sessions. Our manual surgical performance without suspensions (78.4%) was superior to the optimum (77.5%), at the cost of sessions finishing later than 3pm and of suspensions. The possibilities for improvement are to achieve punctuality at the start time and to adjust the schedule to the ideal performance. The virtual programming has allowed us to obtain our ideal performance and to establish the number of operating rooms necessary to clear the waiting list created. The data obtained in virtual mathematical programming are reliable enough to implement this model with guarantees. Copyright © 2017 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  12. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability of VMs to take arbitrary leaps in virtual time, maximizing the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% under a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as that of highly efficient non-simulation VM schedulers.
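
    The core dispatch loop of a simulation time-ordered VM scheduler can be sketched as a priority queue keyed by virtual clocks. The Python below is a simplified illustration of that idea only (fixed quantum, no mapping of virtual to real cores); all names and parameters are hypothetical.

```python
import heapq

def time_ordered_dispatch(n_vcores, quantum, horizon):
    """Minimal sketch of simulation time-ordered scheduling: always run
    the virtual core with the smallest virtual clock, advancing it by a
    configurable scheduling quantum.  An idle core could instead leap
    straight to its next event time, freeing the real core meanwhile."""
    clocks = [(0.0, vid) for vid in range(n_vcores)]
    heapq.heapify(clocks)
    trace = []
    while clocks[0][0] < horizon:
        vclock, vid = heapq.heappop(clocks)
        trace.append((vclock, vid))          # dispatch this vcore next
        heapq.heappush(clocks, (vclock + quantum, vid))
    return trace

# Example: three virtual cores, 10 ms quantum, 50 ms of virtual time.
for t, v in time_ordered_dispatch(3, 0.010, 0.050):
    print(f"t={t:.3f}s run vcore {v}")
```

    Shrinking the quantum tightens time-ordering accuracy at the cost of more scheduling overhead, which mirrors the tradeoff the paper characterizes empirically.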

  13. Re-scheduling as a tool for the power management on board a spacecraft

    NASA Technical Reports Server (NTRS)

    Albasheer, Omar; Momoh, James A.

    1995-01-01

The scheduling of events on board a spacecraft is based on forecast energy levels. The real-time values of energy may not coincide with the forecast values; consequently, dynamic revision of the power allocation is needed. Rescheduling is also needed for other reasons on board a spacecraft, such as the addition of a new event that must be scheduled, or the failure of an event due to any of many different contingencies. This need for rescheduling is very important to the survivability of the spacecraft. In this presentation, a rescheduling tool will be presented as part of an overall scheme for power management on board a spacecraft from the energy allocation point of view. The overall scheme is based on the optimal use of the energy available on board, using expert systems combined with linear optimization techniques. The system will be able to schedule the maximum number of events while utilizing most of the available energy; the outcome is more events scheduled to share the operating cost of the spacecraft. The system will also be able to reschedule in case of a contingency, with minimal time and minimal disturbance of the original schedule. The end product is a fully integrated planning system capable of producing the right decisions in a short time with less human error. The overall system will be presented with the rescheduling algorithm discussed in detail; the tests and results will then be presented for validation.

  14. ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.

  15. Man as the main component of the closed ecological system of the spacecraft or planetary station.

    PubMed

    Parin, V V; Adamovich, B A

    1968-01-01

Current life-support systems of spacecraft provide only the human requirements for food, water, and oxygen. Advanced life-support systems will involve man as their main component and will completely ensure his material and energy requirements. The design of the individual components of such systems will assure their suitability and mutual control effects. Optimization of the performance of the crew and the ecological system, on the basis of information characterizing their function, demands efficient methods of collecting and processing the information obtained through wireless recording of physiological parameters and their automatic treatment. Peculiarities of interplanetary missions and planetary stations make it necessary to conform the schedule of physiological recordings to the work-and-rest cycle of the space crew and to the inertia of components of the ecological system, especially those responsible for oxygen regeneration. It is rational to model ecological systems and their components, taking into consideration the corrective effect of information on the health conditions and performance of the crewmen. Wide application of physiological data will allow the selection of optimal designs and sharply increase the reliability of ecological systems.

  16. Optimizing the Shunting Schedule of Electric Multiple Units Depot Using an Enhanced Particle Swarm Optimization Algorithm

    PubMed Central

    Jin, Junchen

    2016-01-01

The shunting schedule of an electric multiple units depot (SSED) is one of the essential plans for high-speed train maintenance activities. This paper presents a 0-1 programming model to address the problem of determining an optimal SSED through automatic computing. The objective of the model is to minimize the number of shunting movements, and the constraints include track occupation conflicts, shunting route conflicts, time durations of maintenance processes, and shunting running time. An enhanced particle swarm optimization (EPSO) algorithm is proposed to solve the optimization problem. Finally, an empirical study from Shanghai South EMU Depot is carried out to illustrate the model and the EPSO algorithm. The optimization results indicate that the proposed method is valid for the SSED problem and that the EPSO algorithm outperforms the traditional PSO algorithm in terms of optimality. PMID:27436998
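
    The canonical particle swarm update that EPSO builds on is compact enough to show directly. The sketch below gives the textbook step for one particle and deliberately omits the paper's enhancements; parameter values are conventional defaults, not the paper's settings.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO update for one particle: inertia plus attraction
    toward the particle's personal best and the swarm's global best."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = (w * vi
              + c1 * random.random() * (pi - xi)
              + c2 * random.random() * (gi - xi))
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v
```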

  17. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

In this dissertation, several studies of electric power system reliability and economy assessment methods are presented; more precisely, several algorithms for evaluating power system reliability and economy are studied, and two novel algorithms are applied to this field with their simulation results compared against conventional results. As electric power systems develop toward extra-high voltage, long transmission distances, large capacity, and regional interconnection, new types of equipment and electricity-market structures have gradually been established, and the consequences of power cuts have become more and more serious. Because of its complexity and security requirements, the electric power system needs the highest possible reliability. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined while remaining as readable as conventional methods. The method has been applied to evaluate the IEEE reliability test system, and the simulation results obtained are close to the IEEE experimental data, which suggests that it can be used for future studies of system reliability. Besides reliability, a modern power system is expected to be more economic. This dissertation presents a novel evolutionary algorithm, the quantum evolutionary membrane algorithm (QEPS), which combines the concepts of quantum-inspired evolutionary algorithms and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests, and a conventional evolutionary algorithm is used to solve the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, i.e., the minimum cost of electricity supplied by the wind farm system.

  18. Autonomous Hybrid Priority Queueing for Scheduling Residential Energy Demands

    NASA Astrophysics Data System (ADS)

    Kalimullah, I. Q.; Shamroukh, M.; Sahar, N.; Shetty, S.

    2017-05-01

The advent of smart grid technologies has opened up opportunities to manage the energy consumption of users within a residential smart grid system. Demand response management in particular is employed to reduce the overall load on an electricity network, which can in turn reduce outages and electricity costs. The objective of this paper is to develop an intelligent scheduler that optimizes the energy available to a microgrid through a hybrid queueing algorithm centered on the consumers' energy demands. This is achieved by shifting certain schedulable appliance loads to light-load hours. Various factors such as the type of demand, grid load, and consumers' energy usage patterns and preferences are considered when formulating the logical constraints required by the algorithm. The resulting algorithm is implemented in the MATLAB workspace to simulate its execution by an Energy Consumption Scheduler (ECS) found within smart meters, which automatically finds the optimal energy consumption schedule tailored to each consumer within the microgrid network.
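
    A drastically simplified version of the load-shifting idea can be written as a greedy placement over a day-ahead tariff. The Python below is an assumption-laden sketch for intuition, not the paper's hybrid queueing algorithm; the tuple layout and tariff values are invented for illustration.

```python
def schedule_loads(appliances, prices):
    """Greedy ECS sketch: place each shiftable appliance in its cheapest
    feasible contiguous window.  Each appliance is (name, duration,
    earliest_start, latest_finish) in hour units; prices is a 24-entry
    hourly tariff."""
    plan = {}
    for name, duration, earliest, latest in appliances:
        starts = range(earliest, latest - duration + 1)
        best = min(starts, key=lambda s: sum(prices[s:s + duration]))
        plan[name] = best
    return plan

# Example: an off-peak tariff pushes both loads into the light-load hours.
tariff = [1.0] * 7 + [3.0] * 16 + [1.0]
print(schedule_loads([("washer", 2, 0, 23), ("ev_charger", 4, 0, 23)], tariff))
```

    A real scheduler must also respect grid-level coupling (the aggregate load of all consumers), which is what the priority-queueing formulation in the paper addresses.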

  19. Using the principles of circadian physiology enhances shift schedule design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connolly, J.J.; Moore-Ede, M.C.

    1987-01-01

Nuclear power plants must operate 24 h a day, 7 days a week. For the most part, shift schedules currently in use at nuclear power plants have been designed to meet operational needs without considering the biological clocks of the human operators. The development of schedules that also take circadian principles into account is a positive step that can be taken to improve plant safety by optimizing operator alertness. These schedules reduce the probability of human errors, especially during backshifts. In addition, training programs that teach round-the-clock workers how to deal with the problems of shiftwork can help to optimize performance and alertness. These programs teach shiftworkers the underlying causes of the sleep problems associated with shiftwork and also provide coping strategies for improving sleep and dealing with the transition between shifts. When these training programs are coupled with an improved schedule, the problems associated with working round-the-clock can be significantly reduced.

  20. Multi-time scale energy management of wind farms based on comprehensive evaluation technology

    NASA Astrophysics Data System (ADS)

    Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.

    2017-11-01

A novel energy-management scheme for wind farms is proposed in this paper. First, a novel comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a multi-time-scale scheduling method is combined to form the energy-management scheme. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal power generating units, hydroelectric generating sets, and wind power plants. Finally, the power generation plan can be revised in a timely manner during on-line scheduling. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China; the simulation results validate the proposed model and the corresponding solution algorithms.

  1. Optimal Shift Duration and Sequence: Recommended Approach for Short-Term Emergency Response Activations for Public Health and Emergency Management

    PubMed Central

    Burgess, Paula A.

    2007-01-01

Since September 11, 2001, and the consequent restructuring of US preparedness and response activities, public health workers are increasingly called on to activate a temporary round-the-clock staffing schedule. These workers may have to make key decisions that could significantly impact the health and safety of the public. The unique physiological demands of rotational shift work and night shift work have the potential to negatively impact decision-making ability. A responsible, evidence-based approach to scheduling applies the principles of circadian physiology, as well as unique individual physiologies and preferences. Optimal scheduling would use a clockwise (morning-afternoon-night) rotational schedule: limiting night shifts to blocks of 3, limiting shift duration to 8 hours, and allowing 3 days of recuperation after night shifts. PMID:17413074

  2. Optimization Models for Scheduling of Jobs

    PubMed Central

    Indika, S. H. Sathish; Shier, Douglas R.

    2006-01-01

    This work is motivated by a particular scheduling problem that is faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) which is equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed to be known. We are interested in how best to schedule a number of new jobs that the facility will be processing in the near future. We first develop a mixed integer quadratic programming model (MIQP) for this problem. Since the exact solution of this MIQP formulation is time consuming, we develop a heuristic procedure, based on existing bin packing techniques. This heuristic is further enhanced by application of certain local optimality conditions. PMID:27274921

  3. Artificial Bee Colony Optimization for Short-Term Hydrothermal Scheduling

    NASA Astrophysics Data System (ADS)

    Basu, M.

    2014-12-01

Artificial bee colony optimization is applied to determine the optimal hourly schedule of power generation in a hydrothermal system. Artificial bee colony optimization is a swarm-based algorithm inspired by the food-foraging behavior of honey bees. The algorithm is tested on a multi-reservoir cascaded hydroelectric system having prohibited operating zones and thermal units with valve-point loading. The ramp-rate limits of thermal generators are taken into consideration, and the transmission losses are accounted for through the use of loss coefficients. The algorithm is tested on two hydrothermal multi-reservoir cascaded hydroelectric test systems, and the results of the proposed approach are compared with those of differential evolution, evolutionary programming, and particle swarm optimization. The numerical results show that the proposed artificial bee colony optimization-based approach provides better solutions.
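
    The basic ABC loop referenced above can be sketched as follows. This is the generic algorithm on an unconstrained objective, with the hydrothermal constraints (reservoir coupling, ramp limits, prohibited zones, losses) deliberately omitted; all names are hypothetical.

```python
import random

def abc_minimize(f, dim, lo, hi, n_food=20, limit=30, iters=200):
    """Minimal artificial-bee-colony sketch: employed bees perturb food
    sources, onlookers re-sample them fitness-proportionally, and scouts
    reset sources that have gone stale."""
    foods = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    vals = [f(x) for x in foods]
    trials = [0] * n_food
    best_val, best = min(zip(vals, foods))
    for _ in range(iters):
        for phase in ("employed", "onlooker"):
            for i in range(n_food):
                if phase == "onlooker":
                    # fitness-proportional re-selection (smaller value = fitter)
                    weights = [1.0 / (1.0 + v) for v in vals]
                    i = random.choices(range(n_food), weights)[0]
                j = random.choice([m for m in range(n_food) if m != i])
                k = random.randrange(dim)
                cand = list(foods[i])
                cand[k] += random.uniform(-1, 1) * (foods[i][k] - foods[j][k])
                cand[k] = min(max(cand[k], lo), hi)
                cv = f(cand)
                if cv < vals[i]:
                    foods[i], vals[i], trials[i] = cand, cv, 0
                else:
                    trials[i] += 1
        for i in range(n_food):  # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(lo, hi) for _ in range(dim)]
                vals[i], trials[i] = f(foods[i]), 0
        cur_val, cur = min(zip(vals, foods))
        if cur_val < best_val:
            best_val, best = cur_val, list(cur)
    return best_val, best

# Example: minimize a convex quadratic as a stand-in cost function.
print(abc_minimize(lambda x: sum((xi - 3) ** 2 for xi in x), dim=4, lo=-10, hi=10))
```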

  4. Scheduling, revenue management, and fairness in an academic-hospital radiology division.

    PubMed

    Baum, Richard; Bertsimas, Dimitris; Kallus, Nathan

    2014-10-01

Physician staff of academic hospitals today practice in several geographic locations including their main hospital. This is referred to as the extended campus. With extended campuses expanding, the growing complexity of a single division's schedule means that a naive approach to scheduling compromises revenue. Moreover, it may produce an unfair allocation of individual revenue, of desirable or burdensome assignments, and of the extent to which the preferences of each individual are met. This has adverse consequences for incentivization and employee satisfaction and is simply against business policy. We identify the daily scheduling of physicians in this context as an operational problem that incorporates scheduling, revenue management, and fairness. Noting the previous success of operations research and optimization in each of these disciplines, we propose a simple unified optimization formulation of this scheduling problem using mixed-integer optimization. Through a study of implementing the approach at the Division of Angiography and Interventional Radiology at the Brigham and Women's Hospital, which is directed by one of the authors, we exemplify the flexibility of the model to adapt to specific applications, the tractability of solving the model in practical settings, and the significant impact of the approach, most notably in increasing revenue by 8.2% over previous operating revenue while adhering strictly to codified fairness and objectivity. We found that the investment in implementing such a system is far outweighed by the large potential revenue increase and the other benefits outlined. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  5. Deep Space Network Scheduling Using Evolutionary Computational Methods

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.

    2007-01-01

    The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-off of the scheduling problem and provide insight into a conflict resolution process.

  6. Enhanced Specification and Verification for Timed Planning

    DTIC Science & Technology

    2009-02-28

Scheduling Problem: The job-shop scheduling problem (JSSP) is a generic resource allocation problem in which common resources ("machines") are required ... interleaving of all processes P_i with the non-delay and mutual exclusion constraints: JSSP ≙ |||_{0<i≤n} P_i, where mutual-exclusion(JSSP) ... For every complete execution of JSSP (which terminates), its associated schedule S is a feasible schedule. An optimal schedule is a trace of JSSP with the minimum ending

  7. Successful multi-technology NO{sub x} reduction project experience at New England Power Company - Salem Harbor station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fynan, G.A.; Sload, A.; Adamson, E.J.

This paper presents the successes and lessons learned during recent low-NOx burner and SNCR projects on generating units at New England Power's Salem Harbor Generating Station. The principals involved in the project were New England Power Company, New England Power Service Company, Stone and Webster Engineering Corp., and Deutsche-Babcock Riley Inc. One unit was retrofitted with 16 Riley CCV burners with an OFA system, the other with 12 low-NOx burners only. In addition to the burners, an SNCR system was also installed on three units. Since the burner systems are interdependent (SNCR was treated separately during the design phases and optimized along with the burner systems), close cooperation during the design stages was essential to ensuring a successful installation, startup, and optimization. This paper presents the coordinated effort put forth by each company toward this goal, with the hope of assisting others who may be planning a similar effort. A summary of the operating results is also presented. The up-front teamwork and advance planning that went into the design stages of the project resulted in a number of successful outcomes, e.g., scanner reliability, a properly operating oil supply system, compatibility of the burners and burner-front oil system with the new Burner Management System (BMS), reliable first-attempt burner ignition, and more. Advance planning facilitated pre-outage work and factored into keeping schedules and budgets on track.

  8. Real-time Tumor Oxygenation Changes After Single High-dose Radiation Therapy in Orthotopic and Subcutaneous Lung Cancer in Mice: Clinical Implication for Stereotactic Ablative Radiation Therapy Schedule Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Changhoon; Hong, Beom-Ju; Bok, Seoyeon

Purpose: To investigate the serial changes of tumor hypoxia in response to single high-dose irradiation by various clinical and preclinical methods, to propose an optimal fractionation schedule for stereotactic ablative radiation therapy. Methods and Materials: Syngeneic Lewis lung carcinomas were grown either orthotopically or subcutaneously in C57BL/6 mice and irradiated with a single dose of 15 Gy to mimic stereotactic ablative radiation therapy used in the clinic. Serial [18F]-misonidazole (F-MISO) positron emission tomography (PET) imaging, pimonidazole fluorescence-activated cell sorting analyses, hypoxia-responsive element-driven bioluminescence, and Hoechst 33342 perfusion were performed before irradiation (day −1), at 6 hours (day 0), and 2 (day 2) and 6 (day 6) days after irradiation for both subcutaneous and orthotopic lung tumors. For F-MISO, the tumor/brain ratio was analyzed. Results: Hypoxic signals were too low to quantitate for orthotopic tumors using F-MISO PET or hypoxia-responsive element-driven bioluminescence imaging. In subcutaneous tumors, the maximum tumor/brain ratio was 2.87 ± 0.483 at day −1, 1.67 ± 0.116 at day 0, 2.92 ± 0.334 at day 2, and 2.13 ± 0.385 at day 6, indicating that tumor hypoxia was decreased immediately after irradiation and had returned to the pretreatment levels at day 2, followed by a slight decrease by day 6 after radiation. Pimonidazole analysis also revealed similar patterns. Using Hoechst 33342 vascular perfusion dye, CD31, and cleaved caspase 3 co-immunostaining, we found a rapid and transient vascular collapse, which might have resulted in poor intratumor perfusion of the F-MISO PET tracer or pimonidazole delivered at day 0, leading to decreased hypoxic signals at day 0 by PET or pimonidazole analyses. Conclusions: We found tumor hypoxia levels decreased immediately after delivery of a single dose of 15 Gy, had returned to the pretreatment levels 2 days after irradiation, and had decreased slightly by day 6. Our results indicate that single high-dose irradiation can produce a rapid, but reversible, vascular collapse in tumors.

  9. Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.

    1995-01-01

A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.

  10. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithm, give examples of its use, and present data regarding its potential benefits to the air traffic system.

  11. Asymptotic analysis of SPTA-based algorithms for no-wait flow shop scheduling problem with release dates.

    PubMed

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms.

  12. Asymptotic Analysis of SPTA-Based Algorithms for No-Wait Flow Shop Scheduling Problem with Release Dates

    PubMed Central

    Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang

    2014-01-01

We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
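
    For intuition, the "shortest processing time among available jobs" dispatch idea underlying SPTA can be shown on a single machine with release dates. The Python below is that illustration only, not the papers' no-wait flow-shop algorithms; names are hypothetical.

```python
import heapq

def spt_available(jobs):
    """Dispatch sketch of the SPT-available rule: whenever the machine
    frees up, start the shortest job already released.  jobs is a list
    of (release_date, processing_time) pairs; returns processing order."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    ready, order = [], []
    t, i = 0.0, 0
    while i < len(by_release) or ready:
        while i < len(by_release) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(ready, (jobs[j][1], j))
            i += 1
        if not ready:                       # idle until the next release
            t = jobs[by_release[i]][0]
            continue
        p, j = heapq.heappop(ready)
        order.append(j)
        t += p
    return order

print(spt_available([(0, 5), (1, 2), (2, 9), (6, 1)]))  # -> [0, 1, 3, 2]
```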

  13. Data-Driven Robust M-LS-SVR-Based NARX Modeling for Estimation and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking.

    PubMed

    Zhou, Ping; Guo, Dongwei; Wang, Hong; Chai, Tianyou

    2017-09-29

Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of the modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effect caused by the fluctuation of data in the BF process. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.

  14. Data-Driven Robust M-LS-SVR-Based NARX Modeling for Estimation and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Guo, Dongwei; Wang, Hong

Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of the modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effect caused by the fluctuation of data in the BF process. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.

  15. Data-Driven Robust M-LS-SVR-Based NARX Modeling for Estimation and Control of Molten Iron Quality Indices in Blast Furnace Ironmaking

    DOE PAGES

    Zhou, Ping; Guo, Dongwei; Wang, Hong; ...

    2017-09-29

Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on reliable measurement of molten iron quality (MIQ) indices, which is not feasible using conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions to modeling can be properly distinguished, and thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index of the modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient of trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can efficiently eliminate the adverse effect caused by the fluctuation of data in the BF process. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.

  16. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
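
    The ant construction phase of such an ACO heuristic can be sketched compactly. The Python below shows only how one ant samples a job sequence from pheromone and heuristic desirability matrices; vehicle capacities, distribution costs, and pheromone updates are omitted, and all names are assumptions rather than the paper's notation.

```python
import random

def construct_sequence(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Build one ant's job sequence: at each step, pick the next job with
    probability proportional to pheromone^alpha * heuristic^beta on the
    (previous job -> next job) edge.  Both matrices are n x n with
    strictly positive entries; entry [i][j] scores appending j after i."""
    n = len(pheromone)
    current = random.randrange(n)
    seq, remaining = [current], set(range(n)) - {current}
    while remaining:
        cand = sorted(remaining)
        weights = [pheromone[current][j] ** alpha * heuristic[current][j] ** beta
                   for j in cand]
        current = random.choices(cand, weights)[0]
        seq.append(current)
        remaining.remove(current)
    return seq
```

    In a full ACO loop, many such sequences are constructed per iteration, evaluated against the joint production-plus-distribution objective, and the best ones deposit pheromone to bias later ants.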

  17. Transmission Scheduling and Routing Algorithms for Delay Tolerant Networks

    NASA Technical Reports Server (NTRS)

    Dudukovich, Rachel; Raible, Daniel E.

    2016-01-01

    The challenges of data processing, transmission scheduling and routing within a space network present a multi-criteria optimization problem. Long delays, intermittent connectivity, asymmetric data rates and potentially high error rates make traditional networking approaches unsuitable. The delay tolerant networking architecture and protocols attempt to mitigate many of these issues, yet transmission scheduling is largely manually configured and routes are determined by a static contact routing graph. A high level of variability exists among the requirements and environmental characteristics of different missions, some of which may allow for the use of more opportunistic routing methods. In all cases, resource allocation and constraints must be balanced with the optimization of data throughput and quality of service. Much work has been done researching routing techniques for terrestrial-based challenged networks in an attempt to optimize contact opportunities and resource usage. This paper examines several popular methods to determine their potential applicability to space networks.

  18. A Comparison of the DISASTER (Trademark) Scheduling Software with a Simultaneous Scheduling Algorithm for Minimizing Maximum Tardiness in Job Shops

    DTIC Science & Technology

    1993-09-01

    goal (Heizer, Render, and Stair, 1993:94). Integer Programming. Integer programming is a general-purpose approach used to optimally solve job shop... Scheduling," Operations Research Journal, 29, No. 4: 646-667 (July-August 1981). Heizer, Jay, Barry Render and Ralph M. Stair, Jr. Production and Operations

  19. Quantum annealing with parametrically driven nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Puri, Shruti

    While progress has been made towards building Ising machines to solve hard combinatorial optimization problems, quantum speedups have so far been elusive. Furthermore, protecting annealers against decoherence and achieving long-range connectivity remain important outstanding challenges. With the hope of overcoming these challenges, I introduce a new paradigm for quantum annealing that relies on continuous variable states. Unlike the more conventional approach based on two-level systems, in this approach, quantum information is encoded in two coherent states that are stabilized by parametrically driving a nonlinear resonator. I will show that a fully connected Ising problem can be mapped onto a network of such resonators, and outline an annealing protocol based on adiabatic quantum computing. During the protocol, the resonators in the network evolve from vacuum to coherent states representing the ground state configuration of the encoded problem. In short, the system evolves between two classical states following non-classical dynamics. As will be supported by numerical results, this new annealing paradigm leads to superior noise resilience. Finally, I will discuss a realistic circuit QED realization of an all-to-all connected network of parametrically driven nonlinear resonators. The continuous variable nature of the states in the large Hilbert space of the resonator provides new opportunities for exploring quantum phase transitions and non-stoquastic dynamics during the annealing schedule.

  20. 75 FR 14103 - Version One Regional Reliability Standard for Resource and Demand Balancing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-24

    ... meant to maintain scheduled frequency and avoid loss of firm load following transmission or generation... capacity is available at all times to maintain scheduled frequency, and avoid loss of firm load following... the possibility that firm load could be shed due to the loss of a single element on the system...

  1. Diagnosing Autism Spectrum Disorders in Adults: The Use of Autism Diagnostic Observation Schedule (ADOS) Module 4

    ERIC Educational Resources Information Center

    Bastiaansen, Jojanneke A.; Meffert, Harma; Hein, Simone; Huizinga, Petra; Ketelaars, Cees; Pijnenborg, Marieke; Bartels, Arnold; Minderaa, Ruud; Keysers, Christian; de Bildt, Annelies

    2011-01-01

    Autism Diagnostic Observation Schedule (ADOS) module 4 was investigated in an independent sample of high-functioning adult males with an autism spectrum disorder (ASD) compared to three specific diagnostic groups: schizophrenia, psychopathy, and typical development. ADOS module 4 proves to be a reliable instrument with good predictive value. It…

  2. Scheduling time-critical graphics on multiple processors

    NASA Technical Reports Server (NTRS)

    Meyer, Tom W.; Hughes, John F.

    1995-01-01

    This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.

  3. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.
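
    The recovery exploits the redundancy of the AES key schedule: every round-key word is derived deterministically from earlier words, so surviving bytes in one round constrain candidate bytes in the others. The sketch below reproduces the standard AES-128 expansion that supplies those constraints (this is the textbook algorithm, not the paper's recovery code).

        def gf_mul(a, b):
            """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
            p = 0
            while b:
                if b & 1:
                    p ^= a
                a <<= 1
                if a & 0x100:
                    a ^= 0x11B
                b >>= 1
            return p

        def sbox(x):
            """AES S-box: multiplicative inverse followed by the affine map."""
            inv = next((y for y in range(1, 256) if gf_mul(x, y) == 1), 0)
            s = 0
            for i in range(8):
                bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
                       ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
                s |= bit << i
            return s

        RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

        def expand_key(key):
            """AES-128 key schedule: 44 four-byte words from a 16-byte key."""
            w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
            for i in range(4, 44):
                t = list(w[i - 1])
                if i % 4 == 0:
                    t = t[1:] + t[:1]             # RotWord
                    t = [sbox(b) for b in t]      # SubWord
                    t[0] ^= RCON[i // 4 - 1]
                # this XOR relation is the redundancy a recovery algorithm
                # checks decayed candidate bytes against
                w.append([a ^ b for a, b in zip(w[i - 4], t)])
            return w

        print(expand_key(bytes(16))[43])   # last word of the all-zero-key schedule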

  4. A randomized phase 3 study on the optimization of the combination of bevacizumab with FOLFOX/OXXEL in the treatment of patients with metastatic colorectal cancer-OBELICS (Optimization of BEvacizumab scheduLIng within Chemotherapy Scheme).

    PubMed

    Avallone, Antonio; Piccirillo, Maria Carmela; Aloj, Luigi; Nasti, Guglielmo; Delrio, Paolo; Izzo, Francesco; Di Gennaro, Elena; Tatangelo, Fabiana; Granata, Vincenza; Cavalcanti, Ernesta; Maiolino, Piera; Bianco, Francesco; Aprea, Pasquale; De Bellis, Mario; Pecori, Biagio; Rosati, Gerardo; Carlomagno, Chiara; Bertolini, Alessandro; Gallo, Ciro; Romano, Carmela; Leone, Alessandra; Caracò, Corradina; de Lutio di Castelguidone, Elisabetta; Daniele, Gennaro; Catalano, Orlando; Botti, Gerardo; Petrillo, Antonella; Romano, Giovanni M; Iaffaioli, Vincenzo R; Lastoria, Secondo; Perrone, Francesco; Budillon, Alfredo

    2016-02-08

    Despite improvements in diagnosis and treatment, colorectal cancer (CRC) is the second cause of cancer deaths in both sexes. Therefore, research in this field remains of great interest. The approval of bevacizumab, a humanized anti-vascular endothelial growth factor (VEGF) monoclonal antibody, in combination with fluoropyrimidine-based chemotherapy in the treatment of metastatic CRC has changed oncology practice in this disease. However, the efficacy of bevacizumab-based treatment has thus far been rather modest. Efforts are ongoing to understand the best way to combine bevacizumab with chemotherapy, and to identify valid predictive biomarkers of benefit to avoid unnecessary and costly therapy for nonresponding patients. The BRANCH study in high-risk locally advanced rectal cancer patients showed that varying the bevacizumab schedule may impact the feasibility and efficacy of chemo-radiotherapy. OBELICS is a multicentre, open-label, randomised phase 3 trial comparing in mCRC patients two treatment arms (1:1): standard concomitant administration of bevacizumab with chemotherapy (mFOLFOX/OXXEL regimen) vs experimental sequential bevacizumab given 4 days before chemotherapy, as first or second treatment line. The primary end point is the objective response rate (ORR) measured according to RECIST criteria. A sample size of 230 patients was calculated to allow reliable assessment in all plausible first/second-line case-mix conditions, with an 80% statistical power and a 2-sided alpha error of 0.05. Secondary endpoints are progression-free survival (PFS), overall survival (OS), toxicity and quality of life. The evaluation of the potential predictive role of several circulating biomarkers (circulating endothelial cells and progenitors, VEGF and VEGF-R SNPs, cytokines, microRNAs, free circulating DNA), as well as the value of the early [(18)F]-fluorodeoxyglucose positron emission tomography (FDG-PET) response, are the objectives of the translational project. Overall, this study could optimize bevacizumab scheduling in combination with chemotherapy in mCRC patients. Moreover, correlative studies could improve knowledge of the mechanisms by which bevacizumab enhances the effect of chemotherapy and could identify early predictors of response. EudraCT Number: 2011-004997-27. TRIAL REGISTRATION: ClinicalTrials.gov number, NCT01718873.

  5. Minimizing Project Cost by Integrating Subcontractor Selection Decisions with Scheduling

    NASA Astrophysics Data System (ADS)

    Biruk, Sławomir; Jaśkowski, Piotr; Czarnigowska, Agata

    2017-10-01

    Subcontracting has been a worldwide practice in the construction industry. It enables construction enterprises to focus on their core competences and, at the same time, makes complex projects possible to deliver. Since general contractors bear full responsibility for the works carried out by their subcontractors, it is their task and their risk to select the right subcontractor for a particular work. Although subcontractor management is acknowledged to significantly affect construction project performance, current practices and past research deal with subcontractor management and scheduling separately. The proposed model aims to support subcontracting decisions by integrating subcontractor selection with scheduling, enabling the general contractor to select the optimal combination of subcontractors and own crews for all work packages of the project. The model allows for the interactions between the subcontractors and their impacts on the overall project performance in terms of cost and, indirectly, time and quality. The model is intended to be used at the general contractor’s bid preparation stage. The authors argue that subcontracting decisions should be taken in a two-stage process. The first stage is prequalification - provision of a short list of capable and reliable subcontractors; this stage is not the focus of the paper. The resulting pool of available resources is divided into two subsets: subcontractors, and the general contractor’s in-house crews. Once it has been defined, the next stage is to assign these resources to the work packages that, bound by fixed precedence constraints, form the project’s network diagram. Each package can be delivered by the general contractor’s crew or one of the potential subcontractors, at a specific time and cost. Particular crews and subcontractors can be contracted for more than one package, but not at the same time. Other constraints include the predefined project completion date (the project is not allowed to take longer) and the maximum total value of subcontracted work. The problem is modelled as a mixed binary linear program that minimizes project cost. It can be solved using universal solvers (e.g. LINGO, AIMMS, CPLEX, MATLAB with the Optimization Toolbox, etc.). However, developing a dedicated decision-support tool would facilitate practical applications. To illustrate the idea of the model, the authors present a numerical example to find the optimal set of resources allocated to a project.
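
    The mixed binary linear program can be sketched compactly with an off-the-shelf solver. The toy model below (hypothetical data, PuLP with its bundled CBC solver) assigns each work package to one resource and enforces the completion deadline along each precedence chain; the paper's full model additionally prevents a resource from working on two packages at once and caps the value of subcontracted work.

        import pulp

        # hypothetical data: A and B precede C (two chains: A-C and B-C)
        packages  = ["A", "B", "C"]
        resources = ["own_crew", "sub1", "sub2"]
        cost = {"A": {"own_crew": 10, "sub1": 8,  "sub2": 9},
                "B": {"own_crew": 12, "sub1": 11, "sub2": 9},
                "C": {"own_crew": 15, "sub1": 13, "sub2": 14}}
        dur  = {"A": {"own_crew": 5,  "sub1": 7,  "sub2": 6},
                "B": {"own_crew": 4,  "sub1": 5,  "sub2": 6},
                "C": {"own_crew": 6,  "sub1": 8,  "sub2": 7}}
        paths = [["A", "C"], ["B", "C"]]   # precedence chains of the network
        deadline = 13

        m = pulp.LpProblem("subcontractor_selection", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("x", [(p, r) for p in packages for r in resources],
                                  cat="Binary")
        m += pulp.lpSum(cost[p][r] * x[p, r] for p in packages for r in resources)
        for p in packages:                 # each package gets exactly one resource
            m += pulp.lpSum(x[p, r] for r in resources) == 1
        for path in paths:                 # every precedence chain meets the deadline
            m += pulp.lpSum(dur[p][r] * x[p, r] for p in path for r in resources) <= deadline

        m.solve(pulp.PULP_CBC_CMD(msg=False))
        print({p: r for p in packages for r in resources if x[p, r].value() == 1},
              "cost =", pulp.value(m.objective))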

  6. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao

    In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time-series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete-event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision making can benefit not only from known time-series relationships among measured signals but also from known event-sequence relationships among generated events. The knowledge available at both the time-series and discrete-event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.

  7. Orbit transfer vehicle engine study, phase A extension. Volume 2A: Study results

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Engine trade studies and systems analyses leading to a baseline engine selection for an advanced expander cycle engine are discussed, with emphasis on: (1) performance optimization of advanced expander cycle engines in the 10 to 20K pound thrust range; (2) selection of a recommended advanced expander engine configuration based on maximized performance and minimized mission risk, and definition of the components for this configuration; (3) characterization of the low-thrust adaptation requirements and performance for the staged combustion engine; (4) generation of a suggested safety and reliability approach for OTV engines independent of engine cycle; (5) definition of program risk relationships between expander and staged combustion cycle engines; and (6) development of schedules and costs for the DDT&E, production, and operation phases of the 10K pound thrust expander engine program.

  8. Test-Retest Reliability of Innovated Strength Tests for Hip Muscles

    PubMed Central

    Meyer, Christophe; Corten, Kristoff; Wesseling, Mariska; Peers, Koen; Simon, Jean-Pierre; Jonkers, Ilse; Desloovere, Kaat

    2013-01-01

    The burden of hip muscle weakness and its relation to other impairments has been well documented. A reliable method for clinical assessment of hip muscle function is therefore a prerequisite for the design and implementation of a proper strengthening program. Motor-driven dynamometry has been widely accepted as the gold standard for lower-limb muscle strength assessment but has mainly been applied to the knee joint. Studies focusing on the hip joint are less exhaustive and somewhat discrepant with regard to optimal participant position, consequently influencing outcome measures. Thus, we aimed to develop a standardized test setup for the assessment of hip muscle strength, i.e., flexors/extensors and abductors/adductors, with improved participant stability, and to define its psychometric characteristics. Eighteen participants performed unilateral isokinetic and isometric contractions of the hip muscles in the sagittal and coronal planes on two separate occasions. Peak torque and normalized peak torque were measured for each contraction. Relative and absolute measures of reliability were calculated using the intraclass correlation coefficient and the standard error of measurement, respectively. Results from this study revealed higher levels of between-day reliability of isokinetic/isometric hip abduction/flexion peak torque compared to the existing literature. The least reliable measures were found for hip extension and adduction, which could be explained by a less efficient stabilization technique. Our study additionally provides a first set of reference normalized data which can be used in future research. PMID:24260550

  9. Analysis of Salinity Intrusion in the San Francisco Bay-Delta Using a GA-Optimized Neural Net, and Application of the Model to Prediction in the Elkhorn Slough Habitat

    NASA Astrophysics Data System (ADS)

    Thompson, D. E.; Rajkumar, T.

    2002-12-01

    The San Francisco Bay Delta is a large hydrodynamic complex that incorporates the Sacramento and San Joaquin Estuaries, the Suisun Marsh, and the San Francisco Bay proper. Competition exists for the use of this extensive water system from the fisheries industry, the agricultural industry, and the marine and estuarine animal species within the Delta. As tidal fluctuations occur, more saline water pushes upstream, allowing fish to migrate beyond the Suisun Marsh for breeding and habitat occupation. However, the agriculture industry does not want extensive salinity intrusion to impact water quality for human and plant consumption. The balance is regulated by pumping stations located along the estuaries and reservoirs, whereby flushing of fresh water keeps the saline intrusion at bay. The pumping schedule is driven by data collected at various locations within the Bay Delta and by numerical models that predict the salinity intrusion as part of a larger model of the system. The Interagency Ecological Program (IEP) for the San Francisco Bay / Sacramento-San Joaquin Estuary collects, monitors, and archives the data, and the Department of Water Resources provides a numerical model simulation (DSM2) from which predictions are made that drive the pumping schedule. A problem with DSM2 is that the numerical simulation takes roughly 16 hours to complete a prediction. We have created a neural net, optimized with a genetic algorithm, that takes as input the archived data from multiple gauging stations and predicts stage, salinity, and flow at the Carquinez Straits (at the downstream end of the Suisun Marsh). This model seems to be robust in its predictions and operates much faster than the current numerical DSM2 model. Because the Bay-Delta is strongly tidally driven, we used both Principal Component Analysis and Fast Fourier Transforms to discover dominant features within the IEP data. We then filtered out the dominant tidal forcing to discover non-primary tidal effects, and used this to enhance the neural network by mapping input-output relationships in a more efficient manner. Furthermore, the neural network implicitly incorporates both the hydrodynamic and water quality models into a single predictive system. Although our model has not yet been enhanced to demonstrate improved pumping schedules, it has the potential to support better decision-making procedures that may then be implemented by State agencies if desired. Our intention is now to use our calibrated Bay-Delta neural model in the smaller Elkhorn Slough complex near Monterey Bay, where no such hydrodynamic model currently exists. At the Elkhorn Slough, we are fusing the neural net model of tidally-driven flow with in situ flow data and airborne and satellite remote sensing data. These further constrain the behavior of the model in predicting the longer-term health and future of this vital estuary. In particular, we are using visible data to explore the effects of the sediment plume that washes into Monterey Bay, and infrared data and thermal emissivities to characterize the plant habitat along the margins of the Slough as salinity intrusion and sediment removal change the boundary of the estuary. The details of the Bay-Delta neural net model and its application to the Elkhorn Slough are presented in this paper.
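
    The tidal pre-filtering step can be sketched with a simple spectral mask: transform the stage record, zero the bins in the diurnal and semidiurnal bands, and invert. The fragment below (numpy, synthetic data, band edges chosen for illustration) is a minimal stand-in for the filtering the authors describe, not their actual pipeline.

        import numpy as np

        def remove_tidal_band(signal, dt_hours, bands=((11.5, 13.0), (23.0, 26.0))):
            """Suppress the dominant semidiurnal and diurnal tidal constituents
            by zeroing the corresponding FFT bins, leaving the non-tidal
            residual that carries the secondary effects."""
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=dt_hours)   # cycles per hour
            for lo, hi in bands:                               # bands given as periods (h)
                mask = (freqs > 1.0 / hi) & (freqs < 1.0 / lo)
                spec[mask] = 0.0
            return np.fft.irfft(spec, n=len(signal))

        # toy stage series: a semidiurnal-like tide plus a slow modulation
        t = np.arange(0, 24 * 60, 1.0)                         # hourly samples, 60 days
        slow = 0.3 * np.sin(2 * np.pi * t / 354.0)
        stage = 1.5 * np.sin(2 * np.pi * t / 12.0) + slow
        residual = remove_tidal_band(stage, dt_hours=1.0)
        print(np.abs(residual - slow).max())   # small: tide removed, slow signal kept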

  10. Incentive-compatible demand-side management for smart grids based on review strategies

    NASA Astrophysics Data System (ADS)

    Xu, Jie; van der Schaar, Mihaela

    2015-12-01

    Demand-side load management is able to significantly improve the energy efficiency of smart grids. Since the electricity production cost depends on the aggregate energy usage of multiple consumers, an important incentive problem emerges: self-interested consumers want to increase their own utilities by consuming more than the socially optimal amount of energy during peak hours since the increased cost is shared among the entire set of consumers. To incentivize self-interested consumers to take the socially optimal scheduling actions, we design a new class of protocols based on review strategies. These strategies work as follows: first, a review stage takes place in which a statistical test is performed based on the daily prices of the previous billing cycle to determine whether or not the other consumers schedule their electricity loads in a socially optimal way. If the test fails, the consumers trigger a punishment phase in which, for a certain time, they adjust their energy scheduling in such a way that everybody in the consumer set is punished due to an increased price. Using a carefully designed protocol based on such review strategies, consumers then have incentives to take the socially optimal load scheduling to avoid entering this punishment phase. We rigorously characterize the impact of deploying protocols based on review strategies on the system's as well as the users' performance and determine the optimal design (optimal billing cycle, punishment length, etc.) for various smart grid deployment scenarios. Even though this paper considers a simplified smart grid model, our analysis provides important and useful insights for designing incentive-compatible demand-side management schemes based on aggregate energy usage information in a variety of practical scenarios.

  11. Optimizing nursing care by integrating theory-driven evidence-based practice.

    PubMed

    Pipe, Teri Britt

    2007-01-01

    An emerging challenge for nursing leadership is how to convey the importance of both evidence-based practice (EBP) and theory-driven care in ensuring patient safety and optimizing outcomes. This article describes a specific example of a leadership strategy based on Rosswurm and Larrabee's model for change to EBP, which was effective in aligning the processes of EBP and theory-driven care.

  12. Optimization and Flight Schedules of Pioneer Routes in Papua Province

    NASA Astrophysics Data System (ADS)

    Ronting, Y.; Adisasmita, S. A.; Hamid, S.; Hustim, M.

    2018-04-01

    The province of Papua has a very varied topography, ranging from swampy lowlands and hills to plateaus and steep hills. The total land area is 410,660 km2, comprising 28 counties and one city, 389 districts, and 5,420 villages. The population of Papua Province in 2017 was 3,265,202 people, with an average growth of 4.21% per year. Transportation service coverage is still low, especially in the mountainous region, which is isolated and can only be reached by air transportation, causing a considerable price disparity between coastal and mountainous areas. The purpose of this paper is to develop route optimization and pioneer flight schedule models as an airbridge. The research is conducted by collecting primary and secondary data, based on field surveys, interviews, discussions with the airport authority and government officials, and information from various agencies. Flight schedules and routes are optimized with the Solver add-in in Microsoft Excel. The analysis yields a more optimal route structure that reduces transportation costs by 7.26%.

  13. Optimal stimulus scheduling for active estimation of evoked brain networks.

    PubMed

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. We show that the problem of scheduling nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.
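
    The greedy strategy can be made concrete with a small Kalman-filter sketch: at each step, choose the probe whose measurement most shrinks the posterior covariance. The fragment below (numpy, toy three-node system, all matrices hypothetical) illustrates that selection rule rather than the authors' full EM-based method.

        import numpy as np

        def greedy_probe_schedule(A, C_list, Q, R, P0, horizon):
            """Greedy probe scheduling sketch: at each step, stimulate the node
            whose measurement matrix C_i minimizes the trace of the posterior
            Kalman covariance (a standard proxy for estimation error)."""
            P = P0.copy()
            schedule = []
            for _ in range(horizon):
                best_i, best_P, best_trace = None, None, np.inf
                for i, C in enumerate(C_list):
                    Ppred = A @ P @ A.T + Q                    # time update
                    S = C @ Ppred @ C.T + R                    # innovation covariance
                    K = Ppred @ C.T @ np.linalg.inv(S)         # Kalman gain
                    Ppost = (np.eye(len(P)) - K @ C) @ Ppred   # measurement update
                    if np.trace(Ppost) < best_trace:
                        best_i, best_P, best_trace = i, Ppost, np.trace(Ppost)
                schedule.append(best_i)
                P = best_P
            return schedule

        # toy 3-state evoked network, one probe site observable per choice
        A = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.0, 0.7]])
        C_list = [np.eye(3)[[i]] for i in range(3)]            # probe node i
        Q = 0.01 * np.eye(3); R = 0.1 * np.eye(1); P0 = np.eye(3)
        print(greedy_probe_schedule(A, C_list, Q, R, P0, horizon=6))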

  14. Optimal stimulus scheduling for active estimation of evoked brain networks

    NASA Astrophysics Data System (ADS)

    Kafashan, MohammadMehdi; Ching, ShiNung

    2015-12-01

    Objective. We consider the problem of optimal probing to learn connections in an evoked dynamic network. Such a network, in which each edge measures an input-output relationship between sites in sensor/actuator-space, is relevant to emerging applications in neural mapping and neural connectivity estimation. Approach. We show that the problem of scheduling nodes to probe (i.e., stimulate) amounts to a problem of optimal sensor scheduling. Main results. By formulating the evoked network in state-space, we show that the solution to the greedy probing strategy has a convenient form and, under certain conditions, is optimal over a finite horizon. We adopt an expectation maximization technique to update the state-space parameters in an online fashion and demonstrate the efficacy of the overall approach in a series of detailed numerical examples. Significance. The proposed method provides a principled means to actively probe time-varying connections in neuronal networks. The overall method can be implemented in real time and is particularly well-suited to applications in stimulation-based cortical mapping in which the underlying network dynamics are changing over time.

  15. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and is among the hardest combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in which each particle evolves by standard PSO; each subpopulation is then updated using different local search schemes, namely variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO-based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP.
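
    A random-key encoding is a common way to let a continuous PSO search over permutations: each particle holds real-valued keys, and the job order is their argsort. The sketch below is a single-population version of that idea (numpy, toy data), omitting the paper's subpopulations, VNS/IIS local search, and EDA sampling.

        import numpy as np

        def makespan(perm, p):
            """Completion time of the last job on the last machine (flow shop)."""
            n_jobs, n_mach = p.shape
            C = np.zeros((n_jobs, n_mach))
            for i, j in enumerate(perm):
                for m in range(n_mach):
                    C[i, m] = max(C[i - 1, m] if i else 0,
                                  C[i, m - 1] if m else 0) + p[j, m]
            return C[-1, -1]

        def pso_flowshop(p, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            """Random-key PSO sketch: continuous positions are decoded into job
            permutations by argsort (the smallest key is sequenced first)."""
            rng = np.random.default_rng(0)
            n = p.shape[0]
            X = rng.normal(size=(n_particles, n))
            V = np.zeros_like(X)
            pbest = X.copy()
            pbest_f = np.array([makespan(np.argsort(x), p) for x in X])
            g = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, n))
                V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
                X = X + V
                f = np.array([makespan(np.argsort(x), p) for x in X])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = X[improved], f[improved]
                g = pbest[pbest_f.argmin()].copy()
            return np.argsort(g), pbest_f.min()

        p = np.array([[3, 2, 4], [1, 4, 2], [2, 1, 3], [4, 3, 1]])  # jobs x machines
        print(pso_flowshop(p))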

  16. Deformation mechanism of the Cryostat in the CADS Injector II

    NASA Astrophysics Data System (ADS)

    Yuan, Jiandong; Zhang, Bin; Wan, Yuqin; Sun, Guozhen; Bai, Feng; Zhang, Juihui; He, Yuan

    2018-01-01

    Thermal contraction and expansion of the cryostat affect its reliability and stability. To optimize and upgrade the cryostat, we first analyzed the heat transfer in a cryo-vacuum environment from a theoretical point of view. A finite-element simulation of the cryo-vacuum deformation was then implemented. Measurements with a Laser Tracker and a Micro Alignment Telescope were conducted to verify its correctness. The monitored deformations were consistent with the simulated ones. After the predictable deformations in the vertical direction were compensated, the superconducting solenoids and half-wave resonator cavities approached the ideal "zero" position under liquid-helium conditions. This guaranteed the success of the 25 MeV @ 170 μA continuous-wave proton beam of the Chinese accelerator-driven subcritical system (CADS) Injector II. By correlating the vacuum and cryogenic deformations, we demonstrate that the complete deformation is the superposition of atmospheric pressure, gravity, and thermal stress during both cooling down and warming up. The results will benefit the optimization of future cryostat designs.

  17. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT

    PubMed Central

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-01-01

    In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in the opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. Relying on such predictions, a neighbour discovery scheduler is proposed that jointly optimizes discovery latency and power consumption, in order to maximize communication time when contacts are expected with high probability and, at the same time, save power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process. PMID:27827909

  18. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT.

    PubMed

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-11-04

    In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in the opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. Relying on such predictions, a neighbour discovery scheduler is proposed that jointly optimizes discovery latency and power consumption, in order to maximize communication time when contacts are expected with high probability and, at the same time, save power when contacts are expected with low probability. A comprehensive performance evaluation with different sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers optimizing not only neighbour discovery, but the entire communication process.

  19. Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing

    NASA Astrophysics Data System (ADS)

    Suma, T.; Murugesan, R.

    2018-04-01

    The current generation of manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an artificial immune algorithm for subtask industrial robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we use an algorithm based on the artificial immune system. The algorithm is simulated in MATLAB and the results are compared with existing algorithms, showing better performance.

  20. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It tends to escape local minima and therefore reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The experimental results show that it gives optimal solutions, even though it is greedy in nature.
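
    The cumulative distribution such a heuristic inspects can be sketched directly: each operation spreads unit probability over its mobility window (between its ASAP and ALAP control steps), and the per-step sums expose congested steps. The fragment below uses hypothetical mobility windows purely for illustration.

        def distribution_graph(ops):
            """For each operation with mobility window [asap, alap], spread a
            unit of probability uniformly over its feasible control steps, then
            accumulate the expected operator usage per step."""
            n_steps = max(alap for _, alap in ops.values()) + 1
            dist = [0.0] * n_steps
            for asap, alap in ops.values():
                p = 1.0 / (alap - asap + 1)
                for step in range(asap, alap + 1):
                    dist[step] += p
            return dist

        # hypothetical mobility windows from ASAP/ALAP analysis
        ops = {"mul1": (0, 1), "mul2": (0, 2), "add1": (1, 2), "add2": (2, 2)}
        print(distribution_graph(ops))   # about [0.83, 1.33, 1.83]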

  1. A three-stage heuristic for harvest scheduling with access road network development

    Treesearch

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald

    2000-01-01

    In this article we present a new model for the scheduling of forest harvesting with spatial and temporal constraints. Our approach is unique in that we incorporate access road network development into the harvest scheduling selection process. Due to the difficulty of solving the problem optimally, we develop a heuristic that consists of a solution construction stage...

  2. A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan

    NASA Astrophysics Data System (ADS)

    Bhongade, A. S.; Khodke, P. M.

    2014-04-01

    Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Although scheduling of such problems is solved using heuristics, available solution approaches can handle only moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such flow-shop manufacturing systems having machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. The GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound of the makespan of such problems is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (with up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain near-optimal makespans.

  3. SPORT: An Algorithm for Divisible Load Scheduling with Result Collection on Heterogeneous Systems

    NASA Astrophysics Data System (ADS)

    Ghatpande, Abhay; Nakazato, Hidenori; Beaumont, Olivier; Watanabe, Hiroshi

    Divisible Load Theory (DLT) is an established mathematical framework to study Divisible Load Scheduling (DLS). However, traditional DLT does not address the scheduling of results back to the source (i.e., result collection), nor does it comprehensively deal with system heterogeneity. In this paper, the DLSRCHETS (DLS with Result Collection on HETerogeneous Systems) problem is addressed. The few papers to date that have dealt with DLSRCHETS proposed simplistic LIFO (Last In, First Out) and FIFO (First In, First Out) types of schedules as solutions. In this paper, a new polynomial-time heuristic algorithm, SPORT (System Parameters based Optimized Result Transfer), is proposed as a solution to the DLSRCHETS problem. With the help of simulations, it is shown that the performance of SPORT is significantly better than that of existing algorithms. The other major contributions of this paper include, for the first time, (a) the derivation of the condition to identify the presence of idle time in a FIFO schedule for two processors, (b) the identification of the limiting condition for the optimality of FIFO and LIFO schedules for two processors, and (c) the introduction of the concept of an equivalent processor in DLS for heterogeneous systems with result collection.

  4. TRU Waste Management Program cost/schedule optimization analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.

    1985-10-01

    The cost/schedule optimization task is a necessary function to ensure that program goals and plans are optimized from a cost and schedule aspect. Results of this study will offer DOE information with which it can establish, within institutional constraints, the most efficient program for the long-term management and disposal of contact-handled transuranic waste (CH-TRU). To this end, a comprehensive review of program cost/schedule tradeoffs has been made to identify any major cost-saving opportunities that may be realized by modification of current program plans. It was decided that all promising scenarios would be explored and institutional limitations to implementation would be described. Since a virtually limitless number of possible scenarios can be envisioned, it was necessary to distill these possibilities into a manageable number of alternatives. The resultant scenarios were described in the cost/schedule strategy and work plan document. Each scenario was compared with the base case: waste processing at the originating site; transport of CH-TRU wastes in TRUPACT; shipment of drums in 6-packs; 25-year stored-waste workoff; WIPP operational 10/88, with all sites shipping to WIPP beginning 10/88; and no processing at WIPP. Major savings were identified in two alternate scenarios: centralize waste processing at INEL, and eliminate rail shipment of TRUPACT. No attempt was made to calculate savings due to combinations of scenarios. 1 ref., 5 figs., 1 tab.

  5. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    The study of deterministic optimal reservoir operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of forecasted inflow is treated as a variable in a short-term reservoir optimal operation model for reducing operational risk. To accomplish this, the concept of Value at Risk (VaR) is first applied to express the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) is proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China are selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, presenting more flexible options for decision makers; the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the proposed model can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
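
    The VaR component reduces to a quantile of simulated losses. Below is a minimal sketch (synthetic shortfall sample, numpy; not the paper's EVT fit, which models the tail beyond a threshold) of the quantity the model constrains.

        import numpy as np

        def value_at_risk(losses, alpha=0.95):
            """Empirical VaR sketch: the loss level exceeded with probability
            1 - alpha, taken here as a quantile of simulated output errors."""
            return np.quantile(losses, alpha)

        # hypothetical Monte Carlo output shortfalls (MW) from inflow-forecast error
        rng = np.random.default_rng(1)
        shortfall = np.maximum(rng.normal(loc=0.0, scale=20.0, size=10_000), 0.0)
        print(f"95% VaR of output shortfall: {value_at_risk(shortfall):.1f} MW")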

  6. Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise

    NASA Astrophysics Data System (ADS)

    Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej

    2010-11-01

    Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop production scheduling with a makespan criterion is presented for a real case of customized flexible furniture production, together with a genetic algorithm for job shop scheduling optimization. Simulation-based inventory control is described for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All three cases are discussed from the optimization, modeling, and learning points of view.

  7. Agile Acceptance Test–Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software

    PubMed Central

    Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-01-01

    Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test–driven development and automated regression testing promotes reliability. Test–driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test–driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test–driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test–driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test–driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test–driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922

  8. Agile Acceptance Test-Driven Development of Clinical Decision Support Advisories: Feasibility of Using Open Source Software.

    PubMed

    Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L

    2018-04-13

    Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization.
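
    The "table as executable requirement" idea carries over directly to ordinary unit-test frameworks. The sketch below mimics one FitNesse decision table with pytest's parametrize; the rule function and its cases are hypothetical stand-ins for the swallowing-assessment advisory logic, not the published configuration.

        import pytest

        def swallow_screen_alert(department, order_route, stroke_suspected, screen_done):
            """Hypothetical stand-in for the CDS rule: alert ED nurses before oral
            medication when stroke is suspected and no swallowing screen is recorded."""
            return (department == "ED" and order_route == "oral"
                    and stroke_suspected and not screen_done)

        # each row is one acceptance-test case, mirroring a FitNesse decision table
        @pytest.mark.parametrize("dept,route,stroke,screened,expected", [
            ("ED",  "oral", True,  False, True),    # should fire
            ("ED",  "oral", True,  True,  False),   # screen already done
            ("ED",  "IV",   True,  False, False),   # not an oral medication
            ("ICU", "oral", True,  False, False),   # outside the applicable setting
        ])
        def test_swallow_screen_alert(dept, route, stroke, screened, expected):
            assert swallow_screen_alert(dept, route, stroke, screened) == expected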

  9. Defense AT and L. Volume 43, Number 2. March-April 2014

    DTIC Science & Technology

    2016-09-16

    are in the public domain and may be reprinted or posted on the Internet. When reprinting, please credit the author and Defense AT&L. Some photos...driven strategies. The next section describes examples of programs initiated with schedule-driven constraints, while the following section discusses...get to full-rate production (FRP), because those numbers can be very high yet require significant post-production costs to repair or add capability

  10. Critical role of bevacizumab scheduling in combination with pre-surgical chemo-radiotherapy in MRI-defined high-risk locally advanced rectal cancer: Results of the BRANCH trial.

    PubMed

    Avallone, Antonio; Pecori, Biagio; Bianco, Franco; Aloj, Luigi; Tatangelo, Fabiana; Romano, Carmela; Granata, Vincenza; Marone, Pietro; Leone, Alessandra; Botti, Gerardo; Petrillo, Antonella; Caracò, Corradina; Iaffaioli, Vincenzo R; Muto, Paolo; Romano, Giovanni; Comella, Pasquale; Budillon, Alfredo; Delrio, Paolo

    2015-10-06

    We have previously shown that an intensified preoperative regimen including oxaliplatin plus raltitrexed and 5-fluorouracil/folinic acid (OXATOM/FUFA) during preoperative pelvic radiotherapy produced promising results in locally advanced rectal cancer (LARC). Preclinical evidence suggests that the scheduling of bevacizumab may be crucial to optimize its combination with chemo-radiotherapy. This non-randomized, non-comparative, phase II study was conducted in MRI-defined high-risk LARC. Patients received three biweekly cycles of OXATOM/FUFA during RT. Bevacizumab was given 2 weeks before the start of chemo-radiotherapy, and on the same day as chemotherapy for 3 cycles (concomitant schedule A) or 4 days prior to the first and second cycles of chemotherapy (sequential schedule B). The primary end point was the pathological complete tumor regression (TRG1) rate. Accrual for the concomitant schedule was terminated early because the number of TRG1 responses (2 out of 16 patients) was statistically inconsistent with the hypothesized activity (30%) to be tested. Conversely, the endpoint was reached with the sequential schedule, and the final TRG1 rate among 46 enrolled patients was 50% (95% CI 35%-65%). Neutropenia was the most common grade ≥ 3 toxicity with both schedules, but it was less pronounced with the sequential than with the concomitant schedule (30% vs. 44%). Postoperative complications occurred in 8/15 (53%) and 13/46 (28%) patients in schedules A and B, respectively. At 5-year follow-up, the probability of PFS and OS was 80% (95% CI, 66%-89%) and 85% (95% CI, 69%-93%), respectively, for the sequential schedule. These results highlight the relevance of bevacizumab scheduling to optimize its combination with preoperative chemo-radiotherapy in the management of LARC.

  11. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, Do-all or Do-across techniques cannot be applied to parallel processing of the simulation, since there exist data dependencies from the end of one iteration to the beginning of the next, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating-point operations, are generated to extract parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.

  12. Hybrid Pareto artificial bee colony algorithm for multi-objective single machine group scheduling problem with sequence-dependent setup times and learning effects.

    PubMed

    Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao

    2016-01-01

    Group scheduling is significant for efficient and cost-effective production systems. However, setup times exist between groups, which should be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of minimizing the makespan and the total weighted tardiness simultaneously. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating steps of a genetic algorithm is proposed for this problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multiobjective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII), and a particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.

  13. Sorption compressor/mechanical expander hybrid refrigeration

    NASA Technical Reports Server (NTRS)

    Jones, J. A.; Britcliffe, M.

    1987-01-01

    Experience with Deep Space Network (DSN) ground-based cryogenic refrigerators has proved the reliability of the basic two-stage Gifford-McMahon helium refrigerator. A very long life cryogenic refrigeration system appears possible by combining this expansion system or a turbo expansion system with a hydride sorption compressor in place of the usual motor driven piston compressor. To test the feasibility of this system, a commercial Gifford-McMahon refrigerator was tested using hydrogen gas as the working fluid. Although no attempt was made to optimize the system for hydrogen operation, the refrigerator developed 1.3 W at 30 K and 6.6 W at 60 K. The results of the test and of theoretical performances of the hybrid compressor coupled to these expansion systems are presented.

  14. Optimal designs for population pharmacokinetic studies of the partner drugs co-administered with artemisinin derivatives in patients with uncomplicated falciparum malaria.

    PubMed

    Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A

    2012-07-11

    Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.

  15. Investigation of schedules for traffic signal timing optimization.

    DOT National Transportation Integrated Search

    2005-01-01

    Traffic signal optimization is recognized as one of the most cost-effective ways to improve urban mobility; however, the extent of the benefits realized could depend significantly on how often traffic signal re-optimization occurs. Using a case study ...

  16. Adaptive critics for dynamic optimization.

    PubMed

    Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar

    2010-06-01

    A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and to the movement patterns of animals in its environment, in order to obtain uniformly spaced snapshots of an animal along its trajectory. Simulation results show that the sleep time of the node determined by the actor-critic yields superior quality of sensory data acquisition and enhanced node longevity. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. On the asymptotic optimality and improved strategies of SPTB heuristic for open-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai

    2014-08-01

    This article investigates the open-shop scheduling problem with the optimality criterion of minimising the sum of quadratic completion times. For this NP-hard problem, the asymptotic optimality of the shortest-processing-time-block (SPTB) heuristic is proven in the limiting sense. Moreover, three different improvements, namely the job-insert scheme, tabu search and a genetic algorithm, are introduced to enhance the quality of the original solution generated by the SPTB heuristic. At the end of the article, a series of numerical experiments demonstrates the convergence of the heuristic, the performance of the improvements and the effectiveness of the quadratic objective.
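
    The objective involved is easy to state in code, and on a single machine an adjacent-interchange argument shows that a shortest-processing-time order minimizes it, which is the intuition behind the SPT-block construction. The sketch below is that single-machine simplification, not the open-shop heuristic itself:

        from itertools import permutations

        def sum_sq_completion(seq):
            # Sum of squared completion times for a single-machine sequence.
            t, total = 0, 0
            for p in seq:
                t += p
                total += t * t
            return total

        jobs = (4, 1, 3, 2)
        spt = tuple(sorted(jobs))
        best = min(permutations(jobs), key=sum_sq_completion)
        print(sum_sq_completion(spt), sum_sq_completion(best))   # SPT attains the optimum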

  18. Optimizing Staffing levels and Schedules for Railroad Dispatching Centers

    DOT National Transportation Integrated Search

    2004-09-01

    This report presents the results of a study to explore approaches to establishing staffing levels and schedules for railroad dispatchers. The : work was conducted as follow-up to a prior study that found fatigue among dispatchers, particularly those ...

  19. A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy

    PubMed Central

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focusing on the dynamic scheduling problem for Earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. A rolling horizon (RH) strategy is proposed that accounts for the independent arrival times and deadlines of the imaging tasks. The strategy is designed with a mixed triggering mode composed of periodic triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling scheme in each interval, dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms that combine the RH strategy with various heuristic algorithms. Finally, the scheduling results of the different algorithms are compared, and extensive experiments demonstrate the efficiency of the presented methods. PMID:23690742
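
    The control flow of the mixed triggering mode can be sketched as a loop that re-optimizes either when a fixed period elapses or when enough new tasks have arrived, solving a static problem over each interval. The trigger thresholds and the placeholder solver below are illustrative reconstructions, not the authors' model:

        def solve_static(tasks, start):
            # Placeholder for the per-interval static problem (an IP in the paper);
            # here it trivially schedules pending tasks in arrival order.
            return [(start, t) for t in tasks]

        def rolling_horizon(arrivals, period=10.0, event_threshold=5):
            # Mixed triggering: re-optimize when the periodic interval elapses
            # (periodic trigger) or when enough new tasks arrive (event trigger).
            schedule, pending, now, next_tick = [], [], 0.0, period
            for arrival, task in arrivals:          # sorted by arrival time
                now = arrival
                pending.append(task)
                if len(pending) >= event_threshold or now >= next_tick:
                    schedule += solve_static(pending, start=now)
                    pending.clear()
                    next_tick = now + period
            if pending:                             # flush the final interval
                schedule += solve_static(pending, start=now)
            return schedule

        print(rolling_horizon([(1.0, "img-A"), (2.0, "img-B"), (12.0, "img-C")]))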

  20. A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.

    PubMed

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    Focusing on the dynamic scheduling problem for Earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. A rolling horizon (RH) strategy is proposed that accounts for the independent arrival times and deadlines of the imaging tasks. The strategy is designed with a mixed triggering mode composed of periodic triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling scheme in each interval, dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms that combine the RH strategy with various heuristic algorithms. Finally, the scheduling results of the different algorithms are compared, and extensive experiments demonstrate the efficiency of the presented methods.

  1. User-Preference-Driven Model Predictive Control of Residential Building Loads and Battery Storage for Demand Response: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin; Baker, Kyri A.; Christensen, Dane T.

    This paper presents a user-preference-driven home energy management system (HEMS) for demand response (DR) with residential building loads and battery storage. The HEMS is based on a multi-objective model predictive control algorithm, where the objectives include energy cost, thermal comfort, and carbon emission. A multi-criterion decision making method originating from social science is used to quickly determine user preferences based on a brief survey and derive the weights of different objectives used in the optimization process. Besides the residential appliances used in the traditional DR programs, a home battery system is integrated into the HEMS to improve the flexibility and reliability of the DR resources. Simulation studies have been performed on field data from a residential building stock data set. Appliance models and usage patterns were learned from the data to predict the DR resource availability. Results indicate the HEMS was able to provide a significant amount of load reduction with less than 20% prediction error in both heating and cooling cases.
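
    The scalarization step at the heart of such a multi-objective formulation is straightforward: the survey-derived preference weights combine the cost, comfort, and emission terms into one objective that the controller minimizes over candidate plans. The sketch below is an assumed minimal form, not the HEMS code itself; the weights and candidate plans are invented:

        def hems_cost(plan, weights):
            # Scalarized objective: energy cost, thermal discomfort, carbon emission.
            w_cost, w_comfort, w_carbon = weights
            return (w_cost * plan["energy_cost"]
                    + w_comfort * plan["discomfort"]
                    + w_carbon * plan["emissions"])

        # Survey-derived preference weights (invented); candidate control plans
        # are assumed to be pre-evaluated by a building/battery model.
        weights = (0.5, 0.3, 0.2)
        candidates = [
            {"name": "precool+discharge", "energy_cost": 1.2, "discomfort": 0.4, "emissions": 0.9},
            {"name": "baseline", "energy_cost": 2.0, "discomfort": 0.1, "emissions": 1.4},
        ]
        print(min(candidates, key=lambda p: hems_cost(p, weights))["name"])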

  2. User-Preference-Driven Model Predictive Control of Residential Building Loads and Battery Storage for Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Xin; Baker, Kyri A; Isley, Steven C

    This paper presents a user-preference-driven home energy management system (HEMS) for demand response (DR) with residential building loads and battery storage. The HEMS is based on a multi-objective model predictive control algorithm, where the objectives include energy cost, thermal comfort, and carbon emission. A multi-criterion decision making method originating from social science is used to quickly determine user preferences based on a brief survey and derive the weights of different objectives used in the optimization process. Besides the residential appliances used in the traditional DR programs, a home battery system is integrated into the HEMS to improve the flexibility and reliability of the DR resources. Simulation studies have been performed on field data from a residential building stock data set. Appliance models and usage patterns were learned from the data to predict the DR resource availability. Results indicate the HEMS was able to provide a significant amount of load reduction with less than 20% prediction error in both heating and cooling cases.

  3. Property-driven functional verification technique for high-speed vision system-on-chip processor

    NASA Astrophysics Data System (ADS)

    Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2017-04-01

    The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. The complexity of vision chip verification is also related to the fact that, in most vision chip design cycles, extensive effort is focused on optimizing chip metrics such as performance, power, and area, while functional verification is not explicitly considered at the earlier stages at which the most consequential decisions are made. In this paper, we propose a semi-automatic property-driven verification technique in which the implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing the simulation and debugging overheads.

  4. Optimization of Microelectronic Devices for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    2000-01-01

    The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments, and communication systems, among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space (searching for optimized performance by repeated fabrication efforts) is infeasible, it does motivate the development of reliable software design tools. These tools require models based on the fundamental physics and mathematics of the device to accurately capture effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly designed data structures. These tools can then be integrated into an optimization environment, and with the memory capacity and computational speed of high-performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications to the electromagnetic modeling of infrared filtering, as well as heterostructure device design, are presented using genetic-algorithm global optimization methods.

  5. Heuristic-based scheduling algorithm for high level synthesis

    NASA Technical Reports Server (NTRS)

    Mohamed, Gulam; Tan, Han-Ngee; Chng, Chew-Lye

    1992-01-01

    A new scheduling algorithm is proposed which uses a combination of a resource utilization chart, a heuristic algorithm that estimates the minimum number of hardware units from operator mobilities, and a list-scheduling technique to achieve fast and near-optimal schedules. The scheduling time of this algorithm is almost independent of the length of the operators' mobilities, as can be seen from the benchmark example presented (a fifth-order digital elliptic wave filter) in which the cycle time was increased from 17 to 18 and then to 21 cycles. The algorithm is implemented in C on a SUN3/60 workstation.
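
    A key quantity in such schedulers is each operator's mobility, the gap between its as-soon-as-possible (ASAP) and as-late-as-possible (ALAP) control steps. The sketch below computes mobilities for a tiny dataflow graph under a unit-latency assumption; it illustrates the standard construction rather than the authors' implementation:

        def asap(graph):
            # Earliest control step for each node of a DAG {node: [predecessors]},
            # assuming unit-latency operators.
            times = {}
            def t(n):
                if n not in times:
                    times[n] = 1 + max((t(p) for p in graph[n]), default=0)
                return times[n]
            for n in graph:
                t(n)
            return times

        def mobility(graph, latest):
            # Mobility = ALAP - ASAP; ALAP is obtained by scheduling the
            # reversed graph and mirroring the result at the deadline.
            earliest = asap(graph)
            rev = {n: [m for m in graph if n in graph[m]] for n in graph}
            alap = {n: latest + 1 - s for n, s in asap(rev).items()}
            return {n: alap[n] - earliest[n] for n in graph}

        g = {"a": [], "b": [], "c": ["a", "b"], "d": []}   # a, b feed c; d is free
        print(mobility(g, latest=2))                        # {'a': 0, 'b': 0, 'c': 0, 'd': 1}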

  6. Shift scheduling model considering workload and worker’s preference for security department

    NASA Astrophysics Data System (ADS)

    Herawati, A.; Yuniartha, D. R.; Purnama, I. L. I.; Dewi, LT

    2018-04-01

    Security departments operate around the clock and, as in the hotel industry, rely on shift scheduling to organize their workers. This research develops a shift scheduling model that considers workers' physical workload, measured with the rating of perceived exertion (RPE) on Borg's scale, and workers' preferences, to accommodate schedule flexibility. The mathematical model is formulated as an integer linear program and yields optimal solutions for simple problems. The resulting shift schedule distributes shifts equally among workers to balance their physical workload and gives workers flexibility in arranging their working hours.
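
    A deliberately tiny stand-in for such a model is sketched below: an exhaustive search over three-day rosters that trades cumulative-workload balance against honored shift preferences. The shifts, effort scores, and preferences are invented, and the exhaustive search replaces the paper's integer linear program purely for illustration:

        from itertools import permutations, product

        workers = ("W1", "W2", "W3")
        shifts = ("morning", "evening", "night")
        workload = {"morning": 3, "evening": 4, "night": 6}   # RPE-style effort scores
        prefers = {"W1": "morning", "W2": "night", "W3": "evening"}

        def score(roster):
            # Penalize unequal cumulative workload; reward honored preferences.
            totals = dict.fromkeys(workers, 0)
            honored = 0
            for day in roster:
                for w, s in zip(workers, day):
                    totals[w] += workload[s]
                    honored += (prefers[w] == s)
            return 2 * (max(totals.values()) - min(totals.values())) - honored

        days = list(permutations(shifts))                 # one worker per shift per day
        best = min(product(days, repeat=3), key=score)    # exhaustive 3-day roster search
        for w in workers:
            print(w, [day[workers.index(w)] for day in best])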

  7. Electric power scheduling: A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.

  8. Kronos Observatory Operations Challenges in a Lean Environment

    NASA Astrophysics Data System (ADS)

    Koratkar, Anuradha; Peterson, Bradley M.; Polidan, Ronald S.

    2003-02-01

    Kronos is a multiwavelength observatory designed to map the accretion disks and environments of supermassive black holes in various settings using the natural intrinsic variability of accretion-driven sources. Kronos is envisaged as a Medium Explorer mission proposed to the NASA Office of Space Science under the Structure and Evolution of the Universe theme. We will achieve the Kronos science objectives by developing cost-effective techniques for obtaining and assimilating data from the research spacecraft and for the subsequent work on the ground. The science operations assumptions for the mission are: (1) the need for flexible scheduling due to the variable nature of the targets, (2) large data volumes but minimal ground station contact, and (3) a very small operations staff. Our first assumption implies that we will have to consider an effective strategy for dynamically reprioritizing the observing schedule to maximize science data acquisition. The flexibility we seek greatly increases the science return of the mission, because variability events can be properly captured. Our second assumption implies that we will have to develop basic on-board analysis strategies to determine which data get downloaded. The small size of the operations staff implies that we need to "automate" as many routine processes of science operations as possible. In this paper we discuss the various solutions that we are considering to optimize our operations and maximize the science return of the observatory.

  9. 77 FR 47680 - Advisory Committee on Reactor Safeguards (ACRS); Meeting of the ACRS Subcommittees on Reliability...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... NUCLEAR REGULATORY COMMISSION Advisory Committee on Reactor Safeguards (ACRS); Meeting of the ACRS Subcommittees on Reliability and PRA and Fukushima; Revision to Notice of Meetings The (ACRS) Subcommittee on Fukushima originally scheduled for the afternoon of August 14, 2012, has been moved to the morning of August...

  10. Structural validity and reliability of the Positive and Negative Affect Schedule (PANAS): evidence from a large Brazilian community sample.

    PubMed

    Carvalho, Hudson W de; Andreoli, Sérgio B; Lara, Diogo R; Patrick, Christopher J; Quintana, Maria Inês; Bressan, Rodrigo A; Melo, Marcelo F de; Mari, Jair de J; Jorge, Miguel R

    2013-01-01

    Positive and negative affect are the two psychobiological-dispositional dimensions reflecting proneness to positive and negative activation that influence the extent to which individuals experience life events as joyful or as distressful. The Positive and Negative Affect Schedule (PANAS) is a structured questionnaire that provides independent indexes of positive and negative affect. This study aimed to validate a Brazilian interview version of the PANAS by means of factor and internal-consistency analysis. A representative community sample of 3,728 individuals residing in the cities of São Paulo and Rio de Janeiro, Brazil, voluntarily completed the PANAS. Exploratory structural equation model analysis was based on maximum likelihood estimation, and reliability was calculated via Cronbach's alpha coefficient. Our results support the hypothesis that the PANAS reliably measures two distinct dimensions of positive and negative affect. The structure and reliability of the Brazilian version of the PANAS are consistent with those of its original version. Taken together, these results attest to the validity of the Brazilian adaptation of the instrument.
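
    Cronbach's alpha, the reliability coefficient used here, follows the textbook formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below applies it to invented item scores and is not the study's analysis code:

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, k_items) array of item scores.
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Invented 5-point responses to a 4-item positive-affect subscale.
        scores = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3], [4, 4, 4, 5]]
        print(round(cronbach_alpha(scores), 3))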

  11. Playing Games with Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2005-01-01

    This paper is concerned with the problem of allocating a unit-capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at any given instant; however, different users may use it at different times. The users have independent, selfish preferences for when, and for how long, they are allocated this resource: they value different access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource.
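
    One standard computational core for problems of this shape, assuming each user submits candidate (start, end, value) requests, is weighted interval scheduling, solvable by a simple dynamic program. The sketch below shows that textbook recurrence; it is not necessarily the formulation adopted in the paper:

        import bisect

        def max_value_schedule(requests):
            # Weighted interval scheduling on a unit-capacity resource:
            # choose non-overlapping (start, end, value) requests of maximum value.
            requests = sorted(requests, key=lambda r: r[1])   # by finish time
            ends = [r[1] for r in requests]
            best = [0] * (len(requests) + 1)
            for i, (s, e, v) in enumerate(requests):
                j = bisect.bisect_right(ends, s, 0, i)        # last compatible request
                best[i + 1] = max(best[i], best[j] + v)
            return best[-1]

        # Users' bids for access windows (start, end, value); windows may conflict.
        print(max_value_schedule([(0, 3, 5), (2, 5, 6), (4, 7, 5), (1, 8, 11)]))  # 11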

  12. A ranking algorithm for spacelab crew and experiment scheduling

    NASA Technical Reports Server (NTRS)

    Grone, R. D.; Mathis, F. H.

    1980-01-01

    The problem of obtaining an optimal or near-optimal schedule for scientific experiments to be performed on Spacelab missions is addressed. The current capabilities in this regard are examined, and a method of ranking experiments in order of difficulty is developed to support the existing software. Experimental results are obtained by applying this method to the sets of experiments corresponding to Spacelab missions 1, 2, and 3. Finally, suggestions are made concerning desirable modifications and features of the second-generation software being developed for this problem.

  13. Hybrid optimal scheduling for intermittent androgen suppression of prostate cancer

    NASA Astrophysics Data System (ADS)

    Hirata, Yoshito; di Bernardo, Mario; Bruchovsky, Nicholas; Aihara, Kazuyuki

    2010-12-01

    We propose a method for achieving an optimal protocol of intermittent androgen suppression for the treatment of prostate cancer. Since the model that reproduces the dynamical behavior of the surrogate tumor marker, prostate specific antigen, is piecewise linear, we can obtain an analytical solution for the model. Based on this, we derive conditions for either stopping or delaying recurrent disease. The solution also provides a design principle for the most favorable schedule of treatment that minimizes the rate of expansion of the malignant cell population.
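
    A minimal way to see how a phase-wise model yields an explicit on/off schedule is to simulate a threshold policy in which the marker decays on therapy, regrows off therapy, and treatment toggles at fixed limits. The rates and thresholds below are invented for illustration and are not the paper's fitted model:

        def simulate_ias(psa0=10.0, upper=10.0, lower=4.0, days=600,
                         decay=0.02, growth=0.01):
            # Threshold policy: PSA decays on therapy and regrows off therapy;
            # treatment toggles when PSA crosses the lower/upper limits.
            psa, on, schedule = psa0, True, [(0, "on")]
            for day in range(1, days + 1):
                psa *= (1 - decay) if on else (1 + growth)
                if on and psa <= lower:
                    on = False
                    schedule.append((day, "off"))
                elif not on and psa >= upper:
                    on = True
                    schedule.append((day, "on"))
            return schedule

        print(simulate_ias())   # alternating (day, "on"/"off") switch points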

  14. Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ela, Erik; O'Malley, Mark

    2016-05-01

    Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, which is a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.
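
    The ramp-capability requirement discussed here has a simple canonical form: each unit's interval-to-interval output change, g[t+1] - g[t], must stay within its ramp-up/ramp-down limits. The feasibility check below is a schematic of that constraint, not the paper's market-clearing model:

        def ramp_feasible(dispatch, ramp_up, ramp_down):
            # dispatch: {unit: [MW output per interval]}. Checks that every
            # interval-to-interval change respects the unit's ramp limits.
            for unit, g in dispatch.items():
                for t in range(len(g) - 1):
                    delta = g[t + 1] - g[t]
                    if delta > ramp_up[unit] or -delta > ramp_down[unit]:
                        return False, (unit, t, delta)
            return True, None

        dispatch = {"gas1": [100, 140, 180], "coal1": [300, 310, 320]}
        limits_up = {"gas1": 50, "coal1": 15}
        limits_down = {"gas1": 50, "coal1": 15}
        print(ramp_feasible(dispatch, limits_up, limits_down))   # (True, None)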

  15. Speech-driven environmental control systems--a qualitative analysis of users' perceptions.

    PubMed

    Judge, Simon; Robertson, Zoë; Hawley, Mark; Enderby, Pam

    2009-05-01

    To explore users' experiences and perceptions of speech-driven environmental control systems (SPECS) as part of a larger project aiming to develop a new SPECS. The motivation for this part of the project was to add to the evidence base for the use of SPECS and to determine the key design specifications for a new speech-driven system from a user's perspective. Semi-structured interviews were conducted with 12 users of SPECS from around the United Kingdom. These interviews were transcribed and analysed using a qualitative method based on framework analysis. Reliability is the main influence on the use of SPECS. All the participants gave examples of occasions when their speech-driven system was unreliable; in some instances, this unreliability was reported as not being a problem (e.g., for changing television channels); however, it was perceived as a problem for more safety critical functions (e.g., opening a door). Reliability was cited by participants as the reason for using a switch-operated system as back up. Benefits of speech-driven systems focused on speech operation enabling access when other methods were not possible; quicker operation and better aesthetic considerations. Overall, there was a perception of increased independence from the use of speech-driven environmental control. In general, speech was considered a useful method of operating environmental controls by the participants interviewed; however, their perceptions regarding reliability often influenced their decision to have backup or alternative systems for certain functions.

  16. Observing strategies for future solar facilities: the ATST test case

    NASA Astrophysics Data System (ADS)

    Uitenbroek, H.; Tritschler, A.

    2012-12-01

    Traditionally, solar observations have been scheduled and performed very differently from night-time efforts, in particular because we have been observing the Sun for a long time, so that new combinations of observables are required to make progress, and because solar physics observations are often event-driven on time scales of hours to days. With the proposal pressure that is expected for new large-aperture facilities, we can no longer afford the time spent on custom setups and will have to rethink our scheduling and operations. We discuss our efforts at Sac Peak in preparing for this new era and outline the planned scheduling and operations approach for the ATST in particular.

  17. Evaluation of Patient Handoff Methods on an Inpatient Teaching Service

    PubMed Central

    Craig, Steven R.; Smith, Hayden L.; Downen, A. Matthew; Yost, W. John

    2012-01-01

    Background The patient handoff process can be a highly variable and unstructured period at risk for communication errors. The morning sign-in process used by resident physicians at teaching hospitals typically involves less rigorous handoff protocols than the resident evening sign-out process. Little research has been conducted on best practices for handoffs during morning sign-in exchanges between resident physicians. Research must evaluate optimal protocols for the resident morning sign-in process. Methods Three morning handoff protocols consisting of written, electronic, and face-to-face methods were implemented over 3 study phases during an academic year. Study participants included all interns covering the internal medicine inpatient teaching service at a tertiary hospital. Study measures entailed intern survey-based interviews analyzed for failures in handoff protocols with or without missed pertinent information. Descriptive and comparative analyses examined study phase differences. Results A scheduled face-to-face handoff process had the fewest protocol deviations and demonstrated best communication of essential patient care information between cross-covering teams compared to written and electronic sign-in protocols. Conclusion Intern patient handoffs were more reliable when the sign-in protocol included scheduled face-to-face meetings. This method provided the best communication of patient care information and allowed for open exchanges of information. PMID:23267259

  18. Coordinated Scheduling for Interdependent Electric Power and Natural Gas Infrastructures

    DOE PAGES

    Zlotnik, Anatoly; Roald, Line; Backhaus, Scott; ...

    2016-03-24

    The extensive installation of gas-fired power plants in many parts of the world has led electric systems to depend heavily on reliable gas supplies. The use of gas-fired generators for peak load and reserve provision causes high intraday variability in withdrawals from high-pressure gas transmission systems. Such variability can lead to gas price fluctuations and supply disruptions that affect electric generator dispatch and electricity prices, and threaten the security of power systems and gas pipelines. These infrastructures function on vastly different spatio-temporal scales, which prevents current practices for separate operations and market clearing from being coordinated. In this article, we apply new techniques for control of dynamic gas flows on pipeline networks to examine day-ahead scheduling of electric generator dispatch and gas compressor operation for different levels of integration, spanning from separate forecasting and simulation to combined optimal control. We formulate multiple coordination scenarios and develop tractable, physically accurate computational implementations. These scenarios are compared using an integrated model of test networks for power and gas systems with 24 nodes and 24 pipes, respectively, which are coupled through gas-fired generators. The analysis quantifies the economic efficiency and security benefits of gas-electric coordination and dynamic gas system operation.

  19. Controlling laser driven protons acceleration using a deformable mirror at a high repetition rate

    NASA Astrophysics Data System (ADS)

    Noaman-ul-Haq, M.; Sokollik, T.; Ahmed, H.; Braenzel, J.; Ehrentraut, L.; Mirzaie, M.; Yu, L.-L.; Sheng, Z. M.; Chen, L. M.; Schnürer, M.; Zhang, J.

    2018-03-01

    We present results from a proof-of-principle experiment to optimize laser-driven proton acceleration by directly feeding the proton spectral information back to a deformable mirror (DM) controlled by evolutionary algorithms (EAs). By irradiating a stable, high-repetition-rate tape-driven target with ultra-intense pulses of intensity ∼10^20 W/cm^2, we optimize the maximum energy of the accelerated protons with fluctuations of less than ∼5% around the optimum value. Moreover, due to the spatio-temporal development of the sheath field, modulations in the spectrum are also observed. In particular, a prominent narrow peak with a spread of ∼15% (FWHM) is observed in the low-energy part of the spectrum. These results are helpful for developing the high-repetition-rate optimization techniques required for laser-driven ion accelerators.

  20. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
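
    The pattern described, one pricing subproblem per flight solved on independent threads given the master's dual prices, can be sketched schematically: each flight picks the delay minimizing its own delay cost plus the dual charges on the capacities it would occupy. This toy covers the pricing step only, with assumed cost shapes, and is not the authors' implementation:

        from concurrent.futures import ThreadPoolExecutor

        def best_delay(flight, duals, max_delay=10):
            # Pricing subproblem for one flight: pick the ground delay that
            # minimizes delay cost plus dual charges on occupied sector-times.
            def cost(d):
                return d + sum(duals.get((sector, t + d), 0.0)
                               for sector, t in flight["path"])
            return flight["id"], min(range(max_delay + 1), key=cost)

        flights = [{"id": "AA1", "path": [("ZOB", 3), ("ZNY", 5)]},
                   {"id": "UA2", "path": [("ZNY", 5)]}]
        duals = {("ZNY", 5): 4.0}   # a congested sector-time carries a price

        with ThreadPoolExecutor() as pool:
            print(list(pool.map(lambda f: best_delay(f, duals), flights)))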
