Sample records for predictive scheduling approach

  1. Improving Resource Selection and Scheduling Using Predictions. Chapter 1

    NASA Technical Reports Server (NTRS)

    Smith, Warren

    2003-01-01

The introduction of computational grids has resulted in several new problems in the area of scheduling that can be addressed using predictions. The first problem is selecting where to run an application on the many resources available in a grid. Our approach to help address this problem is to provide predictions of when an application would start to execute if submitted to specific scheduled computer systems. The second problem is gaining simultaneous access to multiple computer systems so that distributed applications can be executed. We help address this problem by investigating how to support advance reservations in local scheduling systems. Our approaches to both of these problems are based on predictions for the execution time of applications on space-shared parallel computers. As a side effect of this work, we also discuss how predictions of application run times can be used to improve scheduling performance.
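The flavor of this start-time prediction can be sketched in a few lines: under a strict first-come-first-served assumption on a single space-shared partition, a new job's predicted start time is the sum of the predicted remaining runtimes of the jobs queued ahead of it. This is an illustrative simplification, not the chapter's actual method; the function and field names are invented.

```python
def predict_start_time(now, queued_jobs, predicted_runtime):
    """Predict when a newly submitted job would start, assuming strict
    FCFS on a single space-shared partition (a deliberate simplification).
    queued_jobs: FCFS-ordered list of dicts with a 'name' key.
    predicted_runtime: maps job name -> predicted runtime (hypothetical)."""
    start = now
    for job in queued_jobs:
        start += predicted_runtime[job["name"]]
    return start

queue = [{"name": "simA"}, {"name": "simB"}]
runtimes = {"simA": 120.0, "simB": 45.0}
print(predict_start_time(0.0, queue, runtimes))  # 165.0
```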

  2. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-served (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-served in both system performance and predictability.
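The first-come-first-served baseline used for comparison can be sketched as a simple greedy assignment: each aircraft takes the earliest slot that respects both its own ready time and a minimum separation from the previous runway operation. The separation value and times below are illustrative, not values from the paper.

```python
SEPARATION = 60.0  # seconds between successive runway operations (assumed)

def fcfs_schedule(ready_times):
    """Assign each aircraft, in order of readiness, the earliest time
    >= its ready time that keeps SEPARATION from the previous slot."""
    schedule = []
    last = float("-inf")
    for t in sorted(ready_times):
        slot = max(t, last + SEPARATION)
        schedule.append(slot)
        last = slot
    return schedule

print(fcfs_schedule([0.0, 10.0, 200.0]))  # [0.0, 60.0, 200.0]
```

A deterministic scheduler would instead optimize the sequence before assigning times; under uncertainty the question is how much of that optimized sequence's advantage survives.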

  3. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future steps for applying these methods to system control are addressed.

  4. Applying mathematical models to predict resident physician performance and alertness on traditional and novel work schedules.

    PubMed

    Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P

    2016-09-13

In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first-year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended-duration work shifts (≥24 h) with overnight work every second shift (including every third night, Q3) or every third shift (including every fourth night, Q4); the novel schedules comprised two different cross-cover (XC) night team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than that expected after 16 or 24 h of continuous wakefulness in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedules. Three proposed schedules that eliminate extended shifts may improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower.
Mathematical modeling provides a quantitative comparison approach with potential to aid residency programs in schedule analysis and redesign.

  5. Patient No-Show Predictive Model Development using Multiple Data Sources for an Effective Overbooking Approach

    PubMed Central

    Hanauer, D.A.

    2014-01-01

Background Patient no-shows in outpatient delivery systems remain problematic. The negative impacts include underutilized medical resources, increased healthcare costs, decreased access to care, and reduced clinic efficiency and provider productivity. Objective To develop an evidence-based predictive model for patient no-shows, and thus improve overbooking approaches in outpatient settings to reduce the negative impact of no-shows. Methods Ten years of retrospective data were extracted from a scheduling system and an electronic health record system from a single general pediatrics clinic, consisting of 7,988 distinct patients and 104,799 visits along with variables regarding appointment characteristics, patient demographics, and insurance information. Descriptive statistics were used to explore the impact of variables on show or no-show status. Logistic regression was used to develop a no-show predictive model, which was then used to construct an algorithm to determine the no-show threshold that calculates a predicted show/no-show status. This approach aims to overbook an appointment where a scheduled patient is predicted to be a no-show. The approach was compared with two commonly used overbooking approaches to demonstrate its effectiveness in terms of patient wait time, physician idle time, overtime, and total cost. Results From the training dataset, the optimal error rate is 10.6% with a no-show threshold of 0.74. This threshold successfully predicts the validation dataset with an error rate of 13.9%. The proposed overbooking approach demonstrated a significant reduction of at least 6% in patient waiting, 27% in overtime, and 3% in total costs compared to other common flat-overbooking methods. Conclusions This paper demonstrates an alternative way to accommodate overbooking, accounting for the prediction of an individual patient's show/no-show status.
The predictive no-show model leads to a dynamic overbooking policy that could improve patient waiting, overtime, and total costs in a clinic day while maintaining a full scheduling capacity. PMID:25298821
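A minimal sketch of how such a threshold drives overbooking decisions, assuming a logistic-regression no-show model: a slot is considered for double-booking when the predicted no-show probability of the scheduled patient exceeds the cutoff (0.74 in the paper's training data). The coefficients, intercept, and feature set below are invented for illustration.

```python
import math

def no_show_probability(features, coef, intercept):
    """Logistic-regression probability of a no-show for one appointment."""
    z = intercept + sum(c * x for c, x in zip(coef, features))
    return 1.0 / (1.0 + math.exp(-z))

THRESHOLD = 0.74  # no-show threshold reported for the training dataset

def should_overbook(features, coef, intercept):
    """Overbook the slot when the patient is predicted to be a no-show."""
    return no_show_probability(features, coef, intercept) > THRESHOLD

# hypothetical features: [prior no-show rate, appointment lead time in days]
coef, intercept = [3.0, 0.05], -1.0
print(should_overbook([0.9, 30.0], coef, intercept))  # True
```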

  6. Optimizing Chemotherapy Dose and Schedule by Norton-Simon Mathematical Modeling

    PubMed Central

    Traina, Tiffany A.; Dugan, Ute; Higgins, Brian; Kolinsky, Kenneth; Theodoulou, Maria; Hudis, Clifford A.; Norton, Larry

    2011-01-01

Background To hasten and improve anticancer drug development, we created a novel approach to generating and analyzing preclinical dose-scheduling data so as to optimize benefit-to-toxicity ratios. Methods We applied mathematical methods based upon Norton-Simon growth kinetic modeling to tumor-volume data from breast cancer xenografts treated with capecitabine (Xeloda®, Roche) at the conventional schedule of 14 days of treatment followed by a 7-day rest (14-7). Results The model predicted that 7 days of treatment followed by a 7-day rest (7-7) would be superior. Subsequent preclinical studies demonstrated that this biweekly capecitabine schedule allowed for safe delivery of higher daily doses, improved tumor response, and prolonged animal survival. Conclusions We demonstrated that the application of Norton-Simon modeling to the design and analysis of preclinical data predicts an improved capecitabine dosing schedule in xenograft models. This method warrants further investigation and application in clinical drug development. PMID:20519801

  7. Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design

    NASA Technical Reports Server (NTRS)

    Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan

    2005-01-01

A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research focused on actuator saturation and flight transition between low and high angle-of-attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high-angle-of-attack aircraft control with stability and performance guarantees.

  8. Automated Platform Management System Scheduling

    NASA Technical Reports Server (NTRS)

    Hull, Larry G.

    1990-01-01

The Platform Management System was established to coordinate the operation of platform systems and instruments. The management functions are split between ground and space components. Since platforms are to be out of contact with the ground more than the manned base, the on-board functions are required to be more autonomous than those of the manned base. Under this concept, automated replanning and rescheduling, including on-board real-time schedule maintenance and schedule repair, are required to effectively and efficiently meet Space Station Freedom mission goals. In an FY88 study, we developed several promising alternatives for automated platform planning and scheduling. We recommended both a specific alternative and a phased approach to automated platform resource scheduling. Our recommended alternative was based upon use of exactly the same scheduling engine in both ground and space components of the platform management system. Our phased approach recommendation was based upon evolutionary development of the platform. In the past year, we developed platform scheduler requirements and implemented a rapid prototype of a baseline platform scheduler. Presently we are rehosting this platform scheduler rapid prototype and integrating the scheduler prototype into two Goddard Space Flight Center testbeds: as the ground scheduler in the Scheduling Concepts, Architectures, and Networks Testbed, and as the on-board scheduler in the Platform Management System Testbed. Using these testbeds, we will investigate rescheduling issues, evaluate operational performance, and enhance the platform scheduler prototype to demonstrate our evolutionary approach to automated platform scheduling. The work described in this paper was performed prior to Space Station Freedom rephasing, transfer of platform responsibility to Code E, and other recently discussed changes. We neither speculate on these changes nor attempt to predict the impact of the final decisions.
As a consequence some of our work and results may be outdated when this paper is published.

  9. On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.

    PubMed

    Hannah, Samuel D; Neal, Andrew

    2014-09-01

    The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
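The triage policy described for the most successful model can be sketched as follows: tasks are taken first-come-first-served; short-deadline tasks run immediately, while long-deadline tasks are evaluated and deferred unless they need immediate action. The cutoff value and the evaluation predicate here are illustrative assumptions, not parameters from the study.

```python
SHORT_DEADLINE = 5.0  # minutes; illustrative cutoff, not from the paper

def triage(task_queue, now, needs_immediate_action):
    """Split an FCFS queue of (name, deadline) tasks into those to execute
    now and those to defer. Short-deadline tasks always execute; tasks with
    long deadlines execute only if the evaluation predicate says so."""
    execute, defer = [], []
    for name, deadline in task_queue:
        if deadline - now <= SHORT_DEADLINE or needs_immediate_action(name):
            execute.append(name)
        else:
            defer.append(name)
    return execute, defer

tasks = [("conflictA", 3.0), ("conflictB", 30.0)]
print(triage(tasks, 0.0, lambda name: False))  # (['conflictA'], ['conflictB'])
```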

  10. Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil

    1992-01-01

    This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
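A minimal implementation of the min-conflicts heuristic on n-queens, following the description above: start from a complete (likely inconsistent) assignment and repeatedly move a conflicted queen to a column that minimizes its constraint violations, breaking ties randomly. The step limit and seed are arbitrary choices for the example.

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col);
    cols[r] gives the column of the queen in row r (one queen per row)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]  # random complete assignment
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols  # all constraints satisfied
        row = rng.choice(conflicted)
        # move the queen in this row to a minimum-conflict column
        counts = [conflicts(cols, row, c) for c in range(n)]
        best = min(counts)
        cols[row] = rng.choice([c for c in range(n) if counts[c] == best])
    return None  # no solution found within the step budget

solution = min_conflicts(8)
```

The repair-based search never backtracks; it simply keeps reducing the violation count, which is why it scales to much larger n than chronological backtracking on this problem.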

  11. Machine Learning Based Online Performance Prediction for Runtime Parallelization and Task Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Ma, X; Singh, K

    2008-10-09

With the emerging many-core paradigm, parallel programming must extend beyond its traditional realm of scientific applications. Converting existing sequential applications as well as developing next-generation software requires assistance from hardware, compilers and runtime systems to exploit parallelism transparently within applications. These systems must decompose applications into tasks that can be executed in parallel and then schedule those tasks to minimize load imbalance. However, many systems lack a priori knowledge about the execution time of all tasks to perform effective load balancing with low scheduling overhead. In this paper, we approach this fundamental problem using machine learning techniques, first generating performance models for all tasks and then applying those models to perform automatic performance prediction across program executions. We also extend an existing scheduling algorithm to use generated task cost estimates for online task partitioning and scheduling. We implement the above techniques in the pR framework, which transparently parallelizes scripts in the popular R language, and evaluate their performance and overhead with both a real-world application and a large number of synthetic representative test scripts. Our experimental results show that our proposed approach significantly improves task partitioning and scheduling, with maximum improvements of 21.8%, 40.3% and 22.1% and average improvements of 15.9%, 16.9% and 4.2% for LMM (a real R application) and synthetic test cases with independent and dependent tasks, respectively.
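To illustrate the role the task cost model plays (this is not the pR scheduler itself), predicted runtimes can feed the classic longest-processing-time greedy rule, which assigns each task to the currently least-loaded worker. The task names and costs are invented.

```python
import heapq

def lpt_schedule(predicted_costs, n_workers):
    """Longest-processing-time greedy scheduling using predicted runtimes.
    predicted_costs: dict task -> predicted runtime.
    Returns (assignment dict task -> worker index, makespan)."""
    loads = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(loads)  # min-heap keyed on current worker load
    assignment = {}
    for task in sorted(predicted_costs, key=predicted_costs.get, reverse=True):
        load, w = heapq.heappop(loads)      # least-loaded worker
        assignment[task] = w
        heapq.heappush(loads, (load + predicted_costs[task], w))
    return assignment, max(load for load, _ in loads)

costs = {"t1": 7.0, "t2": 5.0, "t3": 4.0, "t4": 2.0}
assignment, makespan = lpt_schedule(costs, 2)
print(makespan)  # 9.0
```

The better the runtime predictions, the closer this greedy assignment gets to a balanced load, which is exactly why learned cost models help.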

  12. Modeling the dynamics of choice.

    PubMed

    Baum, William M; Davison, Michael

    2009-06-01

A simple linear-operator model both describes and predicts the dynamics of choice that may underlie the matching relation. We measured inter-food choice within components of a schedule that presented seven different pairs of concurrent variable-interval schedules for 12 food deliveries each, with no signals indicating which pair was in force. This measure of local choice was accurately described and predicted as obtained reinforcer sequences shifted it to favor one alternative or the other. The effect of a changeover delay was reflected in one parameter, the asymptote, whereas the effect of a difference in overall rate of food delivery was reflected in the other parameter, the rate of approach to the asymptote. The model takes choice as a primary dependent variable, not derived by comparison between alternatives, an approach that agrees with the molar view of behaviour.
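The kind of linear-operator update described above can be sketched as local choice moving a fixed fraction of the way toward an asymptote after each food delivery, with the target set by which alternative delivered. The parameter values below are illustrative, not fitted values from the paper.

```python
def update_choice(choice, delivered_left, asymptote=0.9, rate=0.3):
    """One linear-operator step: `choice` is the current preference for
    the left alternative in [0, 1]; it moves toward `asymptote` after a
    left delivery and toward 1 - asymptote after a right delivery, at a
    speed set by `rate` (the rate of approach to the asymptote)."""
    target = asymptote if delivered_left else 1.0 - asymptote
    return choice + rate * (target - choice)

p = 0.5  # indifference
for side in [True, True, False]:  # two left deliveries, then one right
    p = update_choice(p, side)
print(round(p, 4))
```

The two parameters map onto the abstract's findings: the changeover delay shifts the asymptote, while the overall food rate shifts the rate of approach.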

  13. A COTS-Based Attitude Dependent Contact Scheduling System

    NASA Technical Reports Server (NTRS)

    DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark

    2006-01-01

The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the GLAST satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead-time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science-gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission-specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request, which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.

  14. Marching to the beat of Moore's Law

    NASA Astrophysics Data System (ADS)

    Borodovsky, Yan

    2006-03-01

Area density scaling in integrated circuits, defined as transistor count per unit area, has followed the famous observation-cum-prediction by Gordon Moore for many generations. Known as "Moore's Law," this prediction of density doubling every 18-24 months has provided all-important synchronizing guidance and reference for tool and materials suppliers, IC manufacturers, and their customers as to the minimal requirements their products and services must meet to satisfy the technical and financial expectations supporting the infrastructure required for the development and manufacturing of each technology generation node. Multiple lithography solutions are usually under consideration for any given node. In general, three broad classes of solutions are considered: evolutionary - technology that extends the existing technology infrastructure at similar or slightly higher cost and risk to schedule; revolutionary - technology that discards significant parts of the existing infrastructure at similar cost and higher risk to schedule, but promises higher capability than the evolutionary approach; and, last but not least, disruptive - an approach that as a rule promises similar or better capabilities and much lower cost, with wholly unpredictable risk to schedule and product yields. This paper examines various lithography approaches and their respective merits against the criteria of infrastructure availability, affordability, and risk to IC manufacturers' schedules, along with the strategy involved in developing and selecting the best solution, in an attempt to sort out the key factors that will impact the choice of lithography for large-scale manufacturing at future technology nodes.

  15. Departure Queue Prediction for Strategic and Tactical Surface Scheduler Integration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Windhorst, Robert

    2016-01-01

    A departure metering concept to be demonstrated at Charlotte Douglas International Airport (CLT) will integrate strategic and tactical surface scheduling components to enable the respective collaborative decision making and improved efficiency benefits these two methods of scheduling provide. This study analyzes the effect of tactical scheduling on strategic scheduler predictability. Strategic queue predictions and target gate pushback times to achieve a desired queue length are compared between fast time simulations of CLT surface operations with and without tactical scheduling. The use of variable departure rates as a strategic scheduler input was shown to substantially improve queue predictions over static departure rates. With target queue length calibration, the strategic scheduler can be tuned to produce average delays within one minute of the tactical scheduler. However, root mean square differences between strategic and tactical delays were between 12 and 15 minutes due to the different methods the strategic and tactical schedulers use to predict takeoff times and generate gate pushback clearances. This demonstrates how difficult it is for the strategic scheduler to predict tactical scheduler assigned gate delays on an individual flight basis as the tactical scheduler adjusts departure sequence to accommodate arrival interactions. Strategic/tactical scheduler compatibility may be improved by providing more arrival information to the strategic scheduler and stabilizing tactical scheduler changes to runway sequence in response to arrivals.

  16. The comparison of predictive scheduling algorithms for different sizes of job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.

    2016-08-01

In the paper a survey of predictive and reactive scheduling methods is done in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time To Failure and Mean Time To Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules in the case of a bottleneck failure occurring before, at the beginning of, or after planned maintenance actions? Efficiency of predictive schedules is evaluated using the criteria: makespan, total tardiness, flow time, and idle time. Efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods is done only for small-size scheduling problems.
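The four predictive-schedule criteria named above can be computed directly from per-job start, finish, and due times on a single machine; the job data here are illustrative.

```python
def schedule_metrics(jobs, release=0.0):
    """Compute (makespan, total tardiness, flow time, idle time) for a
    single-machine schedule. jobs: list of (start, finish, due) tuples,
    with all jobs released at `release`."""
    makespan = max(finish for _, finish, _ in jobs) - release
    tardiness = sum(max(0.0, finish - due) for _, finish, due in jobs)
    flow_time = sum(finish - release for _, finish, _ in jobs)
    busy = sum(finish - start for start, finish, _ in jobs)
    idle = makespan - busy
    return makespan, tardiness, flow_time, idle

jobs = [(0.0, 3.0, 4.0), (5.0, 9.0, 8.0)]
print(schedule_metrics(jobs))  # (9.0, 1.0, 12.0, 2.0)
```

A predictive schedule that buffers for an expected bottleneck failure trades a longer makespan and more idle time for smaller tardiness when the failure actually occurs.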

  17. Agent-Based Simulations for Project Management

    NASA Technical Reports Server (NTRS)

    White, J. Chris; Sholtes, Robert M.

    2011-01-01

Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.

  18. PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.

    2014-05-27

Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high-velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It addresses not only variations in the input data rates but also those in the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in overall profit as compared to the reactive adaptation algorithm.

  19. Pharmacokinetics and Drug Interactions Determine Optimum Combination Strategies in Computational Models of Cancer Evolution.

    PubMed

    Chakrabarti, Shaon; Michor, Franziska

    2017-07-15

The identification of optimal drug administration schedules to battle the emergence of resistance is a major challenge in cancer research. The existence of a multitude of resistance mechanisms necessitates administering drugs in combination, significantly complicating the endeavor of predicting the evolutionary dynamics of cancers and optimal intervention strategies. A thorough understanding of the important determinants of cancer evolution under combination therapies is therefore crucial for correctly predicting treatment outcomes. Here we developed the first computational strategy to explore pharmacokinetic and drug interaction effects in evolutionary models of cancer progression, a crucial step toward making clinically relevant predictions. We found that incorporating these phenomena into our multiscale stochastic modeling framework significantly changes the optimum drug administration schedules identified, often predicting nonintuitive strategies for combination therapies. We applied our approach to an ongoing phase Ib clinical trial (TATTON) administering AZD9291 and selumetinib to EGFR-mutant lung cancer patients. Our results suggest that the schedules used in the three trial arms have almost identical efficacies, but slight modifications in the dosing frequencies of the two drugs can significantly increase tumor cell eradication. Interestingly, we also predict that drug concentrations lower than the maximum tolerated dose (MTD) are as efficacious, suggesting that lowering the total amount of drug administered could reduce toxicities while not compromising the effectiveness of the drugs. Our approach highlights the fact that quantitative knowledge of pharmacokinetic, drug interaction, and evolutionary processes is essential for identifying the best intervention strategies. Our method is applicable to diverse cancer and treatment types and allows for a rational design of clinical trials. Cancer Res; 77(14); 3908-21. ©2017 American Association for Cancer Research.

  20. Optimizing the arrival, waiting, and NPO times of children on the day of pediatric endoscopy procedures.

    PubMed

    Smallman, Bettina; Dexter, Franklin

    2010-03-01

    Research in predictive variability of operating room (OR) times has been performed using data from multidisciplinary, tertiary hospitals with mostly adult patients. In this article, we discuss case-duration prediction for children receiving general anesthesia for endoscopy. We critique which of the several types of OR management decisions dependent on accuracy of prediction are relevant to series (lists) of brief pediatric anesthetics. OR information system data were obtained for all children (aged 18 years and younger) undergoing a gastroenterology procedure with an anesthesiologist over 21 months. Summaries of data were used for a qualitative, systematic review of prior studies to learn which apply to brief pediatric cases. Patient arrival times were changed to be based on the statistical method relating actual and scheduled start times (Wachtel and Dexter, Anesth Analg 2007;105:127-40). Even perfect case-duration prediction would not affect whether a brief case was performed on a certain date and/or in a certain OR. There was no evidence of usefulness in calculating the probability that one case would last longer than another or in resequencing cases to influence postanesthesia care unit staffing or patient waiting from scheduled start times. The only decision for which the accuracy of case-duration prediction mattered was for the shortest time that preceding cases in the OR may take. Knowledge of the preceding procedures in the OR was not useful for that purpose because there were hundreds of combinations of preceding procedures and some cases cancelled. Instead, patient ready times were chosen based on 5% lower prediction bounds for ratios of actual to scheduled OR times. The approach was useful based on a 30% reduction in patient waiting times from scheduled start times with corresponding expected reductions in average and peak numbers of patients in the holding area. 
For brief pediatric OR anesthetics, predictive variability of case durations matters principally to the extent that it affects appropriate patient ready times. Such times should not be chosen by having patients start fasting, arrive, and be ready fixed numbers of hours before their scheduled start times.

  1. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient, low-cost manner under a tight schedule.

  2. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    PubMed

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can also be applied to the pinwheel task model. This method is more effective at saving energy than adopting previous static priority scheduling methods and, because the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on uniprocessor systems. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10-80% over existing algorithms.
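The core DVS idea the abstract relies on can be illustrated with a minimal sketch: run at the lowest constant speed that keeps EDF feasible, then reclaim slack when a job finishes under its worst-case budget. The task parameters and the energy model (energy per cycle roughly proportional to frequency squared) are hypothetical illustrations, not the paper's algorithm.

```python
# Illustrative sketch of slack-based dynamic voltage scaling under EDF.
# Task set and energy model are hypothetical; the paper's actual O(n)
# slack-reclamation algorithm differs in detail.

def static_speed(tasks):
    """Lowest constant speed keeping preemptive EDF feasible on one
    processor: the total utilization of the task set."""
    return sum(wcet / period for wcet, period in tasks)

def stretch_with_slack(wcet, actual, speed):
    """If a job needs fewer cycles than its worst case, run that work at a
    lower speed stretched over the wall-clock budget already reserved."""
    budget = wcet / speed            # wall-clock time reserved for the job
    return actual / budget           # slower speed that still fits the budget

tasks = [(1.0, 4.0), (2.0, 8.0)]     # (worst-case execution time, period)
f_static = static_speed(tasks)        # 0.5: CPU may run at half speed
f_dynamic = stretch_with_slack(2.0, 1.0, f_static)  # job used half its WCET
# under the usual DVS model, energy per cycle scales roughly with f^2
saving = 1.0 - (f_dynamic / f_static) ** 2
```

Here a job that consumes half its worst-case cycles lets the next work run at a quarter speed, cutting per-cycle energy for that interval by about 75% under the assumed quadratic model.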

  3. Applying Dynamic Priority Scheduling Scheme to Static Systems of Pinwheel Task Model in Power-Aware Scheduling

    PubMed Central

    2014-01-01

Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, a static and predictable task model that can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can also be applied to the pinwheel task model. This method is more effective at saving energy than adopting previous static priority scheduling methods and, because the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm that exploits all slack under preemptive earliest-deadline-first scheduling, which is optimal on uniprocessor systems. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10–80% over existing algorithms. PMID:25121126

  4. The MICRO-BOSS scheduling system: Current status and future efforts

    NASA Technical Reports Server (NTRS)

    Sadeh, Norman M.

    1993-01-01

    In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule, and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory.

  5. Marathon works

    PubMed Central

    Orrantia, Eliseo

    2005-01-01

    PROBLEM BEING ADDRESSED Medical care in rural Canada has long been hampered by insufficient numbers of physicians. How can a rural community’s physicians change the local medical culture and create a new approach to sustaining their practice? OBJECTIVE OF PROGRAM To create a sustainable, collegial family practice group and address one rural community’s chronically underserviced health care needs. PROGRAM DESCRIPTION Elements important to physicians’ well-being were incorporated into the health care group’s functioning to enhance retention and recruitment. The intentional development of a consensus-based approach to decision making has created a supportive team of physicians. Ongoing communication is kept up through regular meetings, retreats, and a Web-based discussion board. Individual physicians retain control of their hours worked each year and their schedules. A novel obstetric call system was introduced to help make schedules more predictable. An internal governance agreement on an alternative payment plan supports varied work schedules, recognizes and funds non-clinical medical work, and pays group members for undertaking health-related projects. CONCLUSION This approach has helped maintain a stable number of physicians in Marathon, Ont, and has increased the number of health care services delivered to the community. PMID:16190174

  6. Integrating prediction, provenance, and optimization into high energy workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schram, M.; Bansal, V.; Friese, R. D.

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
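The scheduling component described above can be sketched as a greedy placement driven by predicted runtimes. The prediction table, task names, and earliest-completion heuristic below are hypothetical stand-ins for the framework's performance models and optimization-based scheduler.

```python
# Minimal sketch of prediction-driven task placement: each task goes to the
# resource with the earliest predicted completion time. The runtime table
# and greedy rule are hypothetical illustrations, not the paper's optimizer.

def assign(tasks, resources, predicted):
    """predicted[task][resource] = predicted runtime of task on resource."""
    finish = {r: 0.0 for r in resources}   # predicted busy-until time
    placement = {}
    for t in tasks:
        best = min(resources, key=lambda r: finish[r] + predicted[t][r])
        placement[t] = best
        finish[best] += predicted[t][best]
    return placement, max(finish.values())  # placement and makespan estimate

predicted = {"simulate": {"A": 4.0, "B": 6.0},
             "reconstruct": {"A": 5.0, "B": 3.0},
             "merge": {"A": 2.0, "B": 2.0}}
placement, makespan = assign(["simulate", "reconstruct", "merge"],
                             ["A", "B"], predicted)
```

Even this crude rule shows why runtime prediction matters: with accurate predictions the scheduler spreads work to finish in 5 time units, whereas a prediction-blind round-robin could serialize the two long tasks.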

  7. Range Process Simulation Tool

    NASA Technical Reports Server (NTRS)

    Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga

    2005-01-01

Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.

  8. A unified approach for process-based hydrologic modeling: Part 2. Model implementation and case studies

    USDA-ARS?s Scientific Manuscript database

    Understanding and prediction of snowmelt-generated streamflow at sub-daily time scales is important for reservoir scheduling and climate change characterization. This is particularly important in the Western U.S. where over 50% of water supply is provided by snowmelt during the melting period. Previ...

  9. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations.

    PubMed

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-03-19

The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches suffer from inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representations (PSRs). With the proposed approach, operating modes of the MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield; this helps reduce the dependence on prior information. MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation caused by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario show that the proposed methods perform well and efficiently, attesting to their validity.
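The iterative prediction idea in this abstract, feeding each one-step prediction back in as the next observation instead of branching over all uncertain futures, can be sketched with a plain first-order transition table. The mode names and probabilities below are hypothetical, and a real PSR model is richer than this Markov stand-in.

```python
# Sketch of greedy multi-step prediction by observation feedback: repeatedly
# take the most likely successor of the current (predicted) observation.
# The mode-transition table is a hypothetical stand-in for a learned PSR.

def predict_sequence(transitions, last_obs, steps):
    """Return the `steps` most likely next observations, one at a time."""
    out, obs = [], last_obs
    for _ in range(steps):
        obs = max(transitions[obs], key=transitions[obs].get)
        out.append(obs)
    return out

# hypothetical MFR mode-transition estimates (each row sums to 1)
transitions = {"search": {"track": 0.6, "search": 0.4},
               "track":  {"guide": 0.6, "track": 0.4},
               "guide":  {"search": 0.9, "guide": 0.1}}
modes = predict_sequence(transitions, "search", 4)
```

The cost of a k-step prediction here is linear in k; enumerating every possible future instead would grow exponentially with the number of modes, which is the "very large computation" the abstract says the iterative scheme avoids.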

  10. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations

    PubMed Central

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-01-01

The extensive applications of multi-function radars (MFRs) have presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, the existing approaches suffer from inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representations (PSRs). With the proposed approach, operating modes of the MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield; this helps reduce the dependence on prior information. MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation caused by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario show that the proposed methods perform well and efficiently, attesting to their validity. PMID:28335492

  11. Characterization of Tactical Departure Scheduling in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Capps, Alan; Engelland, Shawn A.

    2011-01-01

This paper discusses and analyzes current-day utilization and performance of the tactical departure scheduling process in the National Airspace System (NAS) to understand the benefits of improving this process. The analysis used operational air traffic data from over 1,082,000 flights during January 2011. Specific metrics included the frequency of tactical departure scheduling, site-specific variances in the technology's utilization, departure time prediction compliance used in the tactical scheduling process, and the performance with which the current system can predict the airborne slot that aircraft are being scheduled into from the airport surface. Operational data analysis described in this paper indicates that significant room for improvement exists in the current system, primarily in the area of reduced departure time prediction uncertainty. Results indicate that a significant number of tactically scheduled aircraft did not meet their scheduled departure slot due to departure time uncertainty. In addition to missed slots, the operational data analysis identified increased controller workload associated with tactical departures which were subject to traffic management manual re-scheduling or controller swaps. An analysis of achievable levels of departure time prediction accuracy as obtained by a new integrated surface and tactical scheduling tool is provided to assess the benefit it may provide as a solution to the identified shortfalls. A list of NAS facilities which are likely to receive the greatest benefit from the integrated surface and tactical scheduling technology is provided.

  12. The MICRO-BOSS scheduling system: Current status and future efforts

    NASA Technical Reports Server (NTRS)

    Sadeh, Norman M.

    1992-01-01

    In this paper, a micro-opportunistic approach to factory scheduling was described that closely monitors the evolution of bottlenecks during the construction of the schedule and continuously redirects search towards the bottleneck that appears to be most critical. This approach differs from earlier opportunistic approaches, as it does not require scheduling large resource subproblems or large job subproblems before revising the current scheduling strategy. This micro-opportunistic approach was implemented in the context of the MICRO-BOSS factory scheduling system. A study comparing MICRO-BOSS against a macro-opportunistic scheduler suggests that the additional flexibility of the micro-opportunistic approach to scheduling generally yields important reductions in both tardiness and inventory. Current research efforts include: adaptation of MICRO-BOSS to deal with sequence-dependent setups and development of micro-opportunistic reactive scheduling techniques that will enable the system to patch the schedule in the presence of contingencies such as machine breakdowns, raw materials arriving late, job cancellations, etc.

  13. Modeling the Violation of Reward Maximization and Invariance in Reinforcement Schedules

    PubMed Central

    La Camera, Giancarlo; Richmond, Barry J.

    2008-01-01

    It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as “schedule length effect”). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: “framing,” wherein equivalent options are treated differently depending on the context in which they are presented, and the “sunk cost” effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. 
The schedule length effect might be a manifestation of these phenomena in monkeys. PMID:18688266
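The baseline the abstract argues against, standard temporal-difference learning, can be shown in a few lines: values propagate backward from the rewarded trial, falling with distance from reward, with no dependence on schedule length. The states, rewards, and learning parameters below are hypothetical; the authors' modification (sensitivity to the immediately preceding trial) is not reproduced.

```python
# Standard TD(0) value update, the baseline model that cannot reproduce the
# schedule length effect. States (trials-to-reward) and parameters are
# hypothetical illustrations.

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """V[s] += alpha * (r + gamma * V[s_next] - V[s])"""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# a three-trial cued schedule: states 2 -> 1 -> 0, reward only at state 0
V = {0: 0.0, 1: 0.0, 2: 0.0, "end": 0.0}
for _ in range(200):                       # many repeated schedules
    V = td0_update(V, 0, 1.0, "end")       # rewarded final trial
    V = td0_update(V, 1, 0.0, 0)           # one trial before reward
    V = td0_update(V, 2, 0.0, 1)           # two trials before reward
# learned values fall with distance from reward, matching the usual finding
# that error rates rise with trials-to-reward; crucially, V depends only on
# distance to reward, so two equally distant trials from schedules of
# different lengths get identical values -- the invariance the data violate
```

This makes concrete why the authors needed a modified update: under TD(0) the value, and hence the predicted error rate, is a function of trials-to-reward alone.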

  14. Modeling the violation of reward maximization and invariance in reinforcement schedules.

    PubMed

    La Camera, Giancarlo; Richmond, Barry J

    2008-08-08

    It is often assumed that animals and people adjust their behavior to maximize reward acquisition. In visually cued reinforcement schedules, monkeys make errors in trials that are not immediately rewarded, despite having to repeat error trials. Here we show that error rates are typically smaller in trials equally distant from reward but belonging to longer schedules (referred to as "schedule length effect"). This violates the principles of reward maximization and invariance and cannot be predicted by the standard methods of Reinforcement Learning, such as the method of temporal differences. We develop a heuristic model that accounts for all of the properties of the behavior in the reinforcement schedule task but whose predictions are not different from those of the standard temporal difference model in choice tasks. In the modification of temporal difference learning introduced here, the effect of schedule length emerges spontaneously from the sensitivity to the immediately preceding trial. We also introduce a policy for general Markov Decision Processes, where the decision made at each node is conditioned on the motivation to perform an instrumental action, and show that the application of our model to the reinforcement schedule task and the choice task are special cases of this general theoretical framework. Within this framework, Reinforcement Learning can approach contextual learning with the mixture of empirical findings and principled assumptions that seem to coexist in the best descriptions of animal behavior. As examples, we discuss two phenomena observed in humans that often derive from the violation of the principle of invariance: "framing," wherein equivalent options are treated differently depending on the context in which they are presented, and the "sunk cost" effect, the greater tendency to continue an endeavor once an investment in money, effort, or time has been made. 
The schedule length effect might be a manifestation of these phenomena in monkeys.

  15. Application of model predictive control for optimal operation of wind turbines

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Cao, Pei; Tang, J.

    2017-04-01

For large-scale wind turbines, reducing maintenance cost is a major challenge. Model predictive control (MPC) is a promising approach to dealing with multiple conflicting objectives via the weighted-sum approach. In this research, the model predictive control method is applied to a wind turbine to find an optimal balance among multiple objectives, such as energy capture, loads on turbine components, and pitch actuator usage. The actuator constraints are integrated into the objective function at the control design stage. The analysis is carried out in both the partial load region and the full load region, and the performance is compared with that of a baseline gain-scheduling PID controller. The application of this strategy achieves an enhanced balance of component loads, average power, and actuator usage in the partial load region.
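The weighted-sum trade-off at the heart of such a controller can be sketched with a single MPC step: pick the bounded control input minimizing a weighted combination of tracking error and actuator usage. The scalar plant model, weights, and candidate grid are hypothetical placeholders for the turbine MPC described above.

```python
# One-step weighted-sum MPC sketch: minimize tracking error plus a penalty
# on actuator usage over a bounded set of candidate inputs. The scalar model
# x_next = x + u and all weights are hypothetical illustrations.

def mpc_step(x, target, w_track=1.0, w_use=0.5, candidates=None):
    """Return the control u minimizing
    w_track*(x + u - target)^2 + w_use*u^2 over the candidate set."""
    if candidates is None:
        candidates = [i / 10 for i in range(-10, 11)]  # actuator limits
    def cost(u):
        x_next = x + u
        return w_track * (x_next - target) ** 2 + w_use * u ** 2
    return min(candidates, key=cost)

u = mpc_step(x=0.0, target=1.0)
# with w_use > 0 the controller deliberately undershoots the target,
# trading a little tracking accuracy for reduced actuator usage
```

Raising `w_use` shifts the optimum further from the target, which is exactly the energy-capture versus pitch-actuator-wear balance the abstract describes; hard actuator constraints appear here simply as the bounds of the candidate set.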

  16. Predit: A temporal predictive framework for scheduling systems

    NASA Technical Reports Server (NTRS)

    Paolucci, E.; Patriarca, E.; Sem, M.; Gini, G.

    1992-01-01

Scheduling can be formalized as a Constraint Satisfaction Problem (CSP). Within this framework, activities belonging to a plan are interconnected via temporal constraints that account for slack among them. The temporal representation must include methods for constraint propagation and provide a logic for symbolic and numerical deductions. In this paper we describe a support framework for opportunistic reasoning in constraint-directed scheduling. In order to focus the attention of an incremental scheduler on critical problem aspects, some discrete temporal indexes are presented. They are also useful for predicting the degree of resource contention. The predictive method expressed through our indexes can be seen as a Knowledge Source for an opportunistic scheduler with a blackboard architecture.

  17. Multi-Temporal Decomposed Wind and Load Power Models for Electric Energy Systems

    NASA Astrophysics Data System (ADS)

    Abdel-Karim, Noha

    This thesis is motivated by the recognition that sources of uncertainties in electric power systems are multifold and may have potentially far-reaching effects. In the past, only system load forecast was considered to be the main challenge. More recently, however, the uncertain price of electricity and hard-to-predict power produced by renewable resources, such as wind and solar, are making the operating and planning environment much more challenging. The near-real-time power imbalances are compensated by means of frequency regulation and generally require fast-responding costly resources. Because of this, a more accurate forecast and look-ahead scheduling would result in a reduced need for expensive power balancing. Similarly, long-term planning and seasonal maintenance need to take into account long-term demand forecast as well as how the short-term generation scheduling is done. The better the demand forecast, the more efficient planning will be as well. Moreover, computer algorithms for scheduling and planning are essential in helping the system operators decide what to schedule and planners what to build. This is needed given the overall complexity created by different abilities to adjust the power output of generation technologies, demand uncertainties and by the network delivery constraints. Given the growing presence of major uncertainties, it is likely that the main control applications will use more probabilistic approaches. Today's predominantly deterministic methods will be replaced by methods which account for key uncertainties as decisions are made. It is well-understood that although demand and wind power cannot be predicted at very high accuracy, taking into consideration predictions and scheduling in a look-ahead way over several time horizons generally results in more efficient and reliable utilization, than when decisions are made assuming deterministic, often worst-case scenarios. 
This change in approach will ultimately require new electricity market rules capable of providing the right incentives to manage uncertainties and of differentiating various technologies according to the rate at which they can respond to ever-changing conditions. Given the overall need for modeling uncertainties in electric energy systems, we consider in this thesis the problem of multi-temporal modeling of wind and demand power, in particular. Historic data is used to derive prediction models for several future time horizons. The short-term prediction models derived can be used for look-ahead economic dispatch and unit commitment, while the long-term annual predictive models can be used for investment planning. As expected, the accuracy of such predictive models depends on the time horizons over which the predictions are made, as well as on the nature of the uncertain signals. It is shown that predictive models obtained using the same general modeling approaches result in different accuracy for wind than for demand power. In what follows, we introduce several models which have qualitatively different patterns, ranging from hourly to annual. We first transform historic time-stamped data into the Fourier Transform (FT) representation. The frequency-domain data representation is used to decompose the wind and load power signals and to derive predictive models relevant for short-term and long-term predictions using spectral extraction techniques. The short-term results are interpreted next as a Linear Prediction Coding Model (LPC) and its accuracy is analyzed. Next, a new Markov-Based Sensitivity Model (MBSM) for short-term prediction is proposed, and the dispatched costs of uncertainties for different predictive models are developed and compared. Moreover, the Discrete Markov Process (DMP) representation is applied to help assess probabilities of the most likely short-, medium- and long-term states and the related multi-temporal risks.
In addition, this thesis discusses operational impacts of wind power integration in different scenario levels by performing more than 9,000 AC Optimal Power Flow runs. The effects of both wind and load variations on system constraints and costs are presented. The limitations of DC Optimal Power Flow (DCOPF) vs. ACOPF are emphasized by means of system convergence problems due to the effect of wind power on changing line flows and net power injections. By studying the effect of having wind power on line flows, we found that the divergence problem applies in areas with high wind and hydro generation capacity share (cheap generations). (Abstract shortened by UMI.).
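The frequency-domain decomposition step described in this thesis can be illustrated with a stdlib-only direct DFT: find the dominant cycle in a load-like signal and recover its period, which then serves as a simple predictive component. The synthetic hourly load signal below is a hypothetical illustration, not the thesis data.

```python
import cmath
import math

# Illustrative DFT-based decomposition: locate the dominant cycle in a
# load-like signal. The synthetic signal (a clean 24-hour sinusoid) is a
# hypothetical stand-in for measured wind/load data.

def dft_coeff(x, k):
    """k-th discrete Fourier coefficient of signal x."""
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

def dominant_cycle(x):
    """Nonzero frequency index with the largest spectral magnitude."""
    n = len(x)
    return max(range(1, n // 2), key=lambda k: abs(dft_coeff(x, k)))

# synthetic "hourly load": 7 days with one cycle per 24 hours
load = [10 + 3 * math.sin(2 * math.pi * t / 24) for t in range(7 * 24)]
k = dominant_cycle(load)      # 7 cycles over the 168-sample window
period = len(load) / k        # recovered period, in hours
```

Real wind and load signals contain many superposed cycles (daily, weekly, seasonal) plus noise, which is why the thesis extracts several spectral components per horizon rather than a single dominant one.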

  18. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that significantly reduce the computational complexity. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.

  19. An oracle: antituberculosis pharmacokinetics-pharmacodynamics, clinical correlation, and clinical trial simulations to predict the future.

    PubMed

    Pasipanodya, Jotam; Gumbo, Tawanda

    2011-01-01

    Antimicrobial pharmacokinetic-pharmacodynamic (PK/PD) science and clinical trial simulations have not been adequately applied to the design of doses and dose schedules of antituberculosis regimens because many researchers are skeptical about their clinical applicability. We compared findings of preclinical PK/PD studies of current first-line antituberculosis drugs to findings from several clinical publications that included microbiologic outcome and pharmacokinetic data or had a dose-scheduling design. Without exception, the antimicrobial PK/PD parameters linked to optimal effect were similar in preclinical models and in tuberculosis patients. Thus, exposure-effect relationships derived in the preclinical models can be used in the design of optimal antituberculosis doses, by incorporating population pharmacokinetics of the drugs and MIC distributions in Monte Carlo simulations. When this has been performed, doses and dose schedules of rifampin, isoniazid, pyrazinamide, and moxifloxacin with the potential to shorten antituberculosis therapy have been identified. In addition, different susceptibility breakpoints than those in current use have been identified. These steps outline a more rational approach than that of current methods for designing regimens and predicting outcome so that both new and older antituberculosis agents can shorten therapy duration.
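The Monte Carlo step the abstract describes, combining population pharmacokinetics with an MIC distribution to estimate target attainment, can be sketched briefly. All numbers below (the lognormal exposure distribution, the MIC frequencies, the AUC/MIC target of 100) are hypothetical illustrations, not clinical recommendations.

```python
import random

# Minimal Monte Carlo target-attainment sketch: sample drug exposure (AUC)
# from a population distribution and MIC from a discrete distribution, then
# estimate the probability that AUC/MIC meets a PK/PD target. All parameter
# values are hypothetical.

def target_attainment(n, auc_mu, auc_sigma, mic_dist, target, seed=1):
    rng = random.Random(seed)            # seeded for reproducibility
    mics, weights = zip(*mic_dist)
    hits = 0
    for _ in range(n):
        auc = rng.lognormvariate(auc_mu, auc_sigma)   # exposure sample
        mic = rng.choices(mics, weights)[0]           # susceptibility sample
        hits += (auc / mic) >= target
    return hits / n

# hypothetical inputs: median AUC e^4 ~ 55 mg*h/L, MICs 0.25-1 mg/L
pta = target_attainment(20000, 4.0, 0.4,
                        [(0.25, 0.5), (0.5, 0.3), (1.0, 0.2)], target=100)
```

Repeating this calculation across candidate doses (each dose shifting the exposure distribution) is how simulations of this kind identify regimens and susceptibility breakpoints, as the abstract describes.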

  20. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  1. Validation of Fatigue Modeling Predictions in Aviation Operations

    NASA Technical Reports Server (NTRS)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limitations within which model results should be interpreted and applied.

  2. Improving Hospital-wide Patient Scheduling Decisions by Clinical Pathway Mining.

    PubMed

    Gartner, Daniel; Arnolds, Ines V; Nickel, Stefan

    2015-01-01

    Recent research has highlighted the need for solving hospital-wide patient scheduling problems. In patient scheduling, patient activities have to be scheduled on scarce hospital resources such that temporal relations between activities (e.g. recovery times) are respected. A common objective is, among others, the minimization of the length of stay (LOS). In this paper, we consider a hospital-wide patient scheduling problem with LOS minimization based on uncertain clinical pathways. We approach the problem in three stages: first, we learn the most likely clinical pathways using a sequential pattern mining approach; second, we provide a mathematical model for patient scheduling; and finally, we combine the two approaches. In an experimental study carried out using real-world data, we show that our approach outperforms baseline approaches on two metrics.
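
The pathway-learning stage can be illustrated with a minimal sketch on hypothetical activity logs. This is a drastic simplification: it counts whole sequences, whereas real sequential pattern mining (e.g. PrefixSpan, which the authors may or may not use) finds frequent subsequences.

```python
from collections import Counter

def most_likely_pathways(histories, top_k=2):
    """Count how often each complete activity sequence occurs and
    return the top_k most frequent ones as 'most likely' pathways."""
    counts = Counter(tuple(h) for h in histories)
    return [list(seq) for seq, _ in counts.most_common(top_k)]

# Hypothetical activity logs for past patients of one diagnosis group.
histories = [
    ["admission", "surgery", "recovery", "discharge"],
    ["admission", "surgery", "recovery", "discharge"],
    ["admission", "imaging", "surgery", "recovery", "discharge"],
    ["admission", "surgery", "recovery", "discharge"],
]

print(most_likely_pathways(histories, top_k=1))
# → [['admission', 'surgery', 'recovery', 'discharge']]
```

The mined pathway would then feed the scheduling model as the expected sequence of resource requests for a new patient of that group.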

  3. Production Planning and Planting Pattern Scheduling Information System for Horticulture

    NASA Astrophysics Data System (ADS)

    Vitadiar, Tanhella Zein; Farikhin; Surarso, Bayu

    2018-02-01

    This paper presents a production planning and planting pattern scheduling system for horticulture farmers that combines two methods. A fuzzy time series method is used to predict demand based on sales volumes, while linear programming assists horticulture farmers in making production planning decisions and in determining cropping pattern schedules that accord with the demand predictions of the fuzzy time series method. The variables used in this paper are the size of the area, production advantage, the amount of seeds and the age of the plants. The research results in a production planning and planting pattern scheduling information system whose outputs are recommended planting schedules, harvest schedules and the number of seeds to be planted.
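
The fuzzy time series step can be sketched in a minimal first-order (Song–Chissom style) form. The sales data below are hypothetical, and this is a generic illustration of the technique, not the authors' exact model.

```python
def fts_forecast(series, n_intervals=5):
    """First-order fuzzy-time-series forecast: partition the data range
    into intervals, map each value to its interval, collect interval
    transition groups, and forecast with the successors' midpoints."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0
    def interval(x):
        return min(int((x - lo) / width), n_intervals - 1)
    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]
    labels = [interval(x) for x in series]
    groups = {}
    for a, b in zip(labels, labels[1:]):
        groups.setdefault(a, []).append(b)
    succ = groups.get(labels[-1], [labels[-1]])
    return sum(mid[j] for j in succ) / len(succ)

# Hypothetical monthly sales of one crop (arbitrary units).
sales = [40, 42, 45, 43, 48, 50, 47, 52, 55, 53]
print(round(fts_forecast(sales), 1))
# → 53.5
```

The forecast demand would then become the right-hand side of the demand constraint in the linear program that schedules planting dates and seed quantities.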

  4. Characterization, performance, and prediction of a lead-acid battery under simulated electric vehicle driving requirements

    NASA Technical Reports Server (NTRS)

    Ewashinka, J. G.; Bozek, J. M.

    1981-01-01

    A state-of-the-art 6-V battery module in current use by the electric vehicle industry was tested at the NASA Lewis Research Center to determine its performance characteristics under the SAE J227a driving schedules B, C, and D. The primary objective of the tests was to determine the effects of periods of recuperation and long and short periods of electrical regeneration in improving the performance of the battery module and hence extending the vehicle range. A secondary objective was to formulate a computer program that would predict the performance of this battery module for the above driving schedules. The results show excellent correlation between the laboratory tests and predicted results. The predicted performance compared with laboratory tests was within +2.4 to -3.7 percent for the D schedule, +0.5 to -7.1 percent for the C schedule, and better than -11.4 percent for the B schedule.

  5. Characterization, performance, and prediction of a lead-acid battery under simulated electric vehicle driving requirements

    NASA Astrophysics Data System (ADS)

    Ewashinka, J. G.; Bozek, J. M.

    1981-05-01

    A state-of-the-art 6-V battery module in current use by the electric vehicle industry was tested at the NASA Lewis Research Center to determine its performance characteristics under the SAE J227a driving schedules B, C, and D. The primary objective of the tests was to determine the effects of periods of recuperation and long and short periods of electrical regeneration in improving the performance of the battery module and hence extending the vehicle range. A secondary objective was to formulate a computer program that would predict the performance of this battery module for the above driving schedules. The results show excellent correlation between the laboratory tests and predicted results. The predicted performance compared with laboratory tests was within +2.4 to -3.7 percent for the D schedule, +0.5 to -7.1 percent for the C schedule, and better than -11.4 percent for the B schedule.

  6. A heuristic approach to incremental and reactive scheduling

    NASA Technical Reports Server (NTRS)

    Odubiyi, Jide B.; Zoch, David R.

    1989-01-01

    A heuristic approach to incremental and reactive scheduling is described. Incremental scheduling is the process of modifying an existing schedule when the initial schedule does not meet its stated goals. Reactive scheduling occurs in near real-time in response to changes in available resources or the occurrence of targets of opportunity. Only minor changes are made during both incremental and reactive scheduling, because a goal of re-scheduling procedures is to impact the existing schedule as little as possible. The heuristic search techniques described here, which are employed by the Request Oriented Scheduling Engine (ROSE), a prototype generic scheduler, efficiently approximate the cost of reaching a goal from a given state and provide effective mechanisms for controlling search.

  7. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
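
The prediction-then-wake-up idea can be sketched with a bootstrap particle filter on a 1-D target (hypothetical node positions and a random-walk motion model; this is a generic illustration, not the paper's implementation).

```python
import random, math

def predict_and_wake(observations, node_positions, sensing_range=2.0,
                     n_particles=500, noise=0.5):
    """Bootstrap particle filter for a 1-D target: propagate particles
    with a noisy drift motion model, weight them by the latest position
    observation, resample, then wake only the nodes within sensing
    range of the one-step-ahead predicted position."""
    random.seed(42)
    particles = [random.uniform(0, 10) for _ in range(n_particles)]
    for z in observations:
        particles = [p + random.gauss(1.0, noise) for p in particles]   # motion
        weights = [math.exp(-0.5 * ((z - p) / noise) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        particles = random.choices(particles, weights, k=n_particles)   # resample
    predicted = sum(particles) / n_particles + 1.0  # one-step-ahead prediction
    awake = [i for i, x in enumerate(node_positions)
             if abs(x - predicted) <= sensing_range]
    return predicted, awake

nodes = [0.0, 2.5, 5.0, 7.5, 10.0]  # hypothetical node locations
pred, awake = predict_and_wake([1.0, 2.1, 2.9, 4.2], nodes)
print(round(pred, 1), awake)
```

Nodes far from the predicted track stay asleep, which is the mechanism by which prediction prolongs sleep time and saves energy.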

  8. Forecasting Construction Cost Index based on visibility graph: A network approach

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

    Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties in CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast using link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with fewer errors. The results suggest that the proposed method can provide considerably accurate CCI predictions, contributing to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
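
The first step of the approach, converting a time series into a natural visibility graph, can be sketched as follows (small illustrative series, not real CCI data; the link-prediction step is omitted).

```python
def visibility_graph(series):
    """Build the natural visibility graph of a time series: nodes are
    sample indices, and (i, j) is an edge when every intermediate point
    lies strictly below the straight line joining (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[i] + (series[j] - series[i]) *
                   (k - i) / (j - i) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# The peak at index 1 blocks the view between indices 0 and 3.
ts = [1.0, 4.0, 2.0, 3.0]
print(sorted(visibility_graph(ts)))
# → [(0, 1), (1, 2), (1, 3), (2, 3)]
```

A link-prediction score over this graph (e.g. common neighbours) would then rank candidate future values, per the paper's forecasting idea.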

  9. Job Design and Ethnic Differences in Working Women’s Physical Activity

    PubMed Central

    Grzywacz, Joseph G.; Crain, A. Lauren; Martinson, Brian C.; Quandt, Sara A.

    2014-01-01

    Objective To document the role job control and schedule control play in shaping women’s physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Methods Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Results Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Conclusions Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time “created” by schedule flexibility for personal health enhancement. PMID:24034681

  10. Job design and ethnic differences in working women's physical activity.

    PubMed

    Grzywacz, Joseph G; Crain, A Lauren; Martinson, Brian C; Quandt, Sara A

    2014-01-01

    To document the role job control and schedule control play in shaping women's physical activity, and how it delineates educational and racial variability in associations of job and social control with physical activity. Prospective data were obtained from a community-based sample of working women (N = 302). Validated instruments measured job control and schedule control. Steps per day were assessed using New Lifestyles 800 activity monitors. Greater job control predicted more steps per day, whereas greater schedule control predicted fewer steps. Small indirect associations between ethnicity and physical activity were observed among women with a trade school degree or less but not for women with a college degree. Low job control created barriers to physical activity among working women with a trade school degree or less. Greater schedule control predicted less physical activity, suggesting women do not use time "created" by schedule flexibility for personal health enhancement.

  11. Surgical demand scheduling: a review.

    PubMed Central

    Magerlein, J M; Martin, J B

    1978-01-01

    This article reviews the literature on scheduling of patient demand for surgery and outlines an approach to improving overall performance of hospital surgical suites. Reported scheduling systems are categorized into those that schedule patients in advance of the surgical date and those that schedule available patients on the day of surgery. Approaches to estimating surgical procedure times are also reviewed, and the article concludes with a discussion of the failure to implement the majority of reported scheduling schemes. PMID:367987

  12. 2B-Alert Web: An Open-Access Tool for Predicting the Effects of Sleep/Wake Schedules and Caffeine Consumption on Neurobehavioral Performance.

    PubMed

    Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar

    2016-12-01

    Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. 
The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a group of individuals. This evidence-based tool can be used as a decision aid to design effective work schedules, guide the design of future sleep restriction and caffeine studies, and increase public awareness of the effects of sleep amounts, time of day, and caffeine on alertness. © 2016 Associated Professional Sleep Societies, LLC.

  13. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2016-01-01

    Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.

  14. Design of a final approach spacing tool for TRACON air traffic control

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Bergeron, Hugh

    1989-01-01

    This paper describes an automation tool that assists air traffic controllers in the Terminal Radar Approach Control (TRACON) Facilities in providing safe and efficient sequencing and spacing of arrival traffic. The automation tool, referred to as the Final Approach Spacing Tool (FAST), allows the controller to interactively choose various levels of automation and advisory information ranging from predicted time errors to speed and heading advisories for controlling time error. FAST also uses a timeline to display current scheduling and sequencing information for all aircraft in the TRACON airspace. FAST combines accurate predictive algorithms and state-of-the-art mouse and graphical interface technology to present advisory information to the controller. Furthermore, FAST exchanges various types of traffic information and communicates with automation tools being developed for the Air Route Traffic Control Center. Thus it is part of an integrated traffic management system for arrival traffic at major terminal areas.

  15. Assessment of CTAS ETA prediction capabilities

    NASA Astrophysics Data System (ADS)

    Bolender, Michael A.

    1994-11-01

    This report summarizes the work done to date in assessing the trajectory fidelity and estimated time of arrival (ETA) prediction capability of the NASA Ames Center TRACON Automation System (CTAS) software. The CTAS software suite is a series of computer programs designed to aid air traffic controllers in their tasks of safely scheduling the landing sequence of approaching aircraft. In particular, this report concerns the accuracy of the available measurements (e.g., position, altitude, etc.) that are input to the software, as well as the accuracy of the final data that is made available to the air traffic controllers.

  16. A Novel Particle Swarm Optimization Approach for Grid Job Scheduling

    NASA Astrophysics Data System (ADS)

    Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith

    This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm based on the simulation of the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approaches reported in the literature.
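
The PSO mechanics can be sketched minimally for job-to-machine assignment. This is not the paper's encoding: here each particle is a real vector with one dimension per job, truncated to a machine index, and the fitness is makespan alone (the paper also minimizes flowtime).

```python
import random

def pso_schedule(job_times, n_machines, n_particles=20, iters=100):
    """Minimal PSO: positions are real vectors, int(x) picks the machine
    for each job, and the swarm tracks personal and global bests."""
    random.seed(1)
    n = len(job_times)
    def makespan(pos):
        load = [0.0] * n_machines
        for t, x in zip(job_times, pos):
            load[min(int(x), n_machines - 1)] += t
        return max(load)
    swarm = [[random.uniform(0, n_machines) for _ in range(n)]
             for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=makespan)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n):  # inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] = min(max(p[d] + vel[i][d], 0.0), n_machines - 1e-9)
            if makespan(p) < makespan(pbest[i]):
                pbest[i] = p[:]
                if makespan(p) < makespan(gbest):
                    gbest = p[:]
    return makespan(gbest)

jobs = [4, 3, 3, 2, 2, 2]  # hypothetical job run times
print(pso_schedule(jobs, n_machines=2))  # optimal makespan is 8
```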

  17. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

    Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor-node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H²RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller.
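
A processor-demand schedulability test of the kind mentioned above can be illustrated for periodic task sets under EDF. This is a generic textbook check, not the H²RTS-specific analysis, and the task set is hypothetical.

```python
from math import gcd

def dbf(tasks, t):
    """Processor demand in [0, t] for periodic tasks (C, T, D):
    jobs with both release and deadline inside the interval."""
    return sum(((t - d) // p + 1) * c for c, p, d in tasks if t >= d)

def edf_schedulable(tasks):
    """Sufficiency check in the spirit of processor-demand analysis:
    verify dbf(t) <= t at every absolute deadline up to the hyperperiod."""
    hyper = 1
    for _, p, _ in tasks:
        hyper = hyper * p // gcd(hyper, p)
    deadlines = sorted({k * p + d for _, p, d in tasks
                        for k in range((hyper - d) // p + 1)})
    return all(dbf(tasks, t) <= t for t in deadlines)

# Hypothetical sensor-node task set: (execution time, period, deadline).
tasks = [(1, 4, 4), (2, 6, 6), (1, 12, 12)]
print(edf_schedulable(tasks))  # → True
```

An overloaded set such as `[(3, 4, 4), (2, 6, 6)]` fails the test, since its demand exceeds the interval length at t = 12.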

  18. The Automated Conflict Resolution System (ACRS)

    NASA Technical Reports Server (NTRS)

    Kaplan, Ted; Musliner, Andrew; Wampler, David

    1993-01-01

    The Automated Conflict Resolution System (ACRS) is a mission-current scheduling aid that predicts periods of mutual interference when two or more orbiting spacecraft are scheduled to communicate with the same Tracking and Data Relay Satellite (TDRS) at the same time. The mutual interference predicted has the potential to degrade or prevent communications. Thus the ACRS system is a useful tool for aiding in the scheduling of Space Network (SN) communications.

  19. The Automated Conflict Resolution System (ACRS)

    NASA Astrophysics Data System (ADS)

    Kaplan, Ted; Musliner, Andrew; Wampler, David

    1993-11-01

    The Automated Conflict Resolution System (ACRS) is a mission-current scheduling aid that predicts periods of mutual interference when two or more orbiting spacecraft are scheduled to communicate with the same Tracking and Data Relay Satellite (TDRS) at the same time. The mutual interference predicted has the potential to degrade or prevent communications. Thus the ACRS system is a useful tool for aiding in the scheduling of Space Network (SN) communications.

  20. Completable scheduling: An integrated approach to planning and scheduling

    NASA Technical Reports Server (NTRS)

    Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

    The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach which integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by taking runtime information into consideration when making planning and scheduling decisions. Furthermore, through the achievability criteria placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and the guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.

  1. The effects of a split sleep-wake schedule on neurobehavioural performance and predictions of performance under conditions of forced desynchrony.

    PubMed

    Kosmadopoulos, Anastasi; Sargent, Charli; Darwent, David; Zhou, Xuan; Dawson, Drew; Roach, Gregory D

    2014-12-01

    Extended wakefulness, sleep loss, and circadian misalignment are factors associated with an increased accident risk in shiftwork. Splitting shifts into multiple shorter periods per day may mitigate these risks by alleviating prior wake. However, the effect of splitting the sleep-wake schedule on the homeostatic and circadian contributions to neurobehavioural performance and subjective assessments of one's ability to perform are not known. Twenty-nine male participants lived in a time isolation laboratory for 13 d, assigned to one of two 28-h forced desynchrony (FD) schedules. Depending on the assigned schedule, participants were provided the same total time in bed (TIB) each FD cycle, either consolidated into a single period (9.33 h TIB) or split into two equal halves (2 × 4.67 h TIB). Neurobehavioural performance was regularly assessed with a psychomotor vigilance task (PVT) and subjectively-assessed ability was measured with a prediction of performance on a visual analogue scale. Polysomnography was used to assess sleep, and core body temperature was recorded to assess circadian phase. On average, participants obtained the same amount of sleep in both schedules, but those in the split schedule obtained more slow wave sleep (SWS) on FD days. Mixed-effects ANOVAs indicated no overall difference between the standard and split schedules in neurobehavioural performance or predictions of performance. Main effects of circadian phase and prior wake were present for both schedules, such that performance and subjective ratings of ability were best around the circadian acrophase, worst around the nadir, and declined with increasing prior wake. There was a schedule by circadian phase interaction for all neurobehavioural performance metrics such that performance was better in the split schedule than the standard schedule around the nadir. There was no such interaction for predictions of performance. 
Performance during the standard schedule was significantly better than the split schedule at 2 h of prior wake, but declined at a steeper rate such that the schedules converged by 4.5-7 h of prior wake. Overall, the results indicate that when the total opportunity for sleep per day is satisfactory, a split sleep-wake schedule is not detrimental to sleep or performance. Indeed, though not reflected in subjective assessments of performance capacity, splitting the schedule may be of some benefit, given its reduction of neurobehavioural impairment at night and its association with increased SWS. Therefore, for some industries that require operations to be sustained around the clock, implementing a split work-rest schedule may be of assistance.

  2. Simulation based energy-resource efficient manufacturing integrated with in-process virtual management

    NASA Astrophysics Data System (ADS)

    Katchasuwanmanee, Kanet; Cheng, Kai; Bateman, Richard

    2016-09-01

    As energy efficiency is one of the key essentials towards sustainability, the development of an energy-resource efficient manufacturing system is among the great challenges facing current industry. Meanwhile, the availability of advanced technological innovation has created more complex manufacturing systems that involve a large variety of processes and machines serving different functions. To extend the limited knowledge on energy-efficient scheduling, the research presented in this paper attempts to model the production schedule of an operation process by considering the balance of energy consumption reduction in production, production work flow (productivity) and quality. An innovative systematic approach to manufacturing energy-resource efficiency is proposed with virtual simulation as a predictive modelling enabler, which provides real-time manufacturing monitoring, virtual displays and decision-making support, and consequently an analytical, multidimensional correlation analysis of the interdependent relationships among energy consumption, work flow and quality errors. The regression analysis results demonstrate positive relationships between work flow and quality errors and between work flow and energy consumption. When production scheduling is controlled through optimization of work flow, quality errors and overall energy consumption, energy-resource efficiency can be achieved in production. Together, this proposed multidimensional modelling and analysis approach provides optimal conditions for production scheduling in the manufacturing system, taking account of production quality, energy consumption and resource efficiency, which can lead to key competitive advantages and sustainability of system operations in the industry.
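
The regression step can be illustrated with a minimal ordinary-least-squares fit. The (work flow, energy consumption) samples below are hypothetical, and the sketch only shows how a positive relationship would be detected.

```python
def linreg(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (work-flow rate, energy-consumption) samples.
flow = [10, 12, 14, 16, 18]
energy = [5.1, 5.9, 7.2, 7.8, 9.0]
slope, intercept = linreg(flow, energy)
print(slope > 0)  # → True  (a positive flow–energy relationship)
```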

  3. Interactive computer aided shift scheduling.

    PubMed

    Gaertner, J

    2001-12-01

    This paper starts with a discussion of computer aided shift scheduling. After a brief review of earlier approaches, two conceptualizations of this field are introduced: First, shift scheduling as a field that ranges from extremely stable rosters at one pole to rather market-like approaches on the other pole. Unfortunately, already small alterations of a scheduling problem (e.g., the number of groups, the number of shifts) may call for rather different approaches and tools. Second, their environment shapes scheduling problems and scheduling has to be done within idiosyncratic organizational settings. This calls for the amalgamation of scheduling with other tasks (e.g., accounting) and for reflections whether better solutions might become possible by changes in the problem definition (e.g., other service levels, organizational changes). Therefore shift scheduling should be understood as a highly connected problem. Building upon these two conceptualizations, a few examples of software that ease scheduling in some areas of this field are given and future research questions are outlined.

  4. A Market-Based Approach to Multi-factory Scheduling

    NASA Astrophysics Data System (ADS)

    Vytelingum, Perukrishnen; Rogers, Alex; MacBeth, Douglas K.; Dutta, Partha; Stranjak, Armin; Jennings, Nicholas R.

    In this paper, we report on the design of a novel market-based approach for decentralised scheduling across multiple factories. Specifically, because of the limitations of scheduling in a centralised manner - which requires a center to have complete and perfect information for optimality and the truthful revelation of potentially commercially private preferences to that center - we advocate an informationally decentralised approach that is both agile and dynamic. In particular, this work adopts a market-based approach for decentralised scheduling by considering the different stakeholders representing different factories as self-interested, profit-motivated economic agents that trade resources for the scheduling of jobs. The overall schedule of these jobs is then an emergent behaviour of the strategic interaction of these trading agents bidding for resources in a market based on limited information and their own preferences. Using a simple (zero-intelligence) bidding strategy, we empirically demonstrate that our market-based approach achieves a lower bound efficiency of 84%. This represents a trade-off between a reasonable level of efficiency (compared to a centralised approach) and the desirable benefits of a decentralised solution.
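
The zero-intelligence bidding strategy can be sketched as a continuous double auction with random bids and asks. The valuations and costs below are hypothetical, and this generic sketch does not reproduce the paper's market design or its 84% efficiency bound.

```python
import random

def zi_market(values, costs, rounds=2000, max_price=100):
    """Zero-intelligence double auction: each round a random untraded
    buyer bids U(0, value) and a random untraded seller asks
    U(cost, max_price); they trade when bid >= ask. Returns realized
    surplus as a fraction of the optimal allocation's surplus."""
    random.seed(7)
    buyers = sorted(values, reverse=True)
    sellers = sorted(costs)
    optimal = sum(v - c for v, c in zip(buyers, sellers) if v > c)
    traded_b, traded_s, surplus = set(), set(), 0.0
    for _ in range(rounds):
        b = [i for i in range(len(buyers)) if i not in traded_b]
        s = [j for j in range(len(sellers)) if j not in traded_s]
        if not b or not s:
            break
        i, j = random.choice(b), random.choice(s)
        bid = random.uniform(0, buyers[i])
        ask = random.uniform(sellers[j], max_price)
        if bid >= ask:
            traded_b.add(i); traded_s.add(j)
            surplus += buyers[i] - sellers[j]
    return surplus / optimal

values = [90, 80, 70, 60, 50]   # hypothetical job valuations per factory
costs = [10, 20, 30, 40, 55]    # hypothetical resource owners' costs
print(round(zi_market(values, costs), 2))
```

Even with purely random bidding, most of the available surplus is typically realized, which is why ZI traders are a common efficiency baseline for market-based schedulers.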

  5. A review on prognostics approaches for remaining useful life of lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Su, C.; Chen, H. J.

    2017-11-01

    Lithium-ion (Li-ion) batteries are a core component of various industrial systems, including satellites, spacecraft and electric vehicles. The mechanisms of performance degradation and remaining useful life (RUL) estimation correlate closely with the operating state and reliability of such systems. Furthermore, RUL prediction of Li-ion batteries is crucial for operation scheduling, spare parts management and maintenance decisions for these kinds of systems. In recent years, performance degradation prognostics and RUL estimation have become a focus of research on Li-ion batteries. This paper summarizes the approaches used in Li-ion battery RUL estimation, classified into three categories: model-based approaches, data-based approaches and hybrid approaches. The key issues and future trends for battery RUL estimation are also discussed.

  6. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    NASA Technical Reports Server (NTRS)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily; hide

    2013-01-01

The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has been shown to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how it yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance by operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and discusses the complementary skill associated with individual models.

  7. New VLBI2010 scheduling strategies and implications on the terrestrial reference frames.

    PubMed

    Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald

    In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.

  8. New VLBI2010 scheduling strategies and implications on the terrestrial reference frames

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald

    2014-05-01

    In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.

  9. Predicting No-Shows in Radiology Using Regression Modeling of Data Available in the Electronic Medical Record.

    PubMed

    Harvey, H Benjamin; Liu, Catherine; Ai, Jing; Jaworsky, Cristina; Guerrier, Claude Emmanuel; Flores, Efren; Pianykh, Oleg

    2017-10-01

To test whether data elements available in the electronic medical record (EMR) can be effectively leveraged to predict failure to attend a scheduled radiology examination. Using data from a large academic medical center, we identified all patients with a diagnostic imaging examination scheduled from January 1, 2016, to April 1, 2016, and determined whether the patient successfully attended the examination. Demographic, clinical, and health services utilization variables available in the EMR potentially relevant to examination attendance were recorded for each patient. We used descriptive statistics and logistic regression models to test whether these data elements could predict failure to attend a scheduled radiology examination. The predictive accuracy of the regression models was determined by calculating the area under the receiver operating characteristic curve. Among the 54,652 patient appointments with radiology examinations scheduled during the study period, 6.5% were no-shows. No-show rates were highest for the modalities of mammography and CT and lowest for PET and MRI. Logistic regression indicated that 16 of the 27 demographic, clinical, and health services utilization factors were significantly associated with failure to attend a scheduled radiology examination (P ≤ .05). Stepwise logistic regression analysis demonstrated that previous no-shows, days between scheduling and appointments, modality type, and insurance type were most strongly predictive of no-shows. A model considering all 16 data elements had good ability to predict radiology no-shows (area under the receiver operating characteristic curve = 0.753). The predictive ability was similar or improved when these models were analyzed by modality. Patient and examination information readily available in the EMR can be successfully used to predict radiology no-shows.
Moving forward, this information can be proactively leveraged to identify patients who might benefit from additional patient engagement through appointment reminders or other targeted interventions to avoid no-shows. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
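The modelling pipeline described in this record can be sketched with scikit-learn on synthetic data (the feature names, coefficients, and generated outcomes below are invented stand-ins for the study's EMR variables, not its actual data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for the strongest reported predictors: prior no-shows,
# lead time (days between scheduling and appointment), and a modality code.
prior_no_shows = rng.poisson(0.3, n)
lead_time_days = rng.integers(0, 60, n)
modality = rng.integers(0, 5, n)
# Synthetic outcome: no-show probability rises with prior no-shows and lead time.
logit = -3.0 + 0.8 * prior_no_shows + 0.03 * lead_time_days
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_no_shows, lead_time_days, modality])
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")
```

The study reports an analogous area-under-the-curve of 0.753 on its 16-variable model; the sketch only shows the mechanics of fitting and scoring such a model.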

  10. The interference of flexible working times with the utility of time: a predictor of social impairment?

    PubMed

    Wirtz, Anna; Giebel, Ole; Schomann, Carsten; Nachreiner, Friedhelm

    2008-04-01

Periodic components inherent in actual schedules of flexible working hours and their interference with social rhythms were measured using spectrum analysis. The resulting indicators of periodicity and interference were then related to the reported social impairments of workers. The results show that suppression of the 24 h and 168 h (seven-day) components (absence of periodicity) in the work schedules predicts reported social impairment. However, even if relatively strong 24 h and 168 h components remain in the work schedules, their interference with the social rhythm (measured using the phase difference between working hours and the utility of time) further predicts impairment. The results thus indicate that the periodicity of working hours and the amount of (social) desynchronization induced by flexible work schedules can be used both for predicting the impairing effects of specific work schedules on social well-being and for designing socially acceptable flexible working hours.
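The spectrum-analysis idea can be illustrated with a short sketch (a synthetic schedule, not the study's data): a regular Monday-to-Friday day shift shows strong spectral power at the 24 h and 168 h periods, and a flexible schedule that suppresses these components would flatten exactly those peaks.

```python
import numpy as np

# Synthetic hourly work indicator over 8 weeks: 1 during work hours, 0 otherwise.
# Work pattern: hours 8-16 on the first five days of each 168 h week.
hours = np.arange(8 * 168)
working = ((hours % 24 >= 8) & (hours % 24 < 16) & (hours % 168 < 120)).astype(float)

# Power spectrum of the mean-removed schedule; peaks at the 24 h and 168 h
# periods indicate the regular rhythms whose suppression predicts impairment.
spectrum = np.abs(np.fft.rfft(working - working.mean())) ** 2
freqs = np.fft.rfftfreq(len(working), d=1.0)  # cycles per hour
power_24h = spectrum[np.argmin(np.abs(freqs - 1 / 24))]
power_168h = spectrum[np.argmin(np.abs(freqs - 1 / 168))]
print(power_24h > np.median(spectrum), power_168h > np.median(spectrum))
```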

  11. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.

  12. Better Redd than Dead: Optimizing Reservoir Operations for Wild Fish Survival During Drought

    NASA Astrophysics Data System (ADS)

    Adams, L. E.; Lund, J. R.; Quiñones, R.

    2014-12-01

Extreme droughts are difficult to predict and may incur large economic and ecological costs. Dam operations in drought usually aim to minimize economic costs. However, dam operations also offer an opportunity to increase wild fish survival under difficult conditions. Here, we develop a probabilistic optimization approach for generating reservoir release schedules that maximize fish survival in regulated rivers. A case study applies the approach to wild Fall-run Chinook Salmon below Folsom Dam on California's American River. Our results indicate that releasing more water early in the drought will, on average, save more wild fish over the long term.

  13. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  14. Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding

    PubMed Central

    Cui, Laizhong; Lu, Nan; Chen, Fu

    2014-01-01

Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. However, pull scheduling incurs large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves efficiency, but it may also introduce extra delay and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem so that it can be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead, and we conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system are significantly improved, especially in dynamic environments. PMID:25114968
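Contribution (ii), casting a push-scheduling assignment as a min-cost flow, can be sketched on a toy instance (the node names, capacities, and delay costs below are illustrative, not the paper's actual formulation), for example with networkx:

```python
import networkx as nx

# Toy instance: assign two coded segments to upstream peers, minimising delay.
G = nx.DiGraph()
G.add_node("src", demand=-2)   # two segments to schedule (supply)
G.add_node("sink", demand=2)   # both must be delivered (demand)
for seg in ("seg1", "seg2"):
    G.add_edge("src", seg, capacity=1, weight=0)
for peer, delay in (("peerA", 3), ("peerB", 1)):
    G.add_edge(peer, "sink", capacity=1, weight=0)   # each peer serves one segment
    for seg in ("seg1", "seg2"):
        G.add_edge(seg, peer, capacity=1, weight=delay)  # cost = expected delay

flow = nx.min_cost_flow(G)
cost = nx.cost_of_flow(G, flow)
print(cost)  # peer capacities force one segment per peer: total delay 3 + 1 = 4
```

Because peer capacity is 1, both segments cannot use the low-delay peer, so the optimum routes one segment through each peer at total cost 4; real instances add window, playback-deadline, and coding constraints.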

  15. Request-Driven Schedule Automation for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol

    2010-01-01

The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA's Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user's decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S(sup 3)), a web-based collaborative scheduling system now under development for deployment to all DSN users.

  16. Electric Vehicles Charging Scheduling Strategy Considering the Uncertainty of Photovoltaic Output

    NASA Astrophysics Data System (ADS)

    Wei, Xiangxiang; Su, Su; Yue, Yunli; Wang, Wei; He, Luobin; Li, Hao; Ota, Yutaka

    2017-05-01

The rapid development of electric vehicles and distributed generation brings new challenges to the secure and economic operation of the power system, so collaborative research on EVs and distributed generation has important significance for the distribution network. Under this background, an EV charging scheduling strategy considering the uncertainty of photovoltaic (PV) output is proposed. The characteristics of EV charging are analysed first, and a PV output prediction method based on a PV database is then proposed. On this basis, an EV charging scheduling strategy is proposed with the goals of satisfying EV users' charging preferences and decreasing power losses in the distribution network. The case study shows that the proposed method can predict the PV output accurately and that the EV charging scheduling strategy can reduce power losses and stabilize load fluctuations in the distribution network.

  17. Magnetostrictive direct drive motors

    NASA Technical Reports Server (NTRS)

    Naik, Dipak; Dehoff, P. H.

    1990-01-01

The development of magnetostrictive direct-drive research motors to power robot joints is discussed. Motors of this type are expected to produce extraordinary torque density, to perform microradian incremental steps, and to be self-braking and safe with the power off. Several motor designs using magnetostrictive materials have been attempted. One of the candidate approaches, the magnetostrictive roller drive, is described: how the design functions and why this approach is inherently superior to the others. The design is then modelled and its expected performance predicted. This candidate design is currently undergoing detailed engineering, with prototype construction and testing scheduled for mid-1991.

  18. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique, in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H2RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856
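A processor-demand sufficiency test of the kind mentioned in the H2RTS schedulability analysis can be sketched as follows (the task parameters are illustrative; the paper's actual analysis also covers the hybrid clock-/event-driven case):

```python
def processor_demand(tasks, t):
    """Demand bound for synchronous periodic tasks (c, p, d) over [0, t]:
    total execution time of all jobs that both arrive and have deadlines in [0, t]."""
    return sum((t - d) // p * c + c for c, p, d in tasks if t >= d)

def edf_schedulable(tasks, horizon):
    """Sufficient check: demand never exceeds available time at any deadline up to horizon."""
    deadlines = sorted({d + k * p
                        for c, p, d in tasks
                        for k in range(horizon // p + 1)
                        if d + k * p <= horizon})
    return all(processor_demand(tasks, t) <= t for t in deadlines)

# Three periodic tasks: (execution time, period, relative deadline), in ticks.
tasks = [(1, 4, 4), (2, 8, 8), (1, 16, 16)]
print(edf_schedulable(tasks, horizon=16))
```

For these tasks the utilisation is 1/4 + 2/8 + 1/16 ≈ 0.56, and the demand at every deadline up to the horizon stays below the elapsed time, so the check passes.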

  19. Stochastic Modeling of Airlines' Scheduled Services Revenue

    NASA Technical Reports Server (NTRS)

    Hamed, M. M.

    1999-01-01

Airlines' revenue generated from scheduled services accounts for the major share of total revenue. As such, predicting airlines' total scheduled-services revenue is of great importance both to governments (in the case of national airlines) and to private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of airlines' total scheduled-services revenue is dealt with in this paper. Four key components of an airline's scheduled services are considered: revenues generated from passengers, cargo, mail, and excess baggage. By addressing the revenue generated from each scheduled service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled-services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.

  1. Predicting Scheduling and Attending for an Oral Cancer Examination

    PubMed Central

    Shepperd, James A.; Emanuel, Amber S.; Howell, Jennifer L.; Logan, Henrietta L.

    2015-01-01

Background Oral and pharyngeal cancer is highly treatable if diagnosed early, yet late diagnosis is commonplace, apparently because of delays in undergoing an oral cancer examination. Purpose We explored predictors of scheduling and attending an oral cancer examination among a sample of Black and White men who were at high risk for oral cancer because they smoked. Methods During an in-person interview, participants (N = 315) from rural Florida learned about oral and pharyngeal cancer, completed survey measures, and were offered a free examination in the next week. Later, participants received a follow-up phone call to explore why they did or did not attend their examination. Results Consistent with the notion that scheduling and attending an oral cancer exam represent distinct decisions, we found that the two outcomes had different predictors. Defensive avoidance and exam efficacy predicted scheduling an examination; exam efficacy and having coping resources, time, and transportation predicted attending the examination. Open-ended responses revealed that the dominant reasons participants offered for missing a scheduled examination were conflicting obligations, forgetting, and confusion or misunderstanding about the examination. Conclusions The results suggest interventions to increase scheduling and attending an oral cancer examination. PMID:26152644

  2. Family risk as a predictor of initial engagement and follow-through in a universal nurse home visiting program to prevent child maltreatment.

    PubMed

    Alonso-Marsden, Shelley; Dodge, Kenneth A; O'Donnell, Karen J; Murphy, Robert A; Sato, Jeannine M; Christopoulos, Christina

    2013-08-01

    As nurse home visiting to prevent child maltreatment grows in popularity with both program administrators and legislators, it is important to understand engagement in such programs in order to improve their community-wide effects. This report examines family demographic and infant health risk factors that predict engagement and follow-through in a universal home-based maltreatment prevention program for new mothers in Durham County, North Carolina. Trained staff members attempted to schedule home visits for all new mothers during the birthing hospital stay, and then nurses completed scheduled visits three to five weeks later. Medical record data was used to identify family demographic and infant health risk factors for maltreatment. These variables were used to predict program engagement (scheduling a visit) and follow-through (completing a scheduled visit). Program staff members were successful in scheduling 78% of eligible families for a visit and completing 85% of scheduled visits. Overall, 66% of eligible families completed at least one visit. Structural equation modeling (SEM) analyses indicated that high demographic risk and low infant health risk were predictive of scheduling a visit. Both low demographic and infant health risk were predictive of visit completion. Findings suggest that while higher demographic risk increases families' initial engagement, it might also inhibit their follow-through. Additionally, parents of medically at-risk infants may be particularly difficult to engage in universal home visiting interventions. Implications for recruitment strategies of home visiting programs are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Cascading Delay Risk of Airline Workforce Deployments with Crew Pairing and Schedule Optimization.

    PubMed

    Chung, Sai Ho; Ma, Hoi Lam; Chan, Hing Kai

    2017-08-01

    This article concerns the assignment of buffer time between two connected flights and the number of reserve crews in crew pairing to mitigate flight disruption due to flight arrival delay. Insufficient crew members for a flight will lead to flight disruptions such as delays or cancellations. In reality, most of these disruption cases are due to arrival delays of the previous flights. To tackle this problem, many research studies have examined the assignment method based on the historical flight arrival delay data of the concerned flights. However, flight arrival delays can be triggered by numerous factors. Accordingly, this article proposes a new forecasting approach using a cascade neural network, which considers a massive amount of historical flight arrival and departure data. The approach also incorporates learning ability so that unknown relationships behind the data can be revealed. Based on the expected flight arrival delay, the buffer time can be determined and a new dynamic reserve crew strategy can then be used to determine the required number of reserve crews. Numerical experiments are carried out based on one year of flight data obtained from 112 airports around the world. The results demonstrate that by predicting the flight departure delay as the input for the prediction of the flight arrival delay, the prediction accuracy can be increased. Moreover, by using the new dynamic reserve crew strategy, the total crew cost can be reduced. This significantly benefits airlines in flight schedule stability and cost saving in the current big data era. © 2016 Society for Risk Analysis.
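The cascade idea in this record, predicting departure delay first and feeding that prediction into the arrival-delay model, can be sketched with linear models standing in for the paper's cascade neural network (synthetic data; the feature names are invented stand-ins):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
# Invented stand-ins: slack = turnaround slack at the departure airport,
# weather = a weather index at the arrival airport.
slack = rng.normal(size=n)
weather = rng.normal(size=n)
dep_delay = 10 - 4 * slack + rng.normal(0, 2, n)
arr_delay = 5 + 0.9 * dep_delay + 2 * weather + rng.normal(0, 2, n)

# Stage 1: predict departure delay from departure-side features.
stage1 = LinearRegression().fit(slack.reshape(-1, 1), dep_delay)
dep_hat = stage1.predict(slack.reshape(-1, 1))

# Stage 2 (the cascade): feed the *predicted* departure delay into the
# arrival-delay model alongside the arrival-side features.
X_base = weather.reshape(-1, 1)
X_casc = np.column_stack([weather, dep_hat])
baseline_r2 = LinearRegression().fit(X_base, arr_delay).score(X_base, arr_delay)
cascade_r2 = LinearRegression().fit(X_casc, arr_delay).score(X_casc, arr_delay)
print(round(baseline_r2, 2), round(cascade_r2, 2))
```

The cascade model explains substantially more arrival-delay variance than the baseline, mirroring the paper's finding that predicting departure delay as an input improves arrival-delay prediction; the expected arrival delay then sets the buffer time and reserve-crew count.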

  4. Estimating the cost of major ongoing cost plus hardware development programs

    NASA Technical Reports Server (NTRS)

    Bush, J. C.

    1990-01-01

    Approaches are developed for forecasting the cost of major hardware development programs while these programs are in the design and development C/D phase. Three approaches are developed: a schedule assessment technique for bottom-line summary cost estimation, a detailed cost estimation approach, and an intermediate cost element analysis procedure. The schedule assessment technique was developed using historical cost/schedule performance data.

  5. Scheduling from the perspective of the application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berman, F.; Wolski, R.

    1996-12-31

Metacomputing is the aggregation of distributed and high-performance resources on coordinated networks. With careful scheduling, resource-intensive applications can be implemented efficiently on metacomputing systems at the sizes of interest to developers and users. In this paper we focus on the problem of scheduling applications on metacomputing systems. We introduce the concept of application-centric scheduling, in which everything about the system is evaluated in terms of its impact on the application. Application-centric scheduling is used by virtually all metacomputer programmers to achieve performance on metacomputing systems. We describe two successful metacomputing applications to illustrate this approach, and describe AppLeS scheduling agents, which generalize the application-centric scheduling approach. Finally, we show preliminary results which compare AppLeS-derived schedules with conventional strip and blocked schedules for a two-dimensional Jacobi code.

  6. Planner-Based Control of Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott

    2005-01-01

The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This allows effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity, we can use fast, simple responses at the lowest level and predictive but complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe a reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.

  7. Dynamic scheduling and planning parallel observations on large Radio Telescope Arrays with the Square Kilometre Array in mind

    NASA Astrophysics Data System (ADS)

    Buchner, Johannes

    2011-12-01

    Scheduling, the task of producing a time table for resources and tasks, is well-known to be a difficult problem the more resources are involved (a NP-hard problem). This is about to become an issue in Radio astronomy as observatories consisting of hundreds to thousands of telescopes are planned and operated. The Square Kilometre Array (SKA), which Australia and New Zealand bid to host, is aiming for scales where current approaches -- in construction, operation but also scheduling -- are insufficent. Although manual scheduling is common today, the problem is becoming complicated by the demand for (1) independent sub-arrays doing simultaneous observations, which requires the scheduler to plan parallel observations and (2) dynamic re-scheduling on changed conditions. Both of these requirements apply to the SKA, especially in the construction phase. We review the scheduling approaches taken in the astronomy literature, as well as investigate techniques from human schedulers and today's observatories. The scheduling problem is specified in general for scientific observations and in particular on radio telescope arrays. Also taken into account is the fact that the observatory may be oversubscribed, requiring the scheduling problem to be integrated with a planning process. We solve this long-term scheduling problem using a time-based encoding that works in the very general case of observation scheduling. This research then compares algorithms from various approaches, including fast heuristics from CPU scheduling, Linear Integer Programming and Genetic algorithms, Branch-and-Bound enumeration schemes. Measures include not only goodness of the solution, but also scalability and re-scheduling capabilities. In conclusion, we have identified a fast and good scheduling approach that allows (re-)scheduling difficult and changing problems by combining heuristics with a Genetic algorithm using block-wise mutation operations. 
We are able to explain and eliminate two problems reported in the literature: the inability of a GA to properly improve schedules, and the generation of schedules with frequent interruptions. Finally, we demonstrate the scheduling framework on several operating telescopes: (1) dynamic re-scheduling with the AUT Warkworth 12m telescope, (2) scheduling for the Australian Mopra 22m telescope, and (3) scheduling for the Allen Telescope Array. Furthermore, we discuss the applicability of the presented scheduling framework to the Atacama Large Millimeter/submillimeter Array (ALMA, under construction) and the SKA. In particular, during the development phase of the SKA, this dynamic, scalable scheduling framework can accommodate changing conditions.
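The combination the abstract describes, heuristics seeding a genetic algorithm whose mutation operator works on whole blocks of genes, can be sketched as follows. This is a minimal illustration assuming a time-based encoding (one start-time gene per observation) and a simple overlap-penalty fitness; the paper's actual encoding, heuristics, and operators are not reproduced here.

```python
import random

def fitness(schedule, durations, horizon):
    """Negated total overlap between observations (0 = conflict-free)."""
    penalty = 0
    items = sorted(zip(schedule, durations))
    for (s1, d1), (s2, _d2) in zip(items, items[1:]):
        penalty += max(0, s1 + d1 - s2)   # overlap with the next observation
    return -penalty

def blockwise_mutate(schedule, horizon, block=2):
    """Shift a contiguous block of genes together, moving a whole
    sub-schedule as a unit instead of mutating isolated genes."""
    child = schedule[:]
    i = random.randrange(len(child) - block + 1)
    shift = random.randint(-5, 5)
    for j in range(i, i + block):
        child[j] = min(max(child[j] + shift, 0), horizon)
    return child

def evolve(durations, horizon, pop_size=30, generations=200):
    """Elitist GA over start-time chromosomes using block-wise mutation."""
    pop = [[random.randint(0, horizon) for _ in durations]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, durations, horizon), reverse=True)
        elite = pop[:pop_size // 2]       # keep the best half
        pop = elite + [blockwise_mutate(random.choice(elite), horizon)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda s: fitness(s, durations, horizon))
```

Mutating a block rather than single genes lets the search move sub-schedules coherently, which is the mechanism the authors credit for overcoming a GA's failure to improve schedules gene by gene.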

  8. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT

    PubMed Central

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-01-01

In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in the opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns the temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. Relying on such predictions, a neighbour discovery scheduler is proposed that jointly optimizes discovery latency and power consumption, maximizing communication time when contacts are expected with high probability and saving power when contacts are expected with low probability. A comprehensive performance evaluation on several sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers that optimize not only neighbour discovery but the entire communication process. PMID:27827909

  9. An Arrival and Departure Time Predictor for Scheduling Communication in Opportunistic IoT.

    PubMed

    Pozza, Riccardo; Georgoulas, Stylianos; Moessner, Klaus; Nati, Michele; Gluhak, Alexander; Krco, Srdjan

    2016-11-04

In this article, an Arrival and Departure Time Predictor (ADTP) for scheduling communication in the opportunistic Internet of Things (IoT) is presented. The proposed algorithm learns the temporal patterns of encounters between IoT devices and predicts future arrival and departure times, and therefore future contact durations. Relying on such predictions, a neighbour discovery scheduler is proposed that jointly optimizes discovery latency and power consumption, maximizing communication time when contacts are expected with high probability and saving power when contacts are expected with low probability. A comprehensive performance evaluation on several sets of synthetic and real-world traces shows that ADTP performs favourably with respect to the previous state of the art. This prediction framework opens opportunities for transmission planners and schedulers that optimize not only neighbour discovery but the entire communication process.
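The prediction step that both records describe can be illustrated with a deliberately simple model: learn each device's typical arrival and departure offsets within a day, and predict the next contact from their averages. The `ContactPredictor` class, the 24 h periodicity assumption, and the running-average estimator below are illustrative assumptions; ADTP itself uses a more sophisticated learning scheme.

```python
from collections import defaultdict

DAY = 24 * 3600  # seconds in a day; assumes daily encounter periodicity

class ContactPredictor:
    def __init__(self):
        self.arrivals = defaultdict(list)    # device -> arrival offsets in a day
        self.departures = defaultdict(list)  # device -> departure offsets

    def observe(self, device, arrival, departure):
        """Record one observed contact (timestamps in seconds)."""
        self.arrivals[device].append(arrival % DAY)
        self.departures[device].append(departure % DAY)

    def predict(self, device):
        """Predict next arrival/departure offsets and contact duration."""
        a = sum(self.arrivals[device]) / len(self.arrivals[device])
        d = sum(self.departures[device]) / len(self.departures[device])
        return a, d, d - a

p = ContactPredictor()
p.observe("sensor-1", arrival=8 * 3600, departure=9 * 3600)
p.observe("sensor-1", arrival=8 * 3600 + DAY, departure=9 * 3600 + DAY)
arrival, departure, duration = p.predict("sensor-1")
```

A scheduler built on such predictions can wake the radio aggressively around the predicted arrival and back off outside the predicted contact window, which is the latency/power trade-off the abstract describes.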

  10. A Model-based Approach to Controlling the ST-5 Constellation Lights-Out Using the GMSEC Message Bus and Simulink

    NASA Technical Reports Server (NTRS)

    Witt, Kenneth J.; Stanley, Jason; Shendock, Robert; Mandl, Daniel

    2005-01-01

Space Technology 5 (ST-5) is a three-satellite constellation, technology-validation mission under NASA's New Millennium Program, scheduled for launch in March 2006. One of the key technologies to be validated is a lights-out, model-based operations approach that will be used for one week to control the ST-5 constellation with no manual intervention. The ground architecture features the GSFC Mission Services Evolution Center (GMSEC) middleware, which allows easy plug-in of software components and a standardized messaging protocol over a software bus. A predictive modeling tool built on MATLAB's Simulink software package uses the GMSEC standard messaging protocol to interface to the Advanced Mission Planning System (AMPS) Scenario Scheduler, which controls all activities, resource allocation, and real-time re-profiling of constellation resources when non-nominal events occur. The key features of this system, which we refer to as the ST-5 Simulink system, are as follows: (1) the original daily plan is checked against the model to ensure that the predicted resources needed are available; (2) as the plan runs, the system re-profiles future activities in real time if planned activities do not occur in the predicted timeframe or fashion; (3) if future problems are predicted, the system sends alert messages on the GMSEC bus, allowing the Scenario Scheduler to correct the situation before the problem occurs; and (4) the predictive model is evolved automatically over time via telemetry updates, reducing the cost of implementing and maintaining the models by an order of magnitude compared with previous efforts at GSFC, such as the model-based system built for MAP in the mid-1990s. This paper describes the key features, lessons learned, and implications for future missions once this system is successfully validated on orbit in 2006.

  11. The interference of flexible working times with the circadian temperature rhythm--a predictor of impairment to health and well-being?

    PubMed

    Giebel, Ole; Wirtz, Anna; Nachreiner, Friedhelm

    2008-04-01

In order to analyze whether impairments to health and well-being under flexible working hours can be predicted from specific characteristics of the work schedules, periodic components in flexible working hours and their interference with the circadian temperature rhythm were analyzed by applying univariate and bivariate spectrum analyses to both time series. The resulting indicators of spectral power and phase shift of these components were then related to reported health impairments using regression analysis. The results show that a suppression of both the 24 and the 168 h components in the work schedules (i.e., a lack of periodicity) can be used to predict reported health impairments, and that if relatively strong 24 and 168 h components remain in the work schedules, their phase difference with the temperature rhythm (as an indicator of the interference between working time and the circadian rhythm) further predicts impairment. The results indicate that the periodicity of working hours and the amount of (circadian) desynchronization induced by flexible work schedules can be used to predict the impairing effects of flexible work schedules on health and well-being. The results can thus be used for evaluating and designing flexible shift rosters.
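The central quantity in this study, the strength of the 24 h (and 168 h) periodic component of a work schedule, can be approximated with a single-frequency discrete Fourier term. The sketch below is a simplified stand-in for the full univariate spectrum analysis the authors applied, and the two rosters are invented examples.

```python
import math
import random

def power_at_period(series, period):
    """Spectral power of `series` (hourly 0/1 samples) at `period` hours."""
    n = len(series)
    re = sum(x * math.cos(2 * math.pi * t / period) for t, x in enumerate(series))
    im = sum(x * math.sin(2 * math.pi * t / period) for t, x in enumerate(series))
    return (re * re + im * im) / n

# A regular roster: at work 09:00-17:00 every day for two weeks (1 = working).
regular = [1 if 9 <= (t % 24) < 17 else 0 for t in range(24 * 14)]

# An irregular roster with no daily rhythm (pseudo-random on/off hours).
rng = random.Random(42)
irregular = [rng.randint(0, 1) for _ in range(24 * 14)]

print(power_at_period(regular, 24), power_at_period(irregular, 24))
```

A strongly periodic roster concentrates power at the 24 h period, while an arrhythmic one does not; in the study, *suppression* of this power was the predictor of reported impairment.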

  12. Constraint-based integration of planning and scheduling for space-based observatory management

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Steven F.

    1994-01-01

    Progress toward the development of effective, practical solutions to space-based observatory scheduling problems within the HSTS scheduling framework is reported. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) short-term observation scheduling problem. The work was motivated by the limitations of the current solution and, more generally, by the insufficiency of classical planning and scheduling approaches in this problem context. HSTS has subsequently been used to develop improved heuristic solution techniques in related scheduling domains and is currently being applied to develop a scheduling tool for the upcoming Submillimeter Wave Astronomy Satellite (SWAS) mission. The salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research are summarized. Then, some key problem decomposition techniques underlying the integrated planning and scheduling approach to the HST problem are described; research results indicate that these techniques provide leverage in solving space-based observatory scheduling problems. Finally, more recently developed constraint-posting scheduling procedures and the current SWAS application focus are summarized.

  13. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem that requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid particle swarm optimization (PSO) algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency; using multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  14. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

We address the problem of scheduling workflow applications on heterogeneous computing systems such as cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem that requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds and to present a hybrid particle swarm optimization (PSO) algorithm to optimize scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency; using multiple voltage levels involves a compromise between schedule quality and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
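The DVFS compromise both records describe can be made concrete with a toy calculation: dynamic energy scales roughly as C·V²·f·t, so lowering voltage and frequency cuts energy while stretching execution time. The voltage/frequency levels and the capacitance constant below are made-up values for illustration, not taken from the paper.

```python
# Hypothetical processor operating points: (supply voltage in V, frequency in GHz).
LEVELS = [
    (1.2, 2.0),
    (1.0, 1.5),
    (0.8, 1.0),
]

def execute(cycles, voltage, freq_ghz, capacitance=1e-9):
    """Return (time in s, dynamic energy in J) for running `cycles` cycles.

    Energy model: E = C * V^2 * f * t; since f * t = cycles, energy
    depends on V^2 only, while time grows as frequency drops.
    """
    time = cycles / (freq_ghz * 1e9)
    energy = capacitance * voltage ** 2 * (freq_ghz * 1e9) * time
    return time, energy

for v, f in LEVELS:
    t, e = execute(2e9, v, f)
    print(f"V={v} f={f} GHz -> time={t:.2f} s, energy={e:.2f} J")
```

This is exactly the makespan-versus-energy trade-off that the multi-objective PSO navigates when assigning tasks to voltage levels.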

  15. Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis

    NASA Astrophysics Data System (ADS)

    Chou, Hui-Yu; Yang, Jyh-Bin

    2017-10-01

The problem of schedule delay commonly occurs in construction projects. The quality of delay analysis depends on the availability of schedule-related information and delay evidence; more information used in delay analysis usually produces more accurate and fairer analytical results. How to use innovative techniques to improve the quality of schedule delay analysis has received much attention recently. As Building Information Modeling (BIM) techniques have developed quickly, approaches using BIM and 4D simulation have been proposed and implemented, with obvious benefits achieved especially in identifying and resolving construction sequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study believes that most of the identified problems can be dealt with by BIM techniques. The research results could provide a foundation for developing new approaches to resolving schedule delay disputes.

  16. 'It is Time to Prepare the Next patient' Real-Time Prediction of Procedure Duration in Laparoscopic Cholecystectomies.

    PubMed

    Guédon, Annetje C P; Paalvast, M; Meeuwsen, F C; Tax, D M J; van Dijke, A P; Wauben, L S G L; van der Elst, M; Dankelman, J; van den Dobbelsteen, J J

    2016-12-01

Operating Room (OR) scheduling is crucial for the efficient use of ORs. Currently, the predicted durations of surgical procedures are unreliable, and OR schedulers have to follow the progress of procedures in order to update the daily planning accordingly. The OR schedulers often acquire the needed information through verbal communication with the OR staff, which causes undesired interruptions of the surgical process. The aim of this study was to develop a system that predicts the remaining procedure duration in real time, and to test this prediction system for reliability and usability in an OR. The prediction system was based on the activation pattern of a single piece of equipment, the electrosurgical device. The system was tested during 21 laparoscopic cholecystectomies, in which the activation of the electrosurgical device was recorded and processed in real time using pattern recognition methods. The remaining procedure duration was estimated, and the optimal timing to prepare the next patient for surgery was communicated to the OR staff. The mean absolute error was smaller for the prediction system (14 min) than for the OR staff (19 min). The OR staff doubted whether the prediction system could take all relevant factors into account, but were positive about its potential to shorten waiting times for patients. The prediction system is a promising tool for automatically and objectively predicting the remaining procedure duration, and thereby achieving optimal OR scheduling and streamlining the patient flow from the nursing department to the OR.
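As a rough illustration of predicting remaining duration from a single device's activations, the sketch below scales historical averages by observed progress. The paper's actual system applies pattern-recognition methods to the activation signal; the progress-ratio model and the `HISTORY` numbers here are assumptions for illustration only.

```python
# Hypothetical past cases: (total device activations, total duration in minutes).
HISTORY = [
    (40, 60), (50, 75), (35, 55), (45, 65),
]

MEAN_ACTIVATIONS = sum(a for a, _ in HISTORY) / len(HISTORY)
MEAN_DURATION = sum(d for _, d in HISTORY) / len(HISTORY)

def remaining_minutes(activations_so_far):
    """Estimate remaining procedure time from activations observed so far.

    Progress is taken as the ratio of observed activations to the
    historical mean total, capped at 1.0 (procedure essentially done).
    """
    progress = min(activations_so_far / MEAN_ACTIVATIONS, 1.0)
    return MEAN_DURATION * (1.0 - progress)

print(remaining_minutes(21.25))  # roughly halfway through a typical case
```

Even this crude estimator shows the shape of the problem: the prediction tightens as the case progresses, which is what lets the OR scheduler time the "prepare the next patient" call.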

  17. Assessment of Manual Operation Time for the Manufacturing of Thin Film Transistor Liquid Crystal Display: A Bayesian Approach

    NASA Astrophysics Data System (ADS)

    Shen, Chien-wen

    2009-01-01

During TFT-LCD manufacturing, steps such as visual inspection of panel surface defects still rely heavily on manual operations. As the manual inspection time in TFT-LCD manufacturing can range from 4 hours to 1 day, reliable time forecasting is important for production planning, scheduling, and customer response. This study proposes a practical and easy-to-implement prediction model, based on Bayesian networks, for estimating the time of manually operated procedures in TFT-LCD manufacturing. Given the lack of prior knowledge about manual operation time, the necessary-path-condition and expectation-maximization algorithms are used for structural learning and for estimating conditional probability distributions, respectively. The study also applies Bayesian inference to evaluate the relationships between explanatory variables and manual operation time. Empirical applications of the proposed forecasting model demonstrate the practicality and predictive reliability of the Bayesian network approach.

  18. Coordination between Generation and Transmission Maintenance Scheduling by Means of Multi-agent Technique

    NASA Astrophysics Data System (ADS)

    Nagata, Takeshi; Tao, Yasuhiro; Utatani, Masahiro; Sasaki, Hiroshi; Fujita, Hideki

This paper proposes a multi-agent approach to maintenance scheduling in restructured power systems. The restructuring of the electric power industry has resulted in market-based approaches for unbundling the multitude of services provided by self-interested entities such as power generating companies (GENCOs), transmission providers (TRANSCOs), and distribution companies (DISCOs). The Independent System Operator (ISO) is responsible for the security of system operation, so the schedules submitted to the ISO by GENCOs and TRANSCOs should satisfy security and reliability constraints. The proposed method consists of several GENCO Agents (GAGs), TRANSCO Agents (TAGs), and an ISO Agent (IAG). The IAG's role in maintenance scheduling is limited to ensuring that the submitted schedules do not cause transmission congestion or endanger system reliability. The simulation results show that the proposed multi-agent approach can coordinate generation and transmission maintenance schedules.

  19. Scheduling nursing personnel on a microcomputer.

    PubMed

    Liao, C J; Kao, C Y

    1997-01-01

Suggests that, given the shortage of nursing personnel, hospital administrators have to pay more attention to the needs of nurses in order to retain and recruit them. Also asserts that improving nurses' schedules is one of the most economical ways for the hospital administration to create a better working environment for nurses. Develops an algorithm for scheduling nursing personnel. Contrary to the current hospital approach, which schedules nurses on a person-by-person basis, the proposed algorithm constructs schedules on a day-by-day basis. The algorithm has inherent flexibility in handling a variety of possible constraints and goals, similar to other non-cyclical approaches; but, unlike most other non-cyclical approaches, it can also generate a quality schedule in a short time on a microcomputer. The algorithm was coded in the C language and runs on a microcomputer. The developed software is currently implemented at a leading hospital in Taiwan, and the response to the initial implementation is quite promising.

  20. The GBT Dynamic Scheduling System: A New Scheduling Paradigm

    NASA Astrophysics Data System (ADS)

    O'Neil, K.; Balser, D.; Bignell, C.; Clark, M.; Condon, J.; McCarty, M.; Marganian, P.; Shelton, A.; Braatz, J.; Harnett, J.; Maddalena, R.; Mello, M.; Sessoms, E.

    2009-09-01

The Robert C. Byrd Green Bank Telescope (GBT) is implementing a new Dynamic Scheduling System (DSS) designed to maximize the observing efficiency of the telescope while ensuring that neither the flexibility and ease of use of the GBT nor the quality of observations is adversely affected. To accomplish this, the DSS schedules observers, rather than running scripts. The DSS works by breaking each project into one or more sessions that have associated observing criteria such as RA, Dec, and frequency. Potential observers may also enter dates when members of their team will be unavailable for either on-site or remote observing. The scheduling algorithm uses those data, along with the predicted weather, to determine the most efficient schedule for the GBT. The DSS gives all observers at least 24 hours' notice of their upcoming observing. In the uncommon (< 20%) case where the actual weather does not match the predictions, a backup project chosen from the database is run instead. Here we give an overview of the GBT DSS project, including the ranking and scheduling algorithms for the sessions, the generation of scheduling probabilities, the web framework for the system, and an overview of the results from the beta testing held from June to September 2008.

  1. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
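The min-conflicts repair loop described above is straightforward to reproduce on the N-queens problem: start from a complete (conflicted) assignment and repeatedly move a conflicted queen to the column that minimizes its conflicts. This is a standard textbook rendition of the heuristic rather than the authors' exact implementation.

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=100000, rng=random):
    """Solve n-queens by iterative repair with the min-conflicts heuristic."""
    cols = [rng.randrange(n) for _ in range(n)]   # queen column for each row
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                            # complete, conflict-free
        row = rng.choice(conflicted)
        # Move to the column with the fewest conflicts, ties broken randomly.
        scores = [conflicts(cols, row, c) for c in range(n)]
        best = min(scores)
        cols[row] = rng.choice([c for c, s in enumerate(scores) if s == best])
    return None  # no solution found within the step budget

solution = min_conflicts(20)
```

In practice the number of repair steps grows only slowly with n, which is what makes very large instances such as the million-queens problem tractable with this method.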

  2. Optimal de novo design of MRM experiments for rapid assay development in targeted proteomics.

    PubMed

    Bertsch, Andreas; Jung, Stephan; Zerck, Alexandra; Pfeifer, Nico; Nahnsen, Sven; Henneges, Carsten; Nordheim, Alfred; Kohlbacher, Oliver

    2010-05-07

Targeted proteomic approaches such as multiple reaction monitoring (MRM) overcome problems associated with classical shotgun mass spectrometry experiments. Developing MRM quantitation assays can be time consuming, because relevant peptide representatives of the proteins must be found, and their retention times and product ions must be determined. Given the transitions, hundreds to thousands of them can be scheduled into one experiment run; however, it is difficult to select which of the transitions should be included in a measurement. We present a novel algorithm that allows the construction of MRM assays from the sequences of the targeted proteins alone. This enables the rapid development of targeted MRM experiments without large libraries of transitions or peptide spectra. The approach relies on combinatorial optimization in combination with machine learning techniques to predict the proteotypicity, retention time, and fragmentation of peptides. The resulting potential transitions are scheduled optimally by solving an integer linear program. We demonstrate that fully automated construction of MRM experiments from protein sequences alone is possible and that over 80% coverage of the targeted proteins can be achieved without further optimization of the assay.

  3. An AI Approach to Ground Station Autonomy for Deep Space Communications

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Mutz, Darren; Paal, Leslie; Law, Emily; Stockett, Mike; Golshan, Nasser; Chien, Steve

    1998-01-01

This paper describes an architecture for an autonomous deep space tracking station (DS-T). The architecture targets fully automated routine operations encompassing scheduling and resource allocation, antenna and receiver predict generation, track procedure generation from service requests, and closed-loop control and error recovery for the station subsystems. This architecture has been validated by the construction of a prototype DS-T station, which has performed a series of demonstrations of autonomous ground station control for downlink services with NASA's Mars Global Surveyor (MGS).

  4. Scheduling techniques in the Request Oriented Scheduling Engine (ROSE)

    NASA Technical Reports Server (NTRS)

    Zoch, David R.

    1991-01-01

Scheduling techniques in ROSE are presented in the form of viewgraphs. The following subject areas are covered: agenda; ROSE summary and history; NCC-ROSE task goals; accomplishments; ROSE timeline manager; scheduling concerns; current and ROSE approaches; initial scheduling; BFSSE overview and example; and summary.

  5. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper, we present a fundamentally different encoding that more closely resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties, and thus to produce a compiler from planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  6. Relative Persistence as a Function of Order of Reinforcement Schedules

    ERIC Educational Resources Information Center

    Dyal, James A.; Sytsma, Donald

    1976-01-01

Stimulus analyzer theory as proposed by Sutherland and Mackintosh (1971) makes the unique prediction that the first-experienced reinforcement schedule will influence resistance to extinction more than subsequent schedules. Results presently reported of runway acquisition and extinction indicate the opposite: C-P consistently produce substantially…

  7. Collaborative Distributed Scheduling Approaches for Wireless Sensor Network

    PubMed Central

    Niu, Jianjun; Deng, Zhidong

    2009-01-01

Energy constraints restrict the lifetime of wireless sensor networks (WSNs) with battery-powered nodes, which poses great challenges for their large-scale application. In this paper, we propose a family of collaborative distributed scheduling approaches (CDSAs) based on the Markov process to reduce the energy consumption of a WSN. The family of CDSAs comprises two approaches: a one-step collaborative distributed approach and a two-step collaborative distributed approach. These approaches enable nodes to learn behavioral information about their environment collaboratively and to integrate sleep scheduling with transmission scheduling to reduce energy consumption. We analyze the adaptability and practicality of the CDSAs. The simulation results show that the two proposed approaches effectively reduce nodes' energy consumption. Other characteristics of the CDSAs, such as buffer occupation and packet delay, are also analyzed in this paper. We evaluate the CDSAs extensively on a 15-node WSN testbed; the test results show that the CDSAs conserve energy effectively and are feasible for real WSNs. PMID:22408491
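The idea of learning environment behaviour to drive sleep decisions can be sketched with a two-state (idle/busy) Markov chain: a node estimates transition probabilities from its observation trace and sleeps when busy traffic is unlikely. The model, trace, and threshold below are illustrative assumptions, not the CDSA algorithms themselves.

```python
def estimate_transitions(trace):
    """Estimate P(next state | current state) from a 0/1 (idle/busy) trace."""
    counts = {0: [0, 0], 1: [0, 0]}
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for state, cnt in counts.items():
        total = sum(cnt) or 1
        probs[state] = [c / total for c in cnt]
    return probs

def should_sleep(probs, current_state, threshold=0.3):
    """Sleep next slot if the predicted probability of busy traffic is low."""
    return probs[current_state][1] < threshold

# Observed channel activity for one node (0 = idle slot, 1 = busy slot).
trace = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0]
probs = estimate_transitions(trace)
```

With these estimates the node sleeps after an idle slot (busy is unlikely to follow) but stays awake after a busy slot; coupling this with transmission scheduling is the integration step the abstract describes.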

  8. Scheduling Future Water Supply Investments Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.

    2014-12-01

Uncertain hydrological impacts of climate change, population growth, and institutional change pose a major challenge to the planning of water supply systems. Planners seek not only optimal portfolios of supply and demand management schemes but also the times at which to activate assets, whilst considering many system goals and plausible futures. Incorporating scheduling into the planning-under-uncertainty problem strongly increases its complexity. We investigate approaches to scheduling with many-objective heuristic search. We apply a multi-scenario, many-objective scheduling approach to the water supply system planning problem of the Thames River basin in the UK. Decisions include which new supply and demand schemes to implement, at what capacity, and when. The impact of different system uncertainties on scheme implementation schedules is explored, i.e., how the choice of future scenarios affects the search process and its outcomes. The activation of schemes is influenced by the occurrence of extreme hydrological events in the ensemble of plausible scenarios, among other factors. The approach and results are compared with a previous study in which only the portfolio problem was addressed (without scheduling).

  9. Approach to transaction management for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1989-01-01

    An approach to managing the operations of the Space Station Freedom based on their external effects is described. It is assumed that there is a conflict-free schedule that, if followed, will allow only appropriate operations to occur. The problem is then reduced to that of ensuring that the operations initiated are within the limits allowed by the schedule, or that the external effects of such operations are within those allowed by the schedule. The main features of the currently adopted transaction management approach are discussed.

  10. Power Aware Distributed Systems

    DTIC Science & Technology

    2004-01-01

    detection or threshold functions to trigger the main CPU. The main processor can sleep and either wake up on a schedule or by a positive threshold event...the RTOS must determine if wake-up latency can be tolerated (or, if it could be hidden by pre-wakeup). The prediction accuracy for scheduling ...and processor shutdown/wakeup. This analysis can be used to accurately analyze the schedulability of non-concrete periodic task sets, scheduled using

  11. An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.

    PubMed

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2012-12-27

The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual-priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks by serializing them, using their worst-case execution times and power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network: a wakeup-radio and wakeup-timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices.

  12. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    PubMed Central

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2013-01-01

The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual-priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks by serializing them, using their worst-case execution times and power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network: a wakeup-radio and wakeup-timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices. PMID:23271602
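Non-pre-emptible dual-priority scheduling, the mechanism both records credit for predictability, can be sketched as two priority bands feeding a run-to-completion dispatcher. The task set and band assignment below are invented examples; the point is that a started low-priority task is never preempted, so a high-priority task's worst-case wait is bounded by one task's WCET.

```python
import heapq

def schedule(tasks):
    """Simulate non-pre-emptible dual-priority dispatch.

    tasks: list of (release_time, band, wcet, name), band 0 = high priority.
    Returns a log of (start_time, name) in execution order.
    """
    pending = sorted(tasks)            # ordered by release time
    ready, log, clock, i = [], [], 0, 0
    while i < len(pending) or ready:
        # Admit every task released by the current time into the ready heap.
        while i < len(pending) and pending[i][0] <= clock:
            release, band, wcet, name = pending[i]
            heapq.heappush(ready, (band, release, wcet, name))
            i += 1
        if not ready:
            clock = pending[i][0]      # idle until the next release
            continue
        band, release, wcet, name = heapq.heappop(ready)
        log.append((clock, name))
        clock += wcet                  # non-preemptive: runs for its full WCET
    return log

log = schedule([(0, 1, 5, "log-flush"),
                (1, 0, 2, "ecg-sample"),
                (3, 0, 2, "radio-beacon")])
```

In this example the low-priority `log-flush` starts first and the high-priority `ecg-sample` must wait for it to finish, illustrating both the bounded blocking and the determinism that make worst-case timing analyzable.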

  13. "The stone which the builders rejected...": Delay of reinforcement and response rate on fixed-interval and related schedules.

    PubMed

    Wearden, J H; Lejeune, Helga

    2006-02-28

    The article deals with response rates (mainly running and peak or terminal rates) on simple and on some mixed-FI schedules and explores the idea that these rates are determined by the average delay of reinforcement for responses occurring during the response periods that the schedules generate. The effects of reinforcement delay are assumed to be mediated by a hyperbolic delay of reinforcement gradient. The account predicts that (a) running rates on simple FI schedules should increase with increasing rate of reinforcement, in a manner close to that required by Herrnstein's equation, (b) improving temporal control during acquisition should be associated with increasing running rates, (c) two-valued mixed-FI schedules with equiprobable components should produce complex results, with peak rates sometimes being higher on the longer component schedule, and (d) that effects of reinforcement probability on mixed-FI should affect the response rate at the time of the shorter component only. All these predictions were confirmed by data, although effects in some experiments remain outside the scope of the model. In general, delay of reinforcement as a determinant of response rate on FI and related schedules (rather than temporal control on such schedules) seems a useful starting point for a more thorough analysis of some neglected questions about performance on FI and related schedules.
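The hyperbolic delay-of-reinforcement gradient at the core of this account can be sketched numerically. The discount rate k and the uniform spread of responses over the response period are illustrative assumptions, not values from the article:

```python
def hyperbolic_value(delay, k=0.2):
    """Hyperbolic delay-of-reinforcement gradient: value decays as 1/(1 + k*delay)."""
    return 1.0 / (1.0 + k * delay)

def mean_reinforcer_value(interval, n_points=100):
    """Average value of the end-of-interval reinforcer for responses spread
    uniformly over the response period of an FI schedule of length `interval`."""
    delays = [interval * (1 - i / n_points) for i in range(n_points)]
    return sum(hyperbolic_value(d) for d in delays) / n_points

# Shorter FI intervals yield a higher average reinforcer value, consistent
# with running rates increasing with rate of reinforcement (prediction a).
short, long = mean_reinforcer_value(10), mean_reinforcer_value(60)
```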

  14. A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.

    PubMed

    Lee, I; Sikora, R; Shaw, M J

    1997-01-01

    Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, and enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of this approach on flexible flow-line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow-line scheduling.

  15. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Tang, Dunbing; Dai, Min

    2015-09-01

    Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper presents an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed, in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while retaining the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances. The experimental results show that the approach can save 5%-10% of the average energy consumption while retaining the optimal makespan in small-size instances, and the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption, and approximately 2.4% at maximum, while accepting a near-optimal makespan in large-size instances. This research provides an interesting starting point for energy-aware schedule optimization of a traditional production planning and scheduling problem.
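The core move of the paper, slowing an operation into the idle slack that follows it, can be sketched as below. The quadratic cutting-power model (so cutting energy grows linearly with speed) and all numbers are illustrative assumptions; the paper itself uses a mixed-integer programming model.

```python
def cutting_energy(work, speed, cut_coeff=0.5):
    """Illustrative model: cutting power grows with speed^2 and processing
    time is work/speed, so cutting energy = cut_coeff * work * speed."""
    return cut_coeff * work * speed

def slow_into_slack(work, speed, slack):
    """Expand an operation into the idle slack that follows it (assignment,
    sequence, and makespan unchanged) by lowering the cutting speed; under
    the model above this lowers the cutting energy."""
    t = work / speed
    return work / (t + slack)              # new speed that exactly fills the gap

work, speed, slack = 10.0, 5.0, 1.0        # hypothetical operation
new_speed = slow_into_slack(work, speed, slack)
saving = cutting_energy(work, speed) - cutting_energy(work, new_speed)
```

The design point is that only speeds change: since no operation moves earlier or past its successor, the schedule's feasibility and makespan are preserved by construction.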

  16. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.
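The flavor of grammar-based online prediction can be conveyed with a much simpler stand-in: a digram (pair-of-operations) model that learns which I/O call tends to follow each observed pair. This is a deliberate simplification of Sequitur/StarSequitur, which build a full hierarchical grammar; the operation names are hypothetical.

```python
from collections import defaultdict, Counter

class DigramPredictor:
    """Simplified stand-in for a Sequitur-style model: count which operation
    follows each observed digram of operations, then predict online."""
    def __init__(self):
        self.table = defaultdict(Counter)   # (op, op) -> successor counts
        self.context = []

    def observe(self, op):
        if len(self.context) == 2:
            self.table[tuple(self.context)][op] += 1
        self.context = (self.context + [op])[-2:]

    def predict(self):
        counts = self.table.get(tuple(self.context))
        if not counts:
            return None                     # no model for this context yet
        return counts.most_common(1)[0][0]

m = DigramPredictor()
for op in ["open", "read", "write", "open", "read", "write", "open", "read"]:
    m.observe(op)
next_op = m.predict()
```

As with Omnisc'IO, the model converges after seeing the repetitive pattern a couple of times; a real grammar additionally captures nested repetition, which flat digrams cannot.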

  17. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time-scale robust scheduling method is proposed to address this problem. It reduces the impact of clean-energy prediction bias on the power grid by coordinating, across multiple time scales (day-ahead, intraday, and real-time), the dispatched output of various power supplies such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme. By pricing wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.

  18. Automated Scheduling Via Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.; Cooper, Lynne P.

    1991-01-01

    Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.

  19. Development of an irrigation scheduling software based on model predicted crop water stress

    USDA-ARS?s Scientific Manuscript database

    Modern irrigation scheduling methods are generally based on sensor-monitored soil moisture regimes rather than crop water stress which is difficult to measure in real-time, but can be computed using agricultural system models. In this study, an irrigation scheduling software based on RZWQM2 model pr...

  20. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast-time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.

  1. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how quickly the system is able to resolve conflicts within a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
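The repair loop described above can be sketched for a single no-overlap resource. This is a minimal illustration, not the Space Shuttle system: the tasks, durations, and the "shift the later task just enough" repair move are assumptions chosen to show the minimize-perturbation idea.

```python
def find_conflict(starts, dur):
    """Return a pair of tasks that overlap on the shared resource, or None."""
    items = sorted(starts.items(), key=lambda kv: kv[1])
    for (a, sa), (b, sb) in zip(items, items[1:]):
        if sb < sa + dur[a]:
            return a, b
    return None

def iterative_repair(starts, dur, max_iters=100):
    """Constraint-based iterative repair (sketch): start from a flawed
    schedule and shift the later task of each conflict just enough to
    clear it, minimizing perturbation to the original schedule."""
    starts = dict(starts)
    for _ in range(max_iters):
        conflict = find_conflict(starts, dur)
        if conflict is None:
            return starts
        a, b = conflict
        starts[b] = starts[a] + dur[a]     # minimal shift resolving the overlap
    raise RuntimeError("no conflict-free schedule found within the bound")

flawed = {"t1": 0, "t2": 1, "t3": 2}       # t2 and t3 overlap their predecessors
dur = {"t1": 2, "t2": 2, "t3": 2}
fixed = iterative_repair(flawed, dur)
```

Each iteration repairs one violation locally rather than rebuilding the schedule, which is why the approach produces modified schedules quickly and keeps them close to the original.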

  2. Predicting appointment breaking.

    PubMed

    Bean, A G; Talaga, J

    1995-01-01

    The goal of physician referral services is to schedule appointments, but if too many patients fail to show up, the value of the service will be compromised. The authors found that appointment breaking can be predicted by the number of days to the scheduled appointment, the doctor's specialty, and the patient's age and gender. They also offer specific suggestions for modifying the marketing mix to reduce the incidence of no-shows.
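A prediction of this kind is typically expressed as a logistic model over the predictors the authors identify. The sketch below uses the article's predictors (days to appointment, specialty, age, gender), but every weight is a hypothetical placeholder, not a fitted value from the study.

```python
import math

def no_show_probability(lead_days, age, is_female, specialty_risk,
                        w_lead=0.05, w_age=-0.02, w_female=-0.3,
                        intercept=-1.0):
    """Illustrative logistic model of appointment breaking.
    All coefficients are assumptions for demonstration only."""
    z = (intercept + w_lead * lead_days + w_age * age
         + w_female * is_female + specialty_risk)
    return 1.0 / (1.0 + math.exp(-z))

# Under these placeholder weights, a longer wait raises predicted no-show risk.
risk_long_wait = no_show_probability(30, 40, 1, 0.2)
risk_short_wait = no_show_probability(3, 40, 1, 0.2)
```

A referral service could use such a score to overbook high-risk slots or to trigger reminder calls, which is the kind of marketing-mix adjustment the authors suggest.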

  3. Route Optimization for Offloading Congested Meter Fixes

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2016-01-01

    The Optimized Route Capability (ORC) concept proposed by the FAA helps traffic managers identify and resolve arrival flight delays caused by bottlenecks formed at arrival meter fixes when there is an imbalance between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors the traffic delays based on these predictions, and searches for the best reroutes upstream of the meter fixes based on the predictions and estimated arrival schedules when delays exceed a predefined threshold. Initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends previous work by introducing an additional dimension in reroute options such that ORC can find the best time to reroute and overcome the 'first-come-first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed to solve this problem. Experiments were conducted using the same traffic scenario used in previous work, in which an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The suggested route changes from the new approach were on average 30 minutes later than those from other approaches, and fewer reroutes were required. Fewer reroutes reduce operational complexity, and later reroutes help decision makers deal with uncertain situations.

  4. Learning to improve iterative repair scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene

    1992-01-01

    This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler using either heuristic alone.

  5. Approximation algorithms for scheduling unrelated parallel machines with release dates

    NASA Astrophysics Data System (ADS)

    Avdeenko, T. V.; Mesentsev, Y. A.; Estraykh, I. V.

    2017-01-01

    In this paper we propose approaches to the optimal scheduling of unrelated parallel machines with release dates. One approach is based on a dynamic programming scheme modified with adaptive narrowing of the search domain to ensure computational effectiveness. We discuss the complexity of exact schedule synthesis and compare exact schedules with approximate, close-to-optimal solutions. We also explain how the algorithm works on an example of two unrelated parallel machines and five jobs with release dates. Performance results showing the efficiency of the proposed approach are given.
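The narrowing idea can be sketched as a beam over dynamic-programming states. This is an assumption-level simplification of the authors' scheme: jobs are assigned in a fixed order, each partial state is the vector of machine finish times, and only the `beam` best states (by partial makespan) survive each stage. The instance data are hypothetical.

```python
def narrowed_dp_schedule(proc, release, beam=10):
    """DP with adaptive narrowing of the search domain (sketch) for
    unrelated parallel machines with release dates.

    proc[j][m] is job j's processing time on machine m (machine-dependent,
    i.e. "unrelated"); release[j] is its release date. Narrowing trades
    exactness for tractability, as in the paper's approximate variant."""
    n_machines = len(proc[0])
    states = [tuple([0.0] * n_machines)]           # machine finish-time vectors
    for j, times in enumerate(proc):
        nxt = []
        for finish in states:
            for m in range(n_machines):
                start = max(finish[m], release[j])
                f = list(finish)
                f[m] = start + times[m]
                nxt.append(tuple(f))
        states = sorted(set(nxt), key=max)[:beam]  # keep best partial makespans
    return max(states[0])

proc = [[3, 5], [4, 2], [2, 6], [5, 3], [1, 4]]    # 5 jobs, 2 unrelated machines
release = [0, 0, 1, 2, 3]
makespan = narrowed_dp_schedule(proc, release)
```

With an unbounded beam this enumerates the full DP state space; shrinking `beam` is the narrowing step that keeps the stage width, and hence the runtime, bounded.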

  6. The Effects of the Uncertainty of Departures on Multi-Center Traffic Management Advisor (TMA) Scheduling

    NASA Technical Reports Server (NTRS)

    Thipphavong, Jane; Landry, Steven J.

    2005-01-01

    The Multi-center Traffic Management Advisor (McTMA) provides a platform for regional or national traffic flow management, by allowing long-range cooperative time-based metering to constrained resources, such as airports or air traffic control center boundaries. Part of the demand for resources is made up of proposed departures, whose actual departure time is difficult to predict. For this reason, McTMA does not schedule the departures in advance, but rather relies on traffic managers to input their requested departure time. Because this happens only a short while before the aircraft's actual departure, McTMA is unable to accurately predict the amount of delay airborne aircraft will need to take in order to accommodate the departures. The proportion of demand made up by such proposed departures increases as the horizon over which metering occurs gets larger. This study provides an initial analysis of the severity of this problem in a 400-500 nautical mile metering horizon and discusses potential solutions to accommodate these departures. The challenge is to smoothly incorporate departures into the airborne stream while not excessively delaying the departures. In particular, three solutions are reviewed: (1) scheduling the departures at their proposed departure time; (2) not scheduling the departures in advance; and (3) scheduling the departures at some time in the future based on an estimated error in their proposed time. The first solution is to have McTMA automatically schedule the departures at their proposed departure times. Since the proposed departure times are indicated in their flight plans in advance, this method is the simplest, but studies have shown that these proposed times are often incorrect. The second option is the current practice, which avoids these inaccuracies by only scheduling aircraft when a confirmed prediction of departure time is obtained from the tower of the departure airport.
Lastly, McTMA can schedule the departures at a predicted departure time based on statistical data of past departure time performance. It has been found that departures usually have a wheels-up time after their indicated proposed departure time, as shown in Figure 1. Hence, the departures were scheduled at a time in the future based on the mean error in proposed departure times for their airport.

  7. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach.
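The single round-robin building block underlying such formats is commonly generated with the standard circle method, sketched below. This is a textbook construction, not the paper's constraint programming model, and the team names are placeholders.

```python
def round_robin(teams):
    """Circle-method single round robin: one team stays fixed while the
    rest rotate, giving n-1 rounds in which every pair meets exactly once
    (a building block for double round-robin and divisional formats)."""
    n = len(teams)
    assert n % 2 == 0, "add a dummy 'bye' team for odd n"
    fixed, rest = teams[0], list(teams[1:])
    rounds = []
    for _ in range(n - 1):
        line = [fixed] + rest
        rounds.append([(line[i], line[n - 1 - i]) for i in range(n // 2)])
        rest = rest[-1:] + rest[:-1]       # rotate the non-fixed teams
    return rounds

schedule = round_robin(["A", "B", "C", "D"])
```

A double round robin simply replays these rounds with home and away swapped; the constraint programming model in the paper layers league-specific requirements on top of this combinatorial core.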

  8. Estimation of Teacher Salary Schedules. Educational Planning Occasional Papers No. 6/72.

    ERIC Educational Resources Information Center

    Burtnyk, W. A.

    This paper describes the method used by Tracz and Burtnyk for the estimation of future salary schedules in the Ontario secondary school system. The application of the algorithm to the Ontario secondary school system predicts a possible breakdown in the fixed step salary schedule at about 1980. This situation results primarily because of the…

  9. An Aircraft Vortex Spacing System (AVOSS) for Dynamical Wake Vortex Spacing Criteria

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.

    1996-01-01

    A concept is presented for the development and implementation of a prototype Aircraft Vortex Spacing System (AVOSS). The purpose of the AVOSS is to use current and short-term predictions of the atmospheric state in approach and departure corridors to provide ATC facilities with dynamic, weather-dependent separation criteria having adequate stability and lead time for use in establishing arrival scheduling. The AVOSS will accomplish this task through a combination of wake vortex transport and decay predictions, weather state knowledge, defined aircraft operational procedures and corridors, and wake vortex safety sensors. Work is currently underway to address the critical disciplines and knowledge needs so as to implement and demonstrate a prototype AVOSS in the 1999/2000 time frame.

  10. Investigations into Generalization of Constraint-Based Scheduling Theories with Applications to Space Telescope Observation Scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Steven S.

    1996-01-01

    This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.

  11. Analysis of crimes committed against scheduled tribes

    NASA Astrophysics Data System (ADS)

    Khadse, Vivek P.; Akhil, P.; Anto, Christopher; Gnanasigamani, Lydia J.

    2017-11-01

    Crime is a curse on society with deep and lasting impact, and the victims of crime are those impacted most. All communities in the world are affected by crime and the criminal justice system, but the most heavily impacted communities are the backward classes. Many cases of crime committed against scheduled tribes have been reported from 2005 to date. This paper presents an analysis of crimes committed against Scheduled Tribes in 2015 across various states and union territories in India. Multiple linear regression techniques are used to analyze the crimes committed against the scheduled tribes’ community, comparing the number of cases reported to police stations and the rate of crime committed in different states. The study also predicts the number of cases of crime committed against scheduled tribes that may be reported in the future. The dataset used in this study is taken from the official Indian government repository for crimes, which includes information on crimes committed against scheduled tribes in different states and union territories, measured under the population census of 2011. This analysis can help state and union territory governments predict future crimes and take appropriate measures before they occur.
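The multiple-linear-regression step can be sketched with ordinary least squares over two predictors. All figures below are synthetic placeholders for illustration, not actual census or crime statistics, and the predictor choice (population and scheduled-tribe share) is an assumption.

```python
def fit_multiple_linear_regression(rows, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting.
    Each row includes a leading 1 for the intercept."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                    # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                        # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic illustration: predict reported cases from state population
# (millions) and scheduled-tribe population share (%).
X = [[1, 2.5, 8.0], [1, 7.1, 14.0], [1, 4.3, 9.5], [1, 11.2, 21.0], [1, 6.0, 12.5]]
y = [120.0, 410.0, 205.0, 760.0, 330.0]
beta = fit_multiple_linear_regression(X, y)
predicted_cases = beta[0] + beta[1] * 5.0 + beta[2] * 10.0
```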

  12. Artificial intelligence approaches to astronomical observation scheduling

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Miller, Glenn

    1988-01-01

    Automated scheduling will play an increasing role in future ground- and space-based observatory operations. Due to the complexity of the problem, artificial intelligence technology currently offers the greatest potential for the development of scheduling tools with sufficient power and flexibility to handle realistic scheduling situations. Summarized here are the main features of the observatory scheduling problem, how artificial intelligence (AI) techniques can be applied, and recent progress in AI scheduling for Hubble Space Telescope.

  13. Compound-Schedules Approaches to Noncompliance: Teaching Children When to Ask and When to Work

    ERIC Educational Resources Information Center

    Lambert, Joseph M.; Clohisy, Anne M.; Blair Barrows, S.; Houchins-Juarez, Nealetta J.

    2017-01-01

    Researchers have demonstrated for practitioners how to use multiple-schedules preparations to thin initially dense schedules of reinforcement during functional communication training, without sacrificing benefits associated with dense schedules of reinforcement for manding. However, special considerations may be required for practitioners to…

  14. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability-centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., utilizes the multi-signal models. Capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12-inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real-time, meeting the required performance goal of 1 second cycle time.

  15. A Comparison of Synoptic Classification Methods for Application to Wind Power Prediction

    NASA Astrophysics Data System (ADS)

    Fowler, P.; Basu, S.

    2008-12-01

    Wind energy is a highly variable resource. To make it competitive with other sources of energy for integration on the power grid, at the very least, a day-ahead forecast of power output must be available. In many grid operations worldwide, next-day power output is scheduled in 30-minute intervals and grid management routinely occurs in real time. Maintenance and repairs require costly time to complete and must be scheduled along with normal operations. Revenue is dependent on the reliability of the entire system. In other words, there is financial and managerial benefit to short-term prediction of wind power. One approach to short-term forecasting is to combine a data-centric method such as an artificial neural network with a physically based approach like numerical weather prediction (NWP). The key is in associating high-dimensional NWP model output with the most appropriately trained neural network. Because neural networks perform best in the situations they are designed for, one can hypothesize that if similar recurring states can be identified in historical weather data, this data can be used to train multiple custom-designed neural networks to be used when called upon by numerical prediction. Identifying similar recurring states may offer insight into how a neural network forecast can be improved, but amassing the knowledge and utilizing it efficiently in the time required for power prediction would be difficult for a human to master, thus showing the advantage of classification. Classification methods are important tools for short-term forecasting because they can be unsupervised, objective, and computationally quick. They primarily involve categorizing data sets into dominant weather classes, but there are numerous ways to define a class and a great variety in interpretation of the results. In the present study a collection of classification methods are used on a sampling of atmospheric variables from the North American Regional Reanalysis data set.
The results will be discussed in relation to their use for short-term wind power forecasting by neural networks.
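The classification step, partitioning days into dominant weather classes so each class can get its own trained network, can be sketched with a minimal one-dimensional k-means. The single variable (a pressure-like quantity) and all values below are illustrative assumptions, not Reanalysis data.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal k-means sketch of synoptic classification: partition days
    into k dominant classes by one atmospheric variable; each class would
    then train its own neural network for the power forecast."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]   # spread seeds
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:                    # assign to nearest class center
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical daily pressure-like values forming two synoptic regimes.
pressures = [1002, 1004, 1003, 1021, 1019, 1020]
centers, groups = kmeans_1d(pressures, k=2)
```

Real synoptic typing works on high-dimensional fields rather than one scalar, but the unsupervised assign-then-update loop is the same.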

  16. Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots

    NASA Astrophysics Data System (ADS)

    You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen

    2016-03-01

    We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.

  17. Information Requirements Analyses for Transatmospheric Vehicles

    DTIC Science & Technology

    1992-06-01

    include takeoff inclination, Mach number/fuel burn schedule, planned headings, planned altitudes, threat types/locations, communications satellite ... network availability schedules, and P/L/target-specific mission events. The mission materials are transferred to the TAV by means of magnetic media ... constraints. It also monitors actual fuel consumption, compares it against the mission fuel schedule, predicts rest-of-mission fuel consumption, and

  18. A Systems Engineering Approach for Global Fleet Station Alternatives in the Gulf of Guinea

    DTIC Science & Technology

    2007-12-01

    Understanding that many types of risk lie within categories such as cost, funding, management, political, production, and schedule, we may apply the ... schedule, to the Gulf of Guinea beginning in October of 2007. USS FORT MCHENRY, an amphibious Landing Ship Dock (LSD), affords greater storage ... Kerzner, Project Management: A Systems Approach to Planning, Scheduling, and Controlling (New Jersey: John Wiley & Sons, Inc., 2006), 724.

  19. Perceptions of randomized security schedules.

    PubMed

    Scurich, Nicholas; John, Richard S

    2014-04-01

    Security of infrastructure is a major concern. Traditional security schedules are unable to provide omnipresent coverage; consequently, adversaries can exploit predictable vulnerabilities to their advantage. Randomized security schedules, which randomly deploy security measures, overcome these limitations, but public perceptions of such schedules have not been examined. In this experiment, participants were asked to make a choice between attending a venue that employed a traditional (i.e., search everyone) or a random (i.e., a probability of being searched) security schedule. The absolute probability of detecting contraband was manipulated (i.e., 1/10, 1/4, 1/2) but equivalent between the two schedule types. In general, participants were indifferent to either security schedule, regardless of the probability of detection. The randomized schedule was deemed more convenient, but the traditional schedule was considered fairer and safer. There were no differences between traditional and random schedule in terms of perceived effectiveness or deterrence. Policy implications for the implementation and utilization of randomized schedules are discussed. © 2013 Society for Risk Analysis.

  20. Three-Stage Production Cost Modeling Approach for Evaluating the Benefits of Intra-Hour Scheduling between Balancing Authorities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaan, Nader A.; Milligan, Michael; Hunsaker, Matthew

    This paper introduces a Production Cost Modeling (PCM) approach to evaluate the benefits of intra-hour scheduling between Balancing Authorities (BAs). The system operation is modeled in a three-stage sequential manner: day ahead (DA), hour ahead (HA), real time (RT). In addition to contingency reserve, each BA will need to carry out "up" and "down" load following and regulation reserve capacity requirements in the DA and HA time frames. In the real-time simulation, only contingency and regulation reserves are carried out as load following is deployed. To model current real-time operation with hourly schedules, a new constraint was introduced to force each BA net exchange schedule deviation from HA schedules to be within NERC ACE limits. Case studies that investigate the benefits of moving from hourly exchange schedules between WECC BAs into 10-min exchange schedules under two different levels of wind and solar penetration (11% and 33%) are presented.
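
A minimal sketch of the ACE-band constraint described in the abstract: a BA's real-time net exchange may deviate from its hour-ahead schedule by no more than an assumed ACE limit. The function name and the MW numbers are hypothetical, not from the paper.

```python
# Illustrative sketch (not the paper's model): clamp a Balancing Authority's
# real-time net-exchange dispatch so that it deviates from the hour-ahead
# schedule by at most an assumed ACE band.

def clamp_net_exchange(rt_request_mw, ha_schedule_mw, ace_limit_mw):
    """Return the RT net exchange closest to the requested dispatch that
    stays within ace_limit_mw of the hour-ahead schedule."""
    lo = ha_schedule_mw - ace_limit_mw
    hi = ha_schedule_mw + ace_limit_mw
    return min(max(rt_request_mw, lo), hi)

if __name__ == "__main__":
    # hypothetical HA schedule of 500 MW with a 30 MW ACE band
    print(clamp_net_exchange(560.0, 500.0, 30.0))  # 530.0 (clipped)
    print(clamp_net_exchange(490.0, 500.0, 30.0))  # 490.0 (within band)
```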

  2. A New Approach to Class Scheduling. Final Report.

    ERIC Educational Resources Information Center

    Canter, John; And Others

    An investigation of the use of a prototype device for class scheduling was made. The Beekley INSITE device that was studied uses the "peek-a-boo" principle of matching computer tapes. A test group of 149 graduate students was used. Their desired schedules were matched automatically against a proposed master schedule to evaluate the…

  3. Distributed intelligent scheduling of FMS

    NASA Astrophysics Data System (ADS)

    Wu, Zuobao; Cheng, Yaodong; Pan, Xiaohong

    1995-08-01

    In this paper, a distributed scheduling approach for a flexible manufacturing system (FMS) is presented. A new class of Petri nets, called networked time Petri nets (NTPN), is proposed for modeling systems in a networked environment. The distributed intelligent scheduling is implemented by three schedulers that combine NTPN models with expert-system techniques. Simulation results are shown.

  4. Schedule-Aware Workflow Management Systems

    NASA Astrophysics Data System (ADS)

    Mans, Ronny S.; Russell, Nick C.; van der Aalst, Wil M. P.; Moleman, Arnold J.; Bakker, Piet J. M.

    Contemporary workflow management systems offer work-items to users through specific work-lists. Users select the work-items they will perform without having a specific schedule in mind. However, in many environments work needs to be scheduled and performed at particular times. For example, in hospitals many work-items are linked to appointments, e.g., a doctor cannot perform surgery without reserving an operating theater and making sure that the patient is present. One of the problems when applying workflow technology in such domains is the lack of calendar-based scheduling support. In this paper, we present an approach that supports the seamless integration of unscheduled (flow) and scheduled (schedule) tasks. Using CPN Tools we have developed a specification and simulation model for schedule-aware workflow management systems. Based on this a system has been realized that uses YAWL, Microsoft Exchange Server 2007, Outlook, and a dedicated scheduling service. The approach is illustrated using a real-life case study at the AMC hospital in the Netherlands. In addition, we elaborate on the experiences obtained when developing and implementing a system of this scale using formal techniques.

  5. Using predicated execution to improve the performance of a dynamically scheduled machine with speculative execution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, P.Y.; Hao, E.; Patt, Y.

    Conditional branches incur a severe performance penalty in wide-issue, deeply pipelined processors. Speculative execution and predicated execution are two mechanisms that have been proposed for reducing this penalty. Speculative execution can completely eliminate the penalty associated with a particular branch, but requires accurate branch prediction to be effective. Predicated execution does not require accurate branch prediction to eliminate the branch penalty, but is not applicable to all branches and can increase the latencies within the program. This paper examines the performance benefit of using both mechanisms to reduce the branch execution penalty. Predicated execution is used to handle the hard-to-predict branches and speculative execution is used to handle the remaining branches. The hard-to-predict branches within the program are determined by profiling. We show that this approach can significantly reduce the branch execution penalty suffered by wide-issue processors.

  6. A Simulation Based Approach to Optimize Berth Throughput Under Uncertainty at Marine Container Terminals

    NASA Technical Reports Server (NTRS)

    Golias, Mihalis M.

    2011-01-01

    Berth scheduling is a critical function at marine container terminals and determining the best berth schedule depends on several factors including the type and function of the port, size of the port, location, nearby competition, and type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objective to maximize customer satisfaction and reliability of the berth schedule under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithms based heuristic and a simulation post-Pareto analysis is proposed as the solution approach to the resulting problem. Based on a number of experiments it is concluded that the proposed berth scheduling policy outperforms the berth scheduling policy where reliability is not considered.
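
The abstract treats vessel handling times as draws from a known discrete distribution and asks how reliable a berth schedule is under that uncertainty. The following Monte Carlo sketch estimates one simple notion of reliability (the probability that every vessel finishes by its deadline); the distribution, times and function names are hypothetical, not the paper's model.

```python
import random

# Hypothetical sketch: estimate a berth schedule's reliability as the fraction
# of Monte Carlo samples in which every vessel, starting at its scheduled
# time, finishes by its deadline under stochastic handling times.

def schedule_reliability(start_times, finish_deadlines, handling_dist,
                         n_samples=10_000, seed=0):
    rng = random.Random(seed)
    times, probs = zip(*handling_dist)   # discrete distribution: (hours, prob)
    ok = 0
    for _ in range(n_samples):
        if all(s + rng.choices(times, probs)[0] <= d
               for s, d in zip(start_times, finish_deadlines)):
            ok += 1
    return ok / n_samples

if __name__ == "__main__":
    dist = [(8, 0.5), (10, 0.3), (14, 0.2)]      # handling hours, probability
    r = schedule_reliability([0, 12], [12, 26], dist)
    print(round(r, 2))                           # roughly 0.8 for this toy case
```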

  7. Optimizing donor scheduling before recruitment: An effective approach to increasing apheresis platelet collections.

    PubMed

    Lokhandwala, Parvez M; Shike, Hiroko; Wang, Ming; Domen, Ronald E; George, Melissa R

    2018-01-01

    The typical approach to increasing apheresis platelet collections is to recruit new donors. Here, we investigated the effectiveness of an alternative strategy: optimizing donor scheduling, prior to recruitment, at a hospital-based blood donor center. Analysis of collections during the 89 consecutive months since the opening of the donor center was performed. Linear regression and segmented time-series analyses were performed to calculate growth rates of collections and to test for statistical differences, respectively. Pre-intervention donor scheduling capacity was 39/month. In the absence of active donor recruitment, during the first 29 months, the number of collections rose gradually to 24/month (growth rate of 0.70/month). However, between month-30 and -55, collections exhibited a plateau at 25.6 ± 3.0 (growth rate of -0.09/month) (p<0.0001). This plateau phase coincided with the donor schedule approaching saturation (65.6 ± 7.6% of the schedule booked). Scheduling capacity was increased by the following two interventions: adding an apheresis instrument (month-56) and adding two more collection days/week (month-72). Consequently, the scheduling capacity increased to 130/month. Post-intervention, apheresis platelet collections between month-56 and -81 exhibited a spontaneous renewed growth at a rate of 0.62/month (p<0.0001), in the absence of active donor recruitment. Active donor recruitment in month-82 and -86, when the donor schedule had been optimized to accommodate further growth, resulted in a dramatic but transient surge in collections. Apheresis platelet collections plateau at nearly two-thirds of scheduling capacity. Optimizing scheduling capacity prior to active donor recruitment is an effective strategy to increase platelet collections at a hospital-based donor center.

  8. Decision-theoretic control of EUVE telescope scheduling

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1993-01-01

    This paper describes a decision-theoretic scheduler (DTS) designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems and using probabilistic inference to aggregate this information in light of the features of a given problem. The Bayesian Problem-Solver (BPS) introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvements over traditional techniques. Initial efforts suggest that similar improvements will be realizable when the approach is applied to typical constraint-satisfaction scheduling problems.

  9. Experiments with a decision-theoretic scheduler

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Holt, Gerhard; Mayer, Andrew

    1992-01-01

    This paper describes DTS, a decision-theoretic scheduler designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems, and using probabilistic inference to aggregate this information in light of features of a given problem. BPS, the Bayesian Problem-Solver, introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.

  10. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
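
The repair loop described above (start from a flawed schedule, repeatedly fix violations until none remain) can be sketched with a generic min-conflicts repair on a toy one-resource problem. This is an illustration of the general technique, not the Shuttle ground-processing system; the task durations and horizon are made up.

```python
import random

# Toy sketch of constraint-based iterative repair: unit tasks on one resource
# start from a flawed (fully overlapping) schedule; each iteration moves one
# task to the start time that minimizes the remaining overlap conflicts.

def conflicts(starts, dur):
    n = len(starts)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if starts[i] < starts[j] + dur and starts[j] < starts[i] + dur)

def repair(starts, dur, horizon, max_iters=1000, seed=1):
    rng = random.Random(seed)
    starts = list(starts)
    for _ in range(max_iters):
        if conflicts(starts, dur) == 0:
            break                          # conflict-free schedule produced
        i = rng.randrange(len(starts))
        # move task i to its least-conflicting start time (ties -> earliest)
        best = min(range(horizon - dur + 1),
                   key=lambda t: conflicts(starts[:i] + [t] + starts[i + 1:], dur))
        starts[i] = best
    return starts

if __name__ == "__main__":
    flawed = [0, 0, 0, 0]                  # four tasks, all overlapping
    fixed = repair(flawed, dur=2, horizon=10)
    print(conflicts(fixed, 2))             # 0 once repair succeeds
```

Because each move never increases the conflict count, the loop also has the 'anytime' flavor mentioned in the abstract: stopping early still yields the best schedule found so far.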

  11. Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.

    1994-01-01

    One drawback to using constraint-propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints such algorithms typically only show that no solution exists. While, technically correct, in practical situations, it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
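
The relaxation idea above can be sketched on a toy problem: constraints carry utilities, and when the set is unsatisfiable the lowest-utility constraint is dropped until a solution exists. The single-variable interval constraints and utility values below are hypothetical stand-ins for the paper's antenna-scheduling constraints.

```python
# Toy sketch of iterative constraint relaxation: interval constraints on a
# single variable, each with a utility; relax (drop) the least important
# constraint until the remaining set is satisfiable.

def satisfiable(intervals):
    """(lo, hi) intervals on one variable are satisfiable iff they intersect."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return lo <= hi

def relax_until_satisfiable(constraints):
    """constraints: list of (utility, (lo, hi)); drop lowest-utility first."""
    active = sorted(constraints, key=lambda c: c[0], reverse=True)
    while active and not satisfiable([c[1] for c in active]):
        active.pop()                       # relax the least important constraint
    return active

if __name__ == "__main__":
    cs = [(10, (0, 5)), (8, (3, 9)), (1, (7, 12))]   # (utility, window)
    kept = relax_until_satisfiable(cs)
    print([u for u, _ in kept])            # the low-utility window is relaxed
```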

  12. Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems

    NASA Technical Reports Server (NTRS)

    Heyward, Ann O.

    1989-01-01

    A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
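
The two stages above (generate children by interchanging pairs of schedule elements, then select the best node as the next parent) can be sketched as a simple search. The "interference" objective here is a stand-in (sum of adjacent-job interference weights), not the paper's satellite-planning objective, and the weights are hypothetical.

```python
# Toy sketch of the two-stage interchange heuristic: stage one generates
# alternative sequences by swapping adjacent pairs of jobs; stage two compares
# the children's objective values and keeps the best as the next parent.

def interference(seq, w):
    """Stand-in objective: total interference between adjacent jobs."""
    return sum(w[a][b] for a, b in zip(seq, seq[1:]))

def interchange_search(seq, w, generations=10):
    best = list(seq)
    for _ in range(generations):
        children = []
        for i in range(len(best) - 1):
            child = best[:]
            child[i], child[i + 1] = child[i + 1], child[i]
            children.append(child)
        winner = min(children, key=lambda s: interference(s, w))
        if interference(winner, w) >= interference(best, w):
            break                          # no child improves on the parent
        best = winner
    return best

if __name__ == "__main__":
    # hypothetical pairwise interference weights between three jobs
    w = {0: {1: 5, 2: 1}, 1: {0: 5, 2: 2}, 2: {0: 1, 1: 2}}
    print(interchange_search([0, 1, 2], w))   # [0, 2, 1] for these weights
```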

  13. Observational Assessment of Preschool Disruptive Behavior, Part II: validity of the Disruptive Behavior Diagnostic Observation Schedule (DB-DOS).

    PubMed

    Wakschlag, Lauren S; Briggs-Gowan, Margaret J; Hill, Carri; Danis, Barbara; Leventhal, Bennett L; Keenan, Kate; Egger, Helen L; Cicchetti, Domenic; Burns, James; Carter, Alice S

    2008-06-01

    To examine the validity of the Disruptive Behavior Diagnostic Observation Schedule (DB-DOS), a new observational method for assessing preschool disruptive behavior. A total of 327 behaviorally heterogeneous preschoolers from low-income environments comprised the validation sample. Parent and teacher reports were used to identify children with clinically significant disruptive behavior. The DB-DOS assessed observed disruptive behavior in two domains, problems in Behavioral Regulation and Anger Modulation, across three interactional contexts: Examiner Engaged, Examiner Busy, and Parent. Convergent and divergent validity of the DB-DOS were tested in relation to parent and teacher reports and independently observed behavior. Clinical validity was tested in terms of criterion and incremental validity of the DB-DOS for discriminating disruptive behavior status and impairment, concurrently and longitudinally. DB-DOS scores were significantly associated with reported and independently observed behavior in a theoretically meaningful fashion. Scores from both DB-DOS domains and each of the three DB-DOS contexts contributed uniquely to discrimination of disruptive behavior status, concurrently and predictively. Observed behavior on the DB-DOS also contributed incrementally to prediction of impairment over time, beyond variance explained by meeting DSM-IV disruptive behavior disorder symptom criteria based on parent/teacher report. The multidomain, multicontext approach of the DB-DOS is a valid method for direct assessment of preschool disruptive behavior. This approach shows promise for enhancing accurate identification of clinically significant disruptive behavior in young children and for characterizing subtypes in a manner that can directly inform etiological and intervention research.

  14. Range Scheduling Aid (RSA)

    NASA Technical Reports Server (NTRS)

    Logan, J. R.; Pulvermacher, M. K.

    1991-01-01

    Range Scheduling Aid (RSA) is presented in the form of the viewgraphs. The following subject areas are covered: satellite control network; current and new approaches to range scheduling; MITRE tasking; RSA features; RSA display; constraint based analytic capability; RSA architecture; and RSA benefits.

  15. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    NASA Astrophysics Data System (ADS)

    Goodwin, Graham. C.; Medioli, Adrian. M.

    2013-08-01

    Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.

  16. Intercell scheduling: A negotiation approach using multi-agent coalitions

    NASA Astrophysics Data System (ADS)

    Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde

    2016-10-01

    Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.

  17. Revisiting Bevacizumab + Cytotoxics Scheduling Using Mathematical Modeling: Proof of Concept Study in Experimental Non-Small Cell Lung Carcinoma.

    PubMed

    Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien

    2018-01-01

    Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy when using sequential vs. concomitant scheduling of bevacizumab and chemotherapy. Combining this data with a mathematical model of tumor growth under therapy accounting for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with reduced tumor growth of 38% as compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). Alternate sequencing of 8 days failed in achieving a similar increase in efficacy, thus emphasizing the utility of modeling support to identify optimal scheduling. The model could also be a useful tool in the clinic to personally tailor regimen sequences. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  18. Hybrid approaches to clinical trial monitoring: Practical alternatives to 100% source data verification.

    PubMed

    De, Sourabh

    2011-07-01

    For years, the vast majority of the clinical trial industry has followed the tenet of 100% source data verification (SDV). This has been driven partly by an overcautious approach that links data quality to the extent of monitoring and SDV, and partly by a desire to stay on the safer side of regulations. The regulations, however, do not state any upper or lower limits on SDV. What they expect from researchers and sponsors is methodologies that ensure data quality. How the industry does this is open to innovation and the application of statistical methods, targeted and remote monitoring, real-time reporting, adaptive monitoring schedules, and so on; in short, hybrid approaches to monitoring. Coupled with concepts of optimum monitoring and SDV at site and off-site monitoring techniques, it should be possible to reduce the time required to conduct SDV, leaving more time for other productive activities. Organizations stand to gain directly or indirectly from such savings, whether by diverting the funds back to the R&D pipeline, investing more in technology infrastructure to support large trials, or simply increasing the sample size of trials. Whether it also improves the work-life balance of monitors, who may then travel on a less hectic schedule for the same level of quality and productivity, can be predicted only when there is more evidence from the field.

  19. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications, from environment monitoring to event detection. The key challenge is to provide energy-efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy-efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is the design of an adaptive, low-latency and energy-efficient MAC for data gathering that reduces sleep latency. We propose a staggered schedule, duty-cycle adaptation, data prediction and the use of more-to-send packets to enable seamless packet forwarding under varying traffic loads and channel contention. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst-case end-to-end latency (referred to as the delay diameter). We develop a novel graph-theoretic formulation, derive and analyze optimal solutions for the tree and ring topologies, and develop heuristics for arbitrary topologies. The third study addresses the problem of minimum latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be solved by an M node-disjoint paths algorithm under a multiple-channel model. We further extend the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single-channel interference.
In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control and present both exponential and polynomial complexity solutions. Then we investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.
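
The latency benefit of DMAC's staggered schedule can be illustrated with back-of-the-envelope arithmetic: under a synchronized duty cycle a packet can wait up to a full cycle at every hop, while receive/send slots staggered by tree depth let each hop forward in the next slot. The numbers below are illustrative, not from the dissertation.

```python
# Back-of-the-envelope sketch of why a staggered schedule reduces sleep
# latency on a data-gathering tree (worst-case figures, hypothetical numbers).

def sync_latency(hops, cycle_ms):
    # synchronized duty cycle: up to one full cycle of waiting per hop
    return hops * cycle_ms

def staggered_latency(hops, slot_ms):
    # staggered slots: each hop hands the packet up in the following slot
    return hops * slot_ms

if __name__ == "__main__":
    hops, cycle_ms, slot_ms = 5, 1000, 10
    print(sync_latency(hops, cycle_ms))       # 5000 ms
    print(staggered_latency(hops, slot_ms))   # 50 ms
```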

  20. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    NASA Astrophysics Data System (ADS)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages and scheduling data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.

  1. Validating and Verifying Biomathematical Models of Human Fatigue

    NASA Technical Reports Server (NTRS)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's level of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (the McCauley Model, the Harvard Model, and the privately sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions to lapses on the PVT and to circadian phase. We will calculate the sensitivity and specificity of each model's predictions under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.
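
The study's planned sensitivity/specificity comparison reduces to a standard confusion-matrix computation on binary labels (fatigued vs. not, lapse vs. no lapse). The sketch below shows that computation on hypothetical data; it is a generic illustration, not the study's analysis pipeline.

```python
# Generic sensitivity/specificity sketch on hypothetical binary labels
# (1 = model predicts fatigue / observed PVT lapse, 0 = not).

def sensitivity_specificity(predicted, observed):
    tp = sum(p and o for p, o in zip(predicted, observed))
    tn = sum(not p and not o for p, o in zip(predicted, observed))
    fp = sum(p and not o for p, o in zip(predicted, observed))
    fn = sum(not p and o for p, o in zip(predicted, observed))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    model = [1, 1, 0, 0, 1, 0]    # hypothetical model predictions
    pvt   = [1, 0, 0, 0, 1, 1]    # hypothetical observed lapses
    sens, spec = sensitivity_specificity(model, pvt)
    print(round(sens, 2), round(spec, 2))
```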

  2. KSC-2014-3290

    NASA Image and Video Library

    2014-07-23

    VANDENBERG AIR FORCE BASE, Calif. – The first stage of the United Launch Alliance Delta II rocket for NASA's Soil Moisture Active Passive mission, or SMAP, accomplishes some tight turns on its approach to the Horizontal Processing Facility at Space Launch Complex 2 on Vandenberg Air Force Base in California. SMAP will provide global measurements of soil moisture and its freeze/thaw state. These measurements will be used to enhance understanding of processes that link the water, energy and carbon cycles, and to extend the capabilities of weather and climate prediction models. SMAP data also will be used to quantify net carbon flux in boreal landscapes and to develop improved flood prediction and drought monitoring capabilities. Launch is scheduled for November 2014. To learn more about SMAP, visit http://smap.jpl.nasa.gov. Photo credit: NASA/Randy Beaudoin

  3. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    PubMed

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2018-05-01

    The event-triggered control is a promising solution to cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, the sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
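
The communication-saving idea in the abstract (recompute and transmit the control input only when the state has drifted sufficiently since the last trigger) can be illustrated on a toy scalar system. This is a bare event-triggering sketch under made-up dynamics, not the paper's dual-mode robust MPC scheme.

```python
# Toy event-triggered control sketch: a scalar system x' = a*x + b*u where the
# feedback is recomputed only when |x - x_at_last_trigger| exceeds a threshold;
# between triggers the previously computed input is held. All values hypothetical.

def simulate(steps=50, a=0.9, b=1.0, threshold=0.2, x0=1.0):
    x, x_at_trigger, u, triggers = x0, x0, 0.0, 0
    for _ in range(steps):
        if abs(x - x_at_trigger) > threshold or triggers == 0:
            u = -a / b * x               # deadbeat-style feedback at trigger times
            x_at_trigger = x
            triggers += 1
        x = a * x + b * u                # held input between triggers
    return triggers, x

if __name__ == "__main__":
    triggers, x_final = simulate()
    # far fewer control updates than time steps, with the state regulated to zero
    print(triggers, round(abs(x_final), 3))
```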

  4. Robust control for spacecraft rendezvous system with actuator unsymmetrical saturation: a gain scheduling approach

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Xue, Anke

    2018-06-01

    This paper proposes a robust controller for the spacecraft rendezvous system, considering parameter uncertainties and unsymmetrical actuator saturation, based on a discrete gain scheduling approach. By a change of variables, we transform the unsymmetrical actuator saturation control problem into a symmetrical one. The main advantage of the proposed method is that it improves the dynamic performance of the closed-loop system with a region of attraction as large as possible. Via the Lyapunov approach and the scheduling technique, the existence conditions for the admissible controller are formulated in the form of linear matrix inequalities. A numerical simulation illustrates the effectiveness of the proposed method.
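
The change of variables mentioned in the abstract amounts to shifting an unsymmetrical actuator limit by its midpoint so the transformed input has a symmetric bound. A minimal sketch, with illustrative bounds:

```python
# Sketch of symmetrizing an unsymmetrical actuator limit: u in [u_min, u_max]
# becomes u = offset + v with |v| <= radius after shifting by the midpoint.

def symmetrize(u_min, u_max):
    """Return (offset, radius) so that u = offset + v with |v| <= radius."""
    offset = (u_max + u_min) / 2.0
    radius = (u_max - u_min) / 2.0
    return offset, radius

if __name__ == "__main__":
    offset, radius = symmetrize(-1.0, 3.0)   # hypothetical unsymmetrical bounds
    print(offset, radius)                    # 1.0 2.0
```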

  5. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
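
    The schedule selection the dissertation formalizes as an LP/ILP can be illustrated with a toy sketch: under a node power cap, pick the fastest feasible (DVFS state, thread count) configuration. The power and runtime models below are invented assumptions for illustration, not measured data from the dissertation:

```python
# Sketch: choosing a (DVFS state, thread count) configuration per code
# section under a node power cap. The power/performance models below are
# toy stand-ins; a real system would use measured per-section profiles.
from itertools import product

DVFS_STATES = [1.2, 1.6, 2.0, 2.4]   # GHz (hypothetical states)
THREADS = [4, 8, 16]                 # OpenMP thread counts (hypothetical)

def power_watts(freq, threads):
    # Toy model: power grows with frequency^2 and with thread count.
    return 20.0 + 2.5 * threads + 8.0 * freq ** 2

def runtime_sec(freq, threads, work=1000.0):
    # Toy model: linear speedup with frequency, sublinear with threads.
    return work / (freq * threads ** 0.8)

def best_config(power_cap):
    """Return the fastest (freq, threads) whose power fits under the cap."""
    feasible = [(f, t) for f, t in product(DVFS_STATES, THREADS)
                if power_watts(f, t) <= power_cap]
    if not feasible:
        return None
    return min(feasible, key=lambda ft: runtime_sec(*ft))
```

    With an unlimited cap the search picks the highest frequency and thread count; tightening the cap forces it down the configuration space, mirroring the trade-off the LP formulation optimizes exactly.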

  6. Taking the Lag out of Jet Lag through Model-Based Schedule Design

    PubMed Central

    Dean, Dennis A.; Forger, Daniel B.; Klerman, Elizabeth B.

    2009-01-01

    Travel across multiple time zones results in desynchronization of environmental time cues and the sleep–wake schedule from their normal phase relationships with the endogenous circadian system. Circadian misalignment can result in poor neurobehavioral performance, decreased sleep efficiency, and inappropriately timed physiological signals including gastrointestinal activity and hormone release. Frequent and repeated transmeridian travel is associated with long-term cognitive deficits, and rodents experimentally exposed to repeated schedule shifts have increased death rates. One approach to reduce the short-term circadian, sleep–wake, and performance problems is to use mathematical models of the circadian pacemaker to design countermeasures that rapidly shift the circadian pacemaker to align with the new schedule. In this paper, the use of mathematical models to design sleep–wake and countermeasure schedules for improved performance is demonstrated. We present an approach to designing interventions that combines an algorithm for optimal placement of countermeasures with a novel mode of schedule representation. With these methods, rapid circadian resynchrony and the resulting improvement in neurobehavioral performance can be quickly achieved even after moderate to large shifts in the sleep–wake schedule. The key schedule design inputs are endogenous circadian period length, desired sleep–wake schedule, length of intervention, background light level, and countermeasure strength. The new schedule representation facilitates schedule design, simulation studies, and experiment design and significantly decreases the amount of time to design an appropriate intervention. The method presented in this paper has direct implications for designing jet lag, shift-work, and non-24-hour schedules, including scheduling for extreme environments, such as in space, undersea, or in polar regions. PMID:19543382
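
    The schedule-design idea can be caricatured with a toy phase-response model: each day, an optimally timed light countermeasure closes part of the gap between internal and external time. The PRC shape, strength values, and tolerance below are invented for illustration and are far simpler than the limit-cycle pacemaker model the paper actually uses:

```python
import math

# Toy phase-response-curve (PRC) model of re-entrainment after a
# time-zone shift. All shapes and magnitudes here are illustrative.
def prc(phase_gap_h, strength=1.5):
    """Hours of phase shift produced by one optimally timed light pulse,
    given the current misalignment; saturating, capped by `strength`."""
    return strength * math.tanh(abs(phase_gap_h) / 6.0)

def days_to_entrain(shift_h, strength=1.5, tol=0.5):
    """Days of optimally timed light needed to close a `shift_h`-hour gap."""
    gap, days = abs(shift_h), 0
    while gap > tol and days < 60:
        gap -= prc(gap, strength)
        days += 1
    return days
```

    The qualitative behavior matches the paper's premise: larger shifts take longer to absorb, and stronger countermeasures shorten re-entrainment.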

  7. Design and Evaluation of the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.; Thipphavong, Jane; Sadovsky, Alex; Chen, Liang; Sullivan, Chris; Martin, Lynne

    2011-01-01

    This paper describes the design, development and results from a high-fidelity human-in-the-loop simulation of an integrated set of trajectory-based automation tools providing precision scheduling, sequencing and controller merging and spacing functions. These integrated functions are combined into a system called the Terminal Area Precision Scheduling and Spacing (TAPSS) system. It is a strategic and tactical planning tool that provides Traffic Management Coordinators, En Route and Terminal Radar Approach Control air traffic controllers with the ability to efficiently optimize the arrival capacity of a demand-impacted airport while simultaneously enabling fuel-efficient descent procedures. The TAPSS system consists of four-dimensional trajectory prediction, arrival runway balancing, aircraft separation constraint-based scheduling, traffic flow visualization and trajectory-based advisories to assist controllers in efficient metering, sequencing and spacing. The TAPSS system was evaluated and compared to today's ATC operation through an extensive series of human-in-the-loop simulations of arrival flows into Los Angeles International Airport. The test conditions included the variation of aircraft demand from a baseline of today's capacity-constrained periods through 5%, 10% and 20% increases. Performance data were collected for engineering and human-factors analysis and compared with similar operations both with and without the TAPSS system. The engineering data indicate that operations with TAPSS show up to a 10% increase in airport throughput during capacity-constrained periods while maintaining fuel-efficient aircraft descent profiles from cruise to landing.

  8. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  9. Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.

    PubMed

    Huck, F O; Wall, S D

    1976-07-01

    Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.

  10. Issues in NASA Program and Project Management: Focus on Project Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward J. (Editor); Lawbaugh, William M. (Editor)

    1997-01-01

    Topics addressed include: Planning and scheduling training for working project teams at NASA, overview of project planning and scheduling workshops, project planning at NASA, new approaches to systems engineering, software reliability assessment, and software reuse in wind tunnel control systems.

  11. A case study: the initiative to improve RN scheduling at Hamilton Health Sciences.

    PubMed

    Wallace, Laurel-Anne; Pierson, Sharon

    2008-01-01

    In 2003, Hamilton Health Sciences embarked on an initiative to improve and standardize nursing schedules and scheduling practices. The scheduling project was one of several initiatives undertaken by a corporate-wide Nursing Resource Group established to enhance the work environment and patient care and to ensure appropriate utilization of nursing resources across the organization's five hospitals. This article focuses on major activities undertaken in the scheduling initiative. The step-by-step approach described, plus examples of the scheduling resources developed and samples of extended-tour schedules, will all provide insight, potential strategies and practical help for nursing administrators, human resources (HR) personnel and others interested in improving nurse scheduling.

  12. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective: Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods: To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results: We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion: This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. 
This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
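
    The platform's three steps (build a task dependency graph, order it topologically, run independent tasks in parallel) can be sketched with Python's standard library. A thread pool stands in here for PARAMO's Map-Reduce back end, and the task names are illustrative:

```python
# Sketch of a PARAMO-style pipeline runner: topologically order a task
# dependency graph and execute each ready batch in parallel. A thread
# pool substitutes for the platform's Map-Reduce execution layer.
from graphlib import TopologicalSorter
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(deps, work):
    """deps: {task: set of prerequisite tasks}; work: {task: callable}."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    order = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        while ts.is_active():
            ready = list(ts.get_ready())                 # all runnable tasks
            list(pool.map(lambda t: work[t](), ready))   # run batch in parallel
            for t in ready:
                ts.done(t)
                order.append(t)
    return order
```

    For a modeling pipeline such as cohort → feature → cross-validation → selection → classification, independent branches (e.g., many cohorts) land in the same ready batch and run concurrently.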

  13. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    PubMed

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000-patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed up the research workflow and reuse of health information. 
This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers.

  14. Modeling procedure and surgical times for current procedural terminology-anesthesia-surgeon combinations and evaluation in terms of case-duration prediction and operating room efficiency: a multicenter study.

    PubMed

    Stepaniak, Pieter S; Heij, Christiaan; Mannaerts, Guido H H; de Quelerij, Marcel; de Vries, Guus

    2009-10-01

    Gains in operating room (OR) scheduling may be obtained by using accurate statistical models to predict surgical and procedure times. The 3 main contributions of this article are the following: (i) the validation of Strum's results on the statistical distribution of case durations, including surgeon effects, using OR databases of 2 European hospitals, (ii) the use of expert prior expectations to predict durations of rarely observed cases, and (iii) the application of the proposed methods to predict case durations, with an analysis of the resulting increase in OR efficiency. We retrospectively reviewed all recorded surgical cases of 2 large European teaching hospitals from 2005 to 2008, involving 85,312 cases and 92,099 h in total. Surgical times tended to be skewed and bounded by some minimally required time. We compared the fit of the normal distribution with that of 2- and 3-parameter lognormal distributions for case durations of a range of Current Procedural Terminology (CPT)-anesthesia combinations, including possible surgeon effects. For cases with very few observations, we investigated whether supplementing the data information with surgeons' prior guesses helps to obtain better duration estimates. Finally, we used best fitting duration distributions to simulate the potential efficiency gains in OR scheduling. The 3-parameter lognormal distribution provides the best results for the case durations of CPT-anesthesia (surgeon) combinations, with an acceptable fit for almost 90% of the CPTs when segmented by the factor surgeon. The fit is best for surgical times and somewhat less for total procedure times. Surgeons' prior guesses are helpful for OR management to improve duration estimates of CPTs with very few (<10) observations. 
Compared with the standard way of case scheduling, using the mean of the 3-parameter lognormal distribution reduces the mean overreserved OR time per case by up to 11.9 (11.8-12.0) min (55.6%) and the mean underreserved OR time per case by up to 16.7 (16.5-16.8) min (53.1%). When cases are scheduled using the 3-parameter lognormal model, the mean overutilized OR time is up to 20.0 (19.7-20.3) min per OR per day lower than for the standard method and 11.6 (11.3-12.0) min per OR per day lower than for the bias-corrected mean. OR case scheduling can be improved by using the 3-parameter lognormal model with surgeon effects and by using surgeons' prior guesses for rarely observed CPTs. Using the 3-parameter lognormal model for case-duration prediction and scheduling significantly reduces both the prediction error and OR inefficiency.
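
    A 3-parameter (shifted) lognormal can be fitted in a few lines by the classical method of moments; this is a pure-stdlib stand-in for the authors' estimation procedure, and it assumes right-skewed durations (positive sample skewness):

```python
import math
from statistics import fmean

# Sketch: method-of-moments fit of a 3-parameter (shifted) lognormal,
# X = shift + exp(mu + sigma*Z). Matches sample mean, sd and skewness;
# assumes the data are right-skewed, as case durations typically are.
def fit_shifted_lognormal(xs):
    """Return (shift, mu, sigma) matching the first three sample moments."""
    m = fmean(xs)
    sd = fmean([(x - m) ** 2 for x in xs]) ** 0.5
    g = fmean([(x - m) ** 3 for x in xs]) / sd ** 3   # sample skewness
    # Skewness of a lognormal is (w + 2) * sqrt(w - 1) with w = exp(sigma^2);
    # solve for w by bisection (the left side is increasing in w).
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(80):
        w = (lo + hi) / 2
        if (w + 2) ** 2 * (w - 1) < g * g:
            lo = w
        else:
            hi = w
    sigma = math.sqrt(math.log(w))
    scale = sd / math.sqrt(w * (w - 1))    # scale = exp(mu)
    shift = m - scale * math.sqrt(w)       # mean = shift + scale*sqrt(w)
    return shift, math.log(scale), sigma
```

    The recovered shift acts as the "minimally required time" bound mentioned in the abstract; scheduling against quantiles of the fitted distribution, rather than the raw mean, is what reduces over- and underreserved OR time.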

  15. Application of the Materials-by-Design Methodology to Redesign a New Grade of the High-Strength Low-Alloy Class of Steels with Improved Mechanical Properties and Processability

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Snipes, J. S.; Ramaswami, S.

    2016-01-01

    An alternative to the traditional trial-and-error empirical approach for the development of new materials is the so-called materials-by-design approach. Within the latter approach, a material is treated as a complex system, and its design and optimization are carried out by employing computer-aided engineering analyses, predictive tools, and available material databases. In the present work, the materials-by-design approach is utilized to redesign a grade of the high-strength low-alloy (HSLA) class of steels with improved mechanical properties (primarily strength and fracture toughness), processability (e.g., castability, hot formability, and weldability), and corrosion resistance. Toward that end, a number of material thermodynamics, kinetics of phase transformations, and physics of deformation and fracture computational models and databases have been developed/assembled and utilized within a multi-disciplinary, two-level material-by-design optimization scheme. To validate the models, their predictions are compared against the experimental results for the related steel HSLA100. Then the optimization procedure is employed to determine the optimal chemical composition and the tempering schedule for a newly designed grade of the HSLA class of steels with enhanced mechanical properties, processability, and corrosion resistance.

  16. A Climatic Stability Approach to Prioritizing Global Conservation Investments

    PubMed Central

    Iwamura, Takuya; Wilson, Kerrie A.; Venter, Oscar; Possingham, Hugh P.

    2010-01-01

    Climate change is impacting species and ecosystems globally. Many existing templates to identify the most important areas to conserve terrestrial biodiversity at the global scale neglect the future impacts of climate change. Unstable climatic conditions are predicted to undermine conservation investments in the future. This paper presents an approach to developing a resource allocation algorithm for conservation investment that incorporates the ecological stability of ecoregions under climate change. We discover that allocating funds in this way changes the optimal schedule of global investments both spatially and temporally. This allocation reduces the biodiversity loss of terrestrial endemic species from protected areas due to climate change by 22% for the period of 2002–2052, when compared to allocations that do not consider climate change. To maximize the resilience of global biodiversity to climate change we recommend that funding be increased in ecoregions located in the tropics and/or mid-elevation habitats, where climatic conditions are predicted to remain relatively stable. Accounting for the ecological stability of ecoregions provides a realistic approach to incorporating climate change into global conservation planning, with potential to save more species from extinction in the long term. PMID:21152095

  17. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
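
    The forecast-then-reconfigure data flow can be sketched with a deliberately simple forecaster. An AR(1) least-squares fit stands in below for the paper's support-vector-regression forecaster with parallel parameter search, which would require a dedicated solver:

```python
# Sketch: short-term load forecasting feeding a reconfiguration decision.
# An AR(1) model fitted by least squares is a stdlib stand-in for the
# paper's SVR-based forecaster; the load series here is illustrative.
from statistics import fmean

def fit_ar1(series):
    """Fit y[t] = a + b * y[t-1] by least squares; return (a, b)."""
    x, y = series[:-1], series[1:]
    mx, my = fmean(x), fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    return my - b * mx, b

def forecast(series, steps):
    """Iterate the fitted recursion to predict the next `steps` loads."""
    a, b = fit_ar1(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out
```

    In the paper's setting, the forecast horizon (rather than a single load snapshot) is what drives the choice of the minimum-loss topology.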

  18. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. Load forecasting techniques can provide an accurate prediction of future load power and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions over a longer time period instead of a snapshot of the load at the time when the reconfiguration happens, giving the distribution system operator (DSO) the information needed to better operate the system reconfiguration and achieve optimal solutions. This paper therefore proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term, high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameter optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.

  20. Smart EV Energy Management System to Support Grid Services

    NASA Astrophysics Data System (ADS)

    Wang, Bin

    Under smart grid scenarios, advanced sensing and metering technologies have been applied to the legacy power grid to improve system observability and real-time situational awareness. Meanwhile, an increasing amount of distributed energy resources (DERs), such as renewable generation, electric vehicles (EVs) and battery energy storage systems (BESS), is being integrated into the power system. However, the integration of EVs, which can be modeled as controllable mobile energy devices, brings both challenges and opportunities to grid planning and energy management, due to the intermittency of renewable generation, uncertainties of EV driver behaviors, etc. This dissertation aims to solve the real-time EV energy management problem in order to improve the overall grid efficiency, reliability and economics, using online and predictive optimization strategies. Most of the previous research on EV energy management strategies and algorithms is based on simplified models with unrealistic assumptions that EV charging behaviors are perfectly known or follow known distributions, such as the arrival time, departure time and energy consumption values. These approaches fail to obtain optimal solutions in real time because of system uncertainties. Moreover, there is a lack of data-driven strategies that perform online and predictive scheduling of EV charging behaviors under microgrid scenarios. Therefore, we develop an online predictive EV scheduling framework, considering uncertainties of renewable generation, building load and EV driver behaviors, based on real-world data. A kernel-based estimator is developed to predict charging session parameters in real time with improved estimation accuracy. The efficacy of various optimization strategies supported by this framework, including valley-filling, cost reduction and event-based control, has been demonstrated. 
In addition, the existing simulation-based approaches do not consider a variety of practical concerns of implementing such a smart EV energy management system, including the driver preferences, communication protocols, data models, and customized integration of existing standards to provide grid services. Therefore, this dissertation also solves these issues by designing and implementing a scalable system architecture to capture the user preferences, enable multi-layer communication and control, and finally improve the system reliability and interoperability.
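
    One plausible reading of the "kernel-based estimator" for charging-session parameters is Nadaraya-Watson kernel regression; the sketch below predicts session energy from arrival hour. The data shape, bandwidth, and the choice of a Gaussian kernel are assumptions, not details taken from the dissertation:

```python
import math

# Sketch: Gaussian-kernel (Nadaraya-Watson) regression of charging
# session energy on arrival hour. Data layout and bandwidth are
# illustrative assumptions.
def kernel_predict(history, arrival_hour, bandwidth=1.5):
    """history: list of (arrival_hour, kWh) pairs from past sessions.
    Returns a weighted average of past energies, weighted by how close
    each past arrival hour is to the new one."""
    weights = [math.exp(-0.5 * ((arrival_hour - h) / bandwidth) ** 2)
               for h, _ in history]
    return (sum(w * kwh for w, (_, kwh) in zip(weights, history))
            / sum(weights))
```

    A scheduler can feed such estimates into valley-filling or cost-reduction optimization before a session's true departure time and energy need are known.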

  1. Ensemble forecasting for renewable energy applications - status and current challenges for their generation and verification

    NASA Astrophysics Data System (ADS)

    Pinson, Pierre

    2016-04-01

    The operational management of renewable energy generation in power systems and electricity markets requires forecasts in various forms, e.g., deterministic or probabilistic, continuous or categorical, depending upon the decision process at hand. Besides, such forecasts may also be necessary at various spatial and temporal scales, from high temporal resolutions (on the order of minutes) and very localized for an offshore wind farm, to coarser temporal resolutions (hours) and covering a whole country for day-ahead power scheduling problems. As of today, weather predictions are a common input to forecasting methodologies for renewable energy generation. Since for most decision processes optimal decisions can only be made by accounting for forecast uncertainties, ensemble predictions and density forecasts are increasingly seen as the product of choice. After discussing some of the basic approaches to obtaining ensemble forecasts of renewable power generation, it will be argued that generating space-time trajectories of renewable power production may or may not necessitate post-processing of ensemble forecasts for the relevant weather variables. Example approaches and test case applications will be covered, e.g., looking at the Horns Rev offshore wind farm in Denmark, or gridded forecasts for the whole of continental Europe. Finally, we will illustrate some of the limitations of current frameworks for forecast verification, which make it difficult to fully assess the quality of post-processing approaches used to obtain renewable energy predictions.
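
    One standard score in the verification toolbox discussed here is the pinball (quantile) loss, which rewards quantile forecasts that bracket the observation tightly. The quantile levels and values below are illustrative:

```python
# Sketch: pinball (quantile) loss for verifying probabilistic
# renewable-power forecasts. Lower is better; a forecast that is
# systematically biased away from the observations scores worse.
def pinball_loss(quantiles, observation):
    """quantiles: {tau: forecast value at quantile level tau}."""
    loss = 0.0
    for tau, q in quantiles.items():
        if observation >= q:
            loss += tau * (observation - q)       # under-forecast penalty
        else:
            loss += (1 - tau) * (q - observation)  # over-forecast penalty
    return loss / len(quantiles)
```

    Averaged over many forecast-observation pairs, this gives one scalar summary of density forecast quality; it does not, however, capture the space-time dependence structure that trajectory forecasts are meant to convey, which is one of the verification gaps the abstract alludes to.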

  2. Development of Watch Schedule Using Rules Approach

    NASA Astrophysics Data System (ADS)

    Jurkevicius, Darius; Vasilecas, Olegas

    The software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his/her own account to see his/her assignments, manage requests, etc. Employees designated as administrators can perform the employee scheduling online, manage requests, etc. This scheduling software allows users not only to see the initial and optimized watch schedule in a simple and understandable form, but also to create special rules and criteria and input their business rules. Using these rules, the system automatically generates the watch schedule.
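
    The rules-driven generation described above can be sketched as predicates applied while filling a rota. The specific rules (no back-to-back watches, honoring requested days off) and the round-robin fill policy are invented for illustration:

```python
# Sketch: generating a watch schedule from declarative rules. Each rule
# is a predicate over (employee, day, schedule-so-far); the rule set and
# rotation policy are illustrative, not the paper's actual engine.
from itertools import cycle

def no_back_to_back(emp, day, schedule):
    return schedule.get(day - 1) != emp           # rest day between watches

def respects_requests(requests):
    def rule(emp, day, schedule):
        return day not in requests.get(emp, set())  # honor requested days off
    return rule

def build_schedule(employees, days, rules):
    schedule = {}
    pool = cycle(employees)
    for day in range(days):
        for _ in range(len(employees)):           # try each employee once
            emp = next(pool)
            if all(rule(emp, day, schedule) for rule in rules):
                schedule[day] = emp
                break
        else:
            schedule[day] = None                  # no one satisfies the rules
    return schedule
```

    New business rules are added by appending predicates, which is the extensibility the record emphasizes.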

  3. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  4. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
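
    The quantity the paper computes analytically can be approximated by simulation. The sketch below is a Monte Carlo stand-in, not the authors' cogent method: it assumes normally distributed service times, a caller-supplied time-dependent show probability, and illustrative wait/idle cost weights:

```python
import random

# Sketch: Monte Carlo estimate of the expected cost of an appointment
# schedule with statistically identical customers, probabilistic service
# times, and time-dependent no-shows. Cost weights are illustrative.
def expected_cost(slots, mean, sd, show_prob, wait_w=1.0, idle_w=0.5,
                  n_sims=2000, seed=7):
    """slots: appointment times; show_prob(i): probability customer i shows."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        clock = 0.0
        for i, t in enumerate(slots):
            if clock < t:
                total += idle_w * (t - clock)   # server idles until the slot
                clock = t
            else:
                total += wait_w * (clock - t)   # customer waits for the server
            if rng.random() < show_prob(i):
                clock += max(0.1, rng.gauss(mean, sd))  # serve the customer
    return total / n_sims
```

    Embedded in an optimization loop, such an evaluator lets candidate schedules be compared, though the paper's analytic recursion evaluates each candidate far more cheaply and exactly.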

  5. Synchrophasor Sensing and Processing based Smart Grid Security Assessment for Renewable Energy Integration

    NASA Astrophysics Data System (ADS)

    Jiang, Huaiguang

    With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly characterized by distributed renewable energy generation, demand-response control and a huge amount of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is proposed and studied. This approach is based on matching pursuit decomposition (MPD) using a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering economic issues, the placement of distributed synchrophasor sensors is optimized to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, this approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that have the greatest impact on the voltage at the wind power plant's point of interconnection can be identified. 
Unlike dynamic voltage security assessment methods, which rely on time-domain simulations of faults at different locations, the proposed steady-state approach is feasible, convenient, and effective. Conventionally, wind energy is highly location-dependent. Many desirable wind resources are located in rural areas without direct access to the transmission grid. By connecting MW-scale wind turbines or wind farms to the distribution system of the SG, the cost of building long transmission facilities can be avoided and the wind power supplied to consumers can be greatly increased. After the effective wide-area monitoring (WAM) approach is built, an event-driven control strategy is proposed for renewable energy integration. This approach is based on a support vector machine (SVM) predictor and multiple-input multiple-output (MIMO) model predictive control (MPC) on linear time-invariant (LTI) and linear time-variant (LTV) systems. The voltage condition of the distribution system is predicted by the SVM classifier using synchrophasor measurement data. The controllers equipped with wind turbine generators are triggered by the prediction results. Both the transmission level and the distribution level are designed based on this proposed approach. Considering economic issues in the power system, a statistical scheduling approach to economic dispatch and energy reserves is proposed. The proposed approach focuses on minimizing the overall power operating cost with consideration of renewable energy uncertainty and power system security. The hybrid power system scheduling is formulated as a convex programming problem to minimize the power operating cost, taking into consideration renewable energy generation, power generation-consumption balance, and power system security. A genetic algorithm based approach is used to solve the minimization of the power operating cost.
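The genetic-algorithm treatment of the dispatch problem can be sketched as follows, with a soft penalty standing in for the balance and security constraints; the quadratic cost coefficients, generator limits, and 150 MW net demand are illustrative assumptions only, not the dissertation's data.

```python
import random

random.seed(7)

# Hypothetical quadratic cost curves c(p) = a*p^2 + b*p for three generators,
# with per-generator output limits; all numbers are invented.
COST = [(0.010, 2.0), (0.015, 1.5), (0.020, 1.0)]
P_MAX = [100.0, 80.0, 60.0]
DEMAND = 150.0          # load minus forecast renewable generation (MW)

def fitness(p):
    cost = sum(a * x * x + b * x for (a, b), x in zip(COST, p))
    balance_penalty = 1e3 * abs(sum(p) - DEMAND)   # soft balance constraint
    return cost + balance_penalty

def mutate(p, scale=5.0):
    # Gaussian perturbation clipped to each generator's feasible range.
    return [min(mx, max(0.0, x + random.gauss(0, scale)))
            for x, mx in zip(p, P_MAX)]

def genetic_dispatch(pop_size=40, generations=200):
    pop = [[random.uniform(0, mx) for mx in P_MAX] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                     # elitist selection
        elite = pop[: pop_size // 4]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)

best = genetic_dispatch()
print(round(sum(best), 1))   # total output should sit near the 150.0 MW demand
```

Because the penalty coefficient dominates the generation cost scale, the surviving individuals cluster tightly around the demanded total output.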
In addition, with technology development, it can be predicted that renewable energy sources such as wind turbine generators and PV panels will be pervasively located in distribution systems. The distribution system is an unbalanced system that contains single-phase, two-phase, and three-phase loads and distribution lines. This complex configuration makes power flow calculation challenging. A topology-analysis-based iterative approach is used to solve this problem. In this approach, a self-adaptive topology recognition method is used to analyze the distribution system, and the backward/forward sweep algorithm is used to generate the power flow results. Finally, for the numerical simulations, the IEEE 14-bus, 30-bus, 39-bus, and 118-bus systems are studied for fault detection, identification, and location. Both transmission-level and distribution-level models are employed with the proposed control strategy for voltage stability of renewable energy integration. The simulation results demonstrate the effectiveness of the proposed methods. The IEEE 24-bus reliability test system (IEEE-RTS), which is commonly used to evaluate the price stability and reliability of power systems, is used as the test bench for verifying and evaluating the system performance of the proposed scheduling approach.
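A minimal sketch of the backward/forward sweep on a two-branch radial feeder is shown below; the per-unit impedances and loads are made up, and a fixed iteration count stands in for a proper convergence check.

```python
# Backward/forward sweep for a small radial feeder (single-phase equivalent).
# Bus 0 is the slack; each entry of LINES is (from_bus, to_bus, impedance).
LINES = [(0, 1, 0.01 + 0.02j), (1, 2, 0.02 + 0.04j)]
S_LOAD = {1: 0.5 + 0.2j, 2: 0.8 + 0.3j}   # per-unit complex loads
V_SLACK = 1.0 + 0.0j

def backward_forward_sweep(iters=20):
    v = {0: V_SLACK, 1: V_SLACK, 2: V_SLACK}   # flat start
    for _ in range(iters):
        # Backward sweep: accumulate branch currents from the feeder end.
        i_inj = {b: (S_LOAD.get(b, 0) / v[b]).conjugate() for b in v}
        i_branch = {}
        for f, t, z in reversed(LINES):
            i_branch[(f, t)] = i_inj[t] + sum(
                i_branch[(a, b)] for (a, b) in i_branch if a == t)
        # Forward sweep: update voltages from the slack bus outward.
        for f, t, z in LINES:
            v[t] = v[f] - z * i_branch[(f, t)]
    return v

v = backward_forward_sweep()
print(round(abs(v[2]), 4))   # voltage magnitude at the feeder end
```

The backward pass sums load currents toward the slack bus; the forward pass propagates voltage drops outward, so each iteration refines the voltage profile of the radial network.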

  6. Congestion game scheduling for virtual drug screening optimization

    NASA Astrophysics Data System (ADS)

    Nikitina, Natalia; Ivashko, Evgeny; Tchernykh, Andrei

    2018-02-01

In virtual drug screening, the chemical diversity of hits is an important factor, along with their predicted activity. Moreover, interim results are of interest for directing further research, and their diversity is also desirable. In this paper, we consider the problem of obtaining a diverse set of virtual screening hits in a short time. To this end, we propose a mathematical model of task scheduling for virtual drug screening in high-performance computational systems as a congestion game between computational nodes, finding the equilibrium solutions that best balance the number of interim hits with their chemical diversity. The model considers a heterogeneous environment with workload uncertainty, processing time uncertainty, and limited knowledge of the input dataset structure. We perform computational experiments and evaluate the performance of the developed approach on the organic molecule database GDB-9. The set of molecules used is rich enough to demonstrate the feasibility and practicability of the proposed solutions. We compare the algorithm with two known heuristics used in practice and observe that game-based scheduling outperforms them in hit discovery rate and chemical diversity at earlier steps. Based on these results, we use a social utility metric to assess the efficiency of our equilibrium solutions and show that they reach the greatest values.
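The congestion-game formulation can be illustrated with best-response dynamics on a toy instance: each node picks a task class whose latency grows with the number of nodes choosing it, and play repeats until no node benefits from deviating. The class names, linear delay functions, and node count are invented and far simpler than the paper's model.

```python
# Best-response dynamics for a simple congestion game: each computational
# node picks one task class; a class's latency grows linearly with the
# number of nodes that chose it. Such games always have a pure equilibrium.
BASE_DELAY = {"classA": 1.0, "classB": 2.0, "classC": 3.0}

def cost(cls, load):
    return BASE_DELAY[cls] * load      # linear congestion cost

def best_response_equilibrium(n_nodes=6, max_rounds=50):
    choice = ["classA"] * n_nodes       # everyone starts on the cheapest class
    for _ in range(max_rounds):
        changed = False
        for i in range(n_nodes):
            loads = {c: sum(1 for j, ch in enumerate(choice) if ch == c and j != i)
                     for c in BASE_DELAY}
            best = min(BASE_DELAY, key=lambda c: cost(c, loads[c] + 1))
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:                 # no node wants to deviate: equilibrium
            return choice
    return choice

eq = best_response_equilibrium()
print(sorted(eq))
```

Starting from everyone on the cheapest class, a few deviations spread the nodes across classes until the congested costs equalize and no further move pays off.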

  7. 48 CFR 36.515 - Schedules for construction contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... contemplated, the contract amount is expected to exceed the simplified acquisition threshold, and the period of... covering other management approaches for ensuring that a contractor makes adequate progress. [48 FR 42356... Schedules for construction contracts. The contracting officer may insert the clause at 52.236-15, Schedules...

  8. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Bjorkman, Gerry; Kooney, Alex; Russell, Carolyn

    2003-01-01

The weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  9. Market-Based Approaches to Managing Science Return from Planetary Missions

    NASA Technical Reports Server (NTRS)

    Wessen, Randii R.; Porter, David; Hanson, Robin

    1996-01-01

A research plan is described for the design and testing of a method for the planning and negotiation of science observations. The research plan is presented in relation to the fact that the current method, which involves a hierarchical process of science working groups, is unsuitable for the planning of the Cassini mission. The research plan involves a market-based approach in which participants are allocated budgets of scheduling points. The points are used to express an intensity of preference for the observations being scheduled. In this way, the schedulers do not have to limit themselves to solving major conflicts, but instead try to maximize the number of scheduling points that result in a conflict-free timeline. The fixed budget gives participants an incentive to make careful tradeoff decisions. A degree of feedback is provided in the process so that the schedulers may rebid based on the current timeline.
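A one-pass greedy sketch of point-based scheduling is shown below (the actual process is iterative, with rebidding against the current timeline); the observation names, point bids, and time windows are invented.

```python
# Greedy sketch of point-based observation scheduling: each observation
# carries the bidder's scheduling points and a time window; we keep a
# conflict-free subset, favouring high-point requests.
def schedule_by_points(observations):
    """observations: list of (name, points, start, end); returns a
    conflict-free timeline ordered by start time."""
    timeline = []
    for name, points, start, end in sorted(observations,
                                           key=lambda o: -o[1]):
        # Accept the request only if its window overlaps nothing accepted.
        if all(end <= s or start >= e for _, _, s, e in timeline):
            timeline.append((name, points, start, end))
    return sorted(timeline, key=lambda o: o[2])

obs = [("ringA", 30, 0, 4), ("surfB", 50, 2, 6), ("magC", 20, 6, 8)]
print([name for name, *_ in schedule_by_points(obs)])
```

Here the 50-point bid displaces the overlapping 30-point one, illustrating how point totals rather than negotiation resolve the conflict.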

  10. The NIEHS Predictive-Toxicology Evaluation Project.

    PubMed Central

    Bristol, D W; Wachsman, J T; Greenwell, A

    1996-01-01

The Predictive-Toxicology Evaluation (PTE) project conducts collaborative experiments that subject the performance of predictive-toxicology (PT) methods to rigorous, objective evaluation in a uniquely informative manner. Sponsored by the National Institute of Environmental Health Sciences, it takes advantage of the ongoing testing conducted by the U.S. National Toxicology Program (NTP) to estimate the true error of models that have been applied to make prospective predictions on previously untested, noncongeneric-chemical substances. The PTE project first identifies a group of standardized NTP chemical bioassays that are either scheduled to be conducted or ongoing but not yet complete. The project then announces and advertises the evaluation experiment, disseminates information about the chemical bioassays, and encourages researchers from a wide variety of disciplines to publish their predictions in peer-reviewed journals, using whatever approaches and methods they feel are best. A collection of such papers is published in this Environmental Health Perspectives Supplement, providing readers the opportunity to compare and contrast PT approaches and models, within the context of their prospective application to an actual-use situation. This introduction to this collection of papers on predictive toxicology summarizes the predictions made and the final results obtained for the 44 chemical carcinogenesis bioassays of the first PTE experiment (PTE-1) and presents information that identifies the 30 chemical carcinogenesis bioassays of PTE-2, along with a table of prediction sets that have been published to date. It also provides background about the origin and goals of the PTE project, outlines the special challenge associated with estimating the true error of models that aspire to predict open-system behavior, and summarizes what has been learned to date. PMID:8933048

  11. Emergency response nurse scheduling with medical support robot by multi-agent and fuzzy technique.

    PubMed

    Kono, Shinya; Kitamura, Akira

    2015-08-01

In this paper, a new co-operative re-scheduling method is described for medical support tasks whose time of occurrence cannot be predicted, assuming that a robot can co-operate with the nurse on medical activities. Here, a Multi-Agent System (MAS) is used for the co-operative re-scheduling, in which a Fuzzy Contract Net (FCN) is applied to the robots' task assignment for the emergency tasks. The simulation results confirm that the re-scheduling results produced by the proposed method can maintain patient satisfaction and decrease the workload of the nurse.
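A toy sketch of Fuzzy-Contract-Net style task assignment is given below, not the authors' implementation: each robot bids a fuzzy suitability computed from invented "near" and "charged" memberships, and the emergency task is awarded to the highest bid.

```python
# Fuzzy-Contract-Net style bidding sketch (illustrative assumptions only):
# suitability is the fuzzy AND (minimum) of two membership functions.
def fuzzy_suitability(distance_m, battery_pct):
    near = max(0.0, 1.0 - distance_m / 50.0)      # membership "near"
    charged = min(1.0, battery_pct / 80.0)        # membership "charged"
    return min(near, charged)                     # fuzzy AND (minimum)

def award_task(robots):
    """robots: list of (name, distance_m, battery_pct); returns the winner."""
    bids = {name: fuzzy_suitability(d, b) for name, d, b in robots}
    return max(bids, key=bids.get)

robots = [("robot1", 10.0, 40.0), ("robot2", 30.0, 90.0)]
print(award_task(robots))
```

robot1's half-charged battery still outbids robot2's distance penalty, showing how the minimum operator makes the weakest membership decisive.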

  12. Work schedule manager gap analysis : assessing the future training needs of work schedule managers using a strategic job analysis approach.

    DOT National Transportation Integrated Search

    2010-05-01

    This report documents the results of a strategic job analysis that examined the job tasks and knowledge, skills, abilities, and other characteristics (KSAOs) needed to perform the job of a work schedule manager. The strategic job analysis compared in...

  14. A new distributed systems scheduling algorithm: a swarm intelligence approach

    NASA Astrophysics Data System (ADS)

    Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi

    2011-12-01

The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
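The memetic structure can be sketched as a genetic algorithm whose offspring are refined by a local search; here a simple hill-climbing move stands in for the ABC phase, and the task costs, processor count, and load-balancing (makespan) objective are invented for illustration.

```python
import random

random.seed(1)

TASK_COST = [4, 2, 7, 1, 5, 3, 6]   # illustrative task runtimes
N_PROCS = 3

def makespan(assign):
    """assign[i] = processor of task i; makespan is the heaviest load."""
    loads = [0] * N_PROCS
    for task, proc in enumerate(assign):
        loads[proc] += TASK_COST[task]
    return max(loads)

def local_search(assign, steps=30):
    """Stands in for the ABC local-search phase: try relocating one task."""
    best = list(assign)
    for _ in range(steps):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.randrange(N_PROCS)
        if makespan(cand) < makespan(best):
            best = cand
    return best

def memetic_schedule(pop_size=20, generations=40):
    pop = [[random.randrange(N_PROCS) for _ in TASK_COST]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]
        children = [local_search(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=makespan)

best = memetic_schedule()
print(makespan(best))
```

With 28 total units over 3 processors, no schedule can beat a makespan of 10, and the population-plus-local-search combination closes in on that bound quickly.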

  15. A Model and Algorithms For a Software Evolution Control System

    DTIC Science & Technology

    1993-12-01

dynamic scheduling approaches can be found in [67]. Task scheduling can also be characterized as preemptive and nonpreemptive. A task is preemptive ...is NP-hard for both the preemptive and nonpreemptive cases [67] [84]. Scheduling nonpreemptive tasks with arbitrary ready times is NP-hard in both multiprocessor and

  16. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.

    2009-08-01

Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.

  17. Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2011-01-01

Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run-time efficiency as that of the highly efficient non-simulation VM schedulers.
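The time-ordered scheduling idea can be sketched with a min-heap of virtual clocks: the virtual core furthest behind in virtual time always runs next, and its clock then advances by one quantum's worth of virtual time. The two-core setup and per-quantum virtual-time increments below are invented, not the authors' hypervisor scheduler.

```python
import heapq

# Sketch of simulation time-ordered scheduling of virtual cores: always run
# the core with the smallest virtual clock for one quantum, so no core's
# virtual time races ahead of another's. The scheduling granularity is the
# quantum, mirroring the paper's customizable time unit.
def run_time_ordered(vcores, quantum, total_virtual_time):
    # vcores maps core id -> virtual-time units consumed per real quantum.
    heap = [(0.0, cid) for cid in vcores]
    heapq.heapify(heap)
    trace = []
    while True:
        clock, cid = heapq.heappop(heap)
        if clock >= total_virtual_time:
            break
        trace.append((round(clock, 2), cid))
        heapq.heappush(heap, (clock + quantum * vcores[cid], cid))
    return trace

# Two virtual cores: "vm1" burns virtual time twice as fast per quantum,
# so "vm0" is scheduled twice as often to keep the clocks aligned.
trace = run_time_ordered({"vm0": 1.0, "vm1": 2.0}, quantum=1.0,
                         total_virtual_time=4.0)
print(trace)
```

Reading the trace shows the invariant: no core is ever dispatched while another core's virtual clock lags behind it.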

  18. Home Health Care for California's Injured Workers: Options for Implementing a Fee Schedule.

    PubMed

    Wynn, Barbara O; Boustead, Anne

    2015-07-15

The California Department of Industrial Relations/Division of Workers' Compensation asked RAND to provide technical assistance in developing a fee schedule for home health services provided to injured workers. The fee schedule needs to address the full spectrum of home health services, ranging from skilled nursing and therapy services to unskilled personal care or chore services that may be provided by family members. RAND researchers consulted with stakeholders in the California workers' compensation system to outline issues the fee schedule should address, reviewed home health fee schedules used by other payers, and conducted interviews with WC administrators from other jurisdictions to elicit their experiences. California stakeholders identified unskilled attendant services as most problematic in determining need and payment rates, particularly services furnished by family members. RAND researchers concentrated on fee schedule options that would result in a single fee schedule covering the full range of home health care services furnished to injured workers and made three sets of recommendations. The first set pertains to obtaining additional information that would highlight the policy issues likely to occur with the implementation of the fee schedule and alternatives for assessing an injured worker's home health care needs. The second approach conforms most closely with the Labor Code requirements: it would integrate the fee schedules used by Medicare, In-Home Health Supportive Services, and the federal Office of Workers' Compensation. The third approach would base the home health fee schedule on rules used by the federal Office of Workers' Compensation.

  19. Using Information Processing Techniques to Forecast, Schedule, and Deliver Sustainable Energy to Electric Vehicles

    NASA Astrophysics Data System (ADS)

    Pulusani, Praneeth R.

As the number of electric vehicles on the road increases, the current power grid infrastructure will not be able to handle the additional load. Some approaches in the area of Smart Grid research attempt to mitigate this, but those approaches alone will not be sufficient. Combined with the traditional solution of increased power production, they can still leave the power grid insufficient and imbalanced, which can lead to transformer blowouts, blackouts, blown fuses, etc. The proposed solution supplements the "Smart Grid" to create a more sustainable power grid. To solve the problem, or to mitigate its magnitude, measures can be taken that depend on weather forecast models. For instance, wind and solar forecasts can be used to create first-order Markov chain models that help predict the availability of additional power at certain times. These models are used in conjunction with the information processing layer and bidirectional signal processing components of electric vehicle charging systems to schedule the amount of energy transferred per time interval at various times. The research was divided into three distinct components: (1) Renewable Energy Supply Forecast Model, (2) Energy Demand Forecast from PEVs, and (3) Renewable Energy Resource Estimation. For the first component, power data from a local wind turbine and weather forecast data from NOAA were used to develop a wind energy forecast model, using a first-order Markov chain model as the foundation. In the second component, the additional macro energy demand from PEVs in the Greater Rochester Area was forecasted by simulating concurrent driving routes. In the third component, historical data from renewable energy sources was analyzed to estimate the renewable resources needed to offset the energy demand from PEVs. The results from these models and components can be used in smart grid applications for scheduling and delivering energy.
Several solutions are discussed to mitigate the problem of overloading transformers, lack of energy supply, and higher utility costs.
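The first-order Markov chain foundation can be sketched as follows: discretize wind power into states, estimate transition probabilities from a historical state sequence, and read off the next-state distribution. The three states and the short history are illustrative only, not the dissertation's data.

```python
from collections import defaultdict

# First-order Markov chain over discretized wind-power states: transition
# probabilities are the normalized counts of observed state-to-state moves.
def fit_transitions(states):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(states, states[1:]):
        counts[prev][nxt] += 1
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

history = ["low", "low", "mid", "high", "mid", "low", "low", "mid"]
P = fit_transitions(history)
print(P["low"])   # next-state distribution given current state "low"
```

Given the current state, the corresponding row of `P` is the forecast distribution for the next interval, which is exactly what a scheduler needs to anticipate additional available power.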

  20. Appointment standardization evaluation in a primary care facility.

    PubMed

    Huang, Yu-Li

    2016-07-11

Purpose - The purpose of this paper is to evaluate the performance of standardizing appointment slot lengths in a primary care clinic to understand the impact of providers' preferences and practice differences. Design/methodology/approach - Treatment time data were collected for each provider. There were six patient types: emergency/urgent care (ER/UC), follow-up patient (FU), new patient, office visit (OV), physical exam, and well-child care. A simulation model was developed to capture patient flow and measure patient wait time, provider idle time, cost, overtime, finish time, and the number of patients scheduled. Four scheduling scenarios were compared: schedule all patients at 20 minutes; schedule ER/UC, FU, and OV at 20 minutes and others at 40 minutes; schedule patient types on individual provider preference; and schedule patient types on combined provider preference. Findings - Standardized scheduling among providers increases cost by 57 per cent, patient wait time by 83 per cent, provider idle time by five minutes per patient, overtime by 22 minutes, and finish time by 30 minutes, and decreases patient access to care by approximately 11 per cent. An individualized scheduling approach could save as much as 14 per cent on cost and schedule 1.5 more patients. The combined preference method could save about 8 per cent while keeping the number of patients scheduled the same. Research limitations/implications - The challenge is to actually disseminate the findings to medical providers and adjust scheduling systems accordingly. Originality/value - This paper concluded that standardization of providers' clinic preferences and practices negatively impacts clinic service quality and access to care.
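The simulation logic can be sketched with a single-provider toy model in which wait accrues whenever a visit overruns its slot; the treatment times and slot lengths below are invented, not the study's data.

```python
# Toy single-provider clinic: patients arrive exactly at their scheduled
# slot time; wait accrues when the previous treatment overruns its slot.
def total_wait(treatment_times, slot_lengths):
    wait, finish, start = 0.0, 0.0, 0.0
    for treat, slot in zip(treatment_times, slot_lengths):
        begin = max(start, finish)   # later of appointment time, provider free
        wait += begin - start        # time the patient spends waiting
        finish = begin + treat
        start += slot                # next patient's appointment time
    return wait

treatments = [25, 15, 35, 20]        # minutes actually needed per visit
print(total_wait(treatments, [20] * 4))          # one-size-fits-all slots
print(total_wait(treatments, [30, 15, 40, 20]))  # slots matched to practice
```

Matching slot lengths to actual treatment times eliminates the cascading wait that the uniform 20-minute template creates, which is the qualitative effect the paper measures.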

  1. Learning to integrate reactivity and deliberation in uncertain planning and scheduling problems

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

This paper describes an approach to planning and scheduling in uncertain domains. In this approach, a system divides a task on a goal-by-goal basis into reactive and deliberative components. Initially, a task is handled entirely reactively. When failures occur, the system changes the reactive/deliberative goal division by moving goals into the deliberative component. Because our approach attempts to minimize the number of deliberative goals, we call it Minimal Deliberation (MD). Because MD allows goals to be treated reactively, it gains some of the advantages of reactive systems: computational efficiency, the ability to deal with noise and non-deterministic effects, and the ability to take advantage of unforeseen opportunities. However, because MD can fall back upon deliberation, it can also provide some of the guarantees of classical planning, such as the ability to deal with complex goal interactions. This paper describes the Minimal Deliberation approach to integrating reactivity and deliberation and describes an ongoing application of the approach to an uncertain planning and scheduling domain.

  2. Solving a real-world problem using an evolving heuristically driven schedule builder.

    PubMed

    Hart, E; Ross, P; Nelson, J

    1998-01-01

    This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.

  3. The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param

    2016-01-01

    Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.

  4. A Novel Approach of Battery Energy Storage for Improving Value of Wind Power in Deregulated Markets

    NASA Astrophysics Data System (ADS)

    Nguyen, Y. Minh; Yoon, Yong Tae

    2013-06-01

Wind power producers face many regulation costs in a deregulated environment, which remarkably lowers the value of wind power in comparison with conventional sources. One of these costs is associated with the real-time variation of power output and is paid in the frequency control market according to the variation band. In this regard, this paper presents a new approach to the scheduling and operation of battery energy storage installed in a wind generation system. This approach relies on statistical data of wind generation and the prediction of frequency control market prices to determine the optimal charging and discharging of batteries in real time, which ultimately gives the minimum cost of frequency regulation for wind power producers. The optimization problem is formulated as the trade-off between the decrease in regulation payment and the increase in the cost of using battery energy storage. The approach is illustrated in a case study, and the simulation results show its effectiveness.

  5. A Form 990 Schedule H conundrum: how much of your bad debt might be charity?

    PubMed

    Bailey, Shari; Franklin, David; Hearle, Keith

    2010-04-01

    IRS Form 990 Schedule H requires hospitals to estimate the amount of bad debt expense attributable to patients eligible for charity under the hospital's charity care policy. Responses to Schedule H, Part III.A.3 open up the entire patient collection process to examination by the IRS, state officials, and the public. Using predictive analytics can help hospitals efficiently identify charity-eligible patients when answering Part III.A.3.

  6. Friction Stir Process Mapping Methodology

    NASA Technical Reports Server (NTRS)

    Kooney, Alex; Bjorkman, Gerry; Russell, Carolyn; Smelser, Jerry (Technical Monitor)

    2002-01-01

    In FSW (friction stir welding), the weld process performance for a given weld joint configuration and tool setup is summarized on a 2-D plot of RPM vs. IPM. A process envelope is drawn within the map to identify the range of acceptable welds. The sweet spot is selected as the nominal weld schedule. The nominal weld schedule is characterized in the expected manufacturing environment. The nominal weld schedule in conjunction with process control ensures a consistent and predictable weld performance.

  7. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

    Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.

  8. A novel 2-step approach combining the NAFLD fibrosis score and liver stiffness measurement for predicting advanced fibrosis.

    PubMed

    Chan, Wah-Kheong; Nik Mustapha, Nik Raihan; Mahadeva, Sanjiv

    2015-10-01

The non-alcoholic fatty liver disease (NAFLD) fibrosis score (NFS) is indeterminate in a proportion of NAFLD patients. Combining the NFS with liver stiffness measurement (LSM) may improve prediction of advanced fibrosis. We aimed to evaluate the NFS and LSM in predicting advanced fibrosis in NAFLD patients. The NFS was calculated and LSM obtained for consecutive adult NAFLD patients scheduled for liver biopsy. The accuracy of predicting advanced fibrosis using either modality and in combination was assessed. An algorithm combining the NFS and LSM was developed from a training cohort and subsequently tested in a validation cohort. There were 101 and 46 patients in the training and validation cohort, respectively. In the training cohort, the percentages of misclassifications using the NFS alone, LSM alone, LSM alone (with grey zone), both tests for all patients and a 2-step approach using LSM only for patients with indeterminate and high NFS were 5.0, 28.7, 2.0, 2.0 and 4.0 %, respectively. The percentages of patients requiring liver biopsy were 30.7, 0, 36.6, 36.6 and 18.8 %, respectively. In the validation cohort, the percentages of misclassifications were 8.7, 28.3, 2.2, 2.2 and 8.7 %, respectively. The percentages of patients requiring liver biopsy were 28.3, 0, 41.3, 43.5 and 19.6 %, respectively. The novel 2-step approach further reduced the number of patients requiring a liver biopsy whilst maintaining the accuracy to predict advanced fibrosis. The combination of NFS and LSM for all patients provided no apparent advantage over using either of the tests alone.
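The 2-step approach can be sketched as a simple triage rule; the low NFS cut-off below is the commonly cited -1.455, while the LSM cut-offs and grey zone are illustrative assumptions, not the study's values.

```python
# Sketch of the 2-step triage: consult the NFS first, fall back on liver
# stiffness measurement (LSM) only when the NFS is indeterminate or high,
# and reserve biopsy for the LSM grey zone. Thresholds other than the
# standard NFS low cut-off (-1.455) are illustrative assumptions.
def two_step_triage(nfs, lsm_kpa):
    if nfs < -1.455:
        return "advanced fibrosis unlikely"
    # Indeterminate or high NFS: consult the LSM.
    if lsm_kpa < 7.9:
        return "advanced fibrosis unlikely"
    if lsm_kpa >= 9.6:
        return "advanced fibrosis likely"
    return "liver biopsy"                 # LSM grey zone

print(two_step_triage(nfs=-2.0, lsm_kpa=12.0))
print(two_step_triage(nfs=0.3, lsm_kpa=8.5))
```

Because a low NFS short-circuits the rule, only patients with indeterminate or high scores ever need the LSM, which is how the 2-step design reduces the number of biopsies.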

  9. 75 FR 75961 - Notice of Implementation of the Wind Erosion Prediction System for Soil Erodibility System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...

  10. Dynamic Appliances Scheduling in Collaborative MicroGrids System

    PubMed Central

    Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid

    2017-01-01

    In this paper, a new approach based on a collaborative system of MicroGrids (MGs) is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DLs), according to their electrical components. We propose a dynamic scheduling algorithm in which users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DLs. In addition, implementation of the proposed algorithm requires dynamically solving two successive multi-objective optimization (MOO) problems. The first targets the activation schedule of non-flexible DLs and the second deals with the power profiles of flexible DLs. The MOO problems are solved using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study of a collaborative system consisting of 40 MGs registered in the load-curve flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
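    The load-curve flattening objective above can be illustrated with a much simpler greedy stand-in for the paper's NSGA-II optimization: place each deferrable load at the start slot that minimizes the variance of the aggregate load curve. The demand figures are invented for the sketch.

```python
# Minimal greedy sketch of load-curve flattening (a stand-in for NSGA-II):
# each deferrable load is placed at the start slot that leaves the aggregate
# curve with the lowest variance, i.e. as flat as possible.

def variance(curve):
    m = sum(curve) / len(curve)
    return sum((x - m) ** 2 for x in curve) / len(curve)

def schedule_deferrable(base, profile):
    """Choose the start slot for one load `profile` over the base curve."""
    n, k = len(base), len(profile)
    best_start, best_var = 0, float("inf")
    for s in range(n - k + 1):
        trial = base[:]
        for i, p in enumerate(profile):
            trial[s + i] += p          # tentatively add the load's power draw
        v = variance(trial)
        if v < best_var:
            best_start, best_var = s, v
    return best_start

base = [5, 1, 1, 5]                    # invented aggregate demand with a midday valley
start = schedule_deferrable(base, [2, 2])   # the 2-slot load should fill the valley
```

    The greedy version handles one load at a time; the paper's MOO formulation instead trades off all loads and user constraints simultaneously.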

  11. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

    This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients, found optimal flap settings, and derived flap schedules. For validation, the tool was tested on a 55% scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict coefficients of lift, drag, pitching moment, and the lift-to-drag ratio (C(sub L), C(sub D), C(sub M), and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and to find optimal flap schedules.

  12. Sustained immunogenicity of the HPV-16/18 AS04-adjuvanted vaccine administered as a two-dose schedule in adolescent girls: Five-year clinical data and modeling predictions from a randomized study

    PubMed Central

    Romanowski, Barbara; Schwarz, Tino F; Ferguson, Linda; Peters, Klaus; Dionne, Marc; Behre, Ulrich; Schulze, Karin; Hillemanns, Peter; Suryakiran, Pemmaraju; Thomas, Florence; Struyf, Frank

    2016-01-01

    In this randomized, partially-blind study (clinicaltrials.gov; NCT00541970), the licensed formulation of the human papillomavirus (HPV)-16/18 AS04-adjuvanted vaccine (20 μg each of HPV-16/18 antigens) was found highly immunogenic up to 4 y after first vaccination, whether administered as a 2-dose (2D) schedule in girls 9–14 y or a 3-dose (3D) schedule in women 15–25 y. This end-of-study analysis extends immunogenicity and safety data until Month (M) 60, and presents antibody persistence predictions estimated by piecewise and modified power-law models. Healthy females (age stratified: 9–14, 15–19, 20–25 y) were randomized to receive 2D at M0,6 (N = 240) or 3D at M0,1,6 (N = 239). Here, results are reported for girls 9–14 y (2D) and women 15–25 y (3D). Seropositivity rates, geometric mean titers (by enzyme-linked immunosorbent assay) and geometric mean titer ratios (GMRs; 3D/2D; post-hoc exploratory analysis) were calculated. All subjects seronegative pre-vaccination in the according-to-protocol immunogenicity cohort were seropositive for anti-HPV-16 and −18 at M60. Antibody responses elicited by the 2D and 3D schedules were comparable at M60, with GMRs close to 1 (anti-HPV-16: 1.13 [95% confidence interval: 0.82–1.54]; anti-HPV-18: 1.06 [0.74–1.51]). Statistical modeling predicted that in 95% of subjects, antibodies induced by the 2D and 3D schedules could persist above natural infection levels for ≥ 21 y post-vaccination. The vaccine had a clinically acceptable safety profile in both groups. In conclusion, a 2D M0,6 schedule of the HPV-16/18 AS04-adjuvanted vaccine was immunogenic for up to 5 y in 9–14 y-old girls. Statistical modeling predicted that 2D-induced antibodies could persist for longer than 20 y. PMID:26176261

  13. Discrimination Training Reduces High Rate Social Approach Behaviors in Angelman Syndrome: Proof of Principle

    ERIC Educational Resources Information Center

    Heald, M.; Allen, D.; Villa, D.; Oliver, C.

    2013-01-01

    This proof of principle study was designed to evaluate whether excessively high rates of social approach behaviors in children with Angelman syndrome (AS) can be modified using a multiple schedule design. Four children with AS were exposed to a multiple schedule arrangement, in which social reinforcement and extinction, cued using a novel…

  14. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. Compared to previous work, these experiments employed refined schedule forms and analytical methods to improve the precision of the measured transfer function. The refinements include the use of multiple reinforcement periods, which improves spectral coverage, and the averaging of independently determined transfer functions. A linear analysis was then used to predict the behavior observed for three different test schedules, and the fidelity of these predictions was determined. PMID:11831782

  15. Mass Uncertainty and Application For Space Systems

    NASA Technical Reports Server (NTRS)

    Beech, Geoffrey

    2013-01-01

    Expected development maturity under contract (spec) should correlate with the Project/Program Approved MGA Depletion Schedule in the Mass Properties Control Plan. If the specification is an NTE, MGA is inclusive of Actual MGA (A5 & A6). If the specification is not an NTE (e.g., nominal), then MGA values are reduced by A5 values and A5 represents the remaining uncertainty. Basic Mass = engineering estimate based on design and construction principles with no embedded margin. MGA Mass = Basic Mass * assessed % from the approved MGA schedule. Predicted Mass = Basic + MGA. Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic.
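    The mass bookkeeping quoted above reduces to simple arithmetic; the sketch below works one hypothetical example (the 15% growth allowance and the 1000 kg basic mass are assumed for illustration, not from the source).

```python
# Worked example of the mass-growth-allowance (MGA) arithmetic above.
# Both input numbers are invented for illustration.
basic_mass = 1000.0      # kg, engineering estimate with NO embedded margin
mga_fraction = 0.15      # assessed % from an approved MGA depletion schedule

mga_mass = basic_mass * mga_fraction          # MGA Mass = Basic * assessed %
predicted_mass = basic_mass + mga_mass        # Predicted Mass = Basic + MGA

# Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic
aggregate_mga_pct = (predicted_mass - basic_mass) / basic_mass
```

    Note that the aggregate percentage recovers the assessed fraction exactly when a single line item is involved; across many subsystems it becomes the basic-mass-weighted average of their individual allowances.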

  16. Contribution of Schedule Delays to Cost Growth: How to Make Peace with a Marching Army

    NASA Technical Reports Server (NTRS)

    Majerowicz, Walt; Bitten, Robert; Emmons, Debra; Shinn, Stephen A.

    2016-01-01

    Numerous research papers have shown that cost and schedule growth are interrelated for NASA space science missions. Although a strong correlation between cost growth and schedule growth has been shown, it is unclear what percentage of cost growth is caused by schedule growth and how schedule growth can be controlled. This paper attempts to quantify this percentage by examining historical data and shows detailed examples of how schedule growth influences cost growth. The paper also presents an alternate approach for assessing and setting a robust baseline schedule and for using schedule performance metrics to assess whether a project is performing to plan. Finally, recommendations are presented to help control schedule growth in order to minimize cost growth for NASA space science missions.

  17. A random-key encoded harmony search approach for energy-efficient production scheduling with shared resources

    NASA Astrophysics Data System (ADS)

    Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.

    2015-11-01

    When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random-keys encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and a first-come-first-served scheduler. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.

  18. An improved robust buffer allocation method for the project scheduling problem

    NASA Astrophysics Data System (ADS)

    Ghoddousi, Parviz; Ansari, Ramin; Makui, Ahmad

    2017-04-01

    Unpredictable uncertainties cause delays and additional costs for projects. Often, when using traditional approaches, the optimizing procedure of the baseline project plan fails and leads to delays. In this study, a two-stage multi-objective buffer allocation approach is applied for robust project scheduling. In the first stage, some decisions are made on buffer sizes and allocation to the project activities. A set of Pareto-optimal robust schedules is designed using the meta-heuristic non-dominated sorting genetic algorithm (NSGA-II) based on the decisions made in the buffer allocation step. In the second stage, the Pareto solutions are evaluated in terms of the deviation from the initial start time and due dates. The proposed approach was implemented on a real dam construction project. The outcomes indicated that the obtained buffered schedule reduces the cost of disruptions by 17.7% compared with the baseline plan, with an increase of about 0.3% in the project completion time.

  19. An approach to rescheduling activities based on determination of priority and disruptivity

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.; Johnston, Mark D.

    1990-01-01

    A constraint-based scheduling system called SPIKE is being used to create long-term schedules for the Hubble Space Telescope. Feedback from the spacecraft or from other ground support systems may invalidate some scheduling decisions, and the activities concerned must be reconsidered. A rescheduling-priority function is defined that, for a given activity, performs a heuristic analysis and produces a relative numerical value used to rank all such activities in the order in which they should be rescheduled. A disruptivity function is also defined that places a relative numeric value on how much a pre-existing schedule would have to be changed in order to reschedule an activity. Using these functions, two algorithms (a stochastic neural network approach and an exhaustive search approach) are proposed to find the best place to reschedule an activity. Prototypes were implemented, and preliminary testing reveals that the exhaustive technique produces only marginally better results at much greater computational cost.
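    The exhaustive technique can be sketched as follows: rank the invalidated activities by rescheduling priority, then scan every candidate slot and take the one with the lowest disruptivity. The activity names and scoring functions here are hypothetical stand-ins for SPIKE's heuristics, shown only to make the control flow concrete.

```python
# Toy sketch of priority-ordered exhaustive rescheduling. The priority and
# disruptivity callables stand in for SPIKE's heuristic analyses.
def reschedule(activities, slots, priority, disruptivity):
    """Assign each invalidated activity, highest priority first, to the
    candidate slot that disrupts the existing schedule the least."""
    plan = {}
    for act in sorted(activities, key=priority, reverse=True):
        plan[act] = min(slots, key=lambda s: disruptivity(act, s))
    return plan

plan = reschedule(
    activities=["obs_a", "obs_b"],                     # hypothetical activities
    slots=[0, 1, 2],
    priority=lambda a: {"obs_a": 2, "obs_b": 5}[a],    # assumed priority scores
    disruptivity=lambda a, s: abs(s - 1),              # slot 1 disrupts least
)
```

    The stochastic neural-network alternative mentioned in the abstract would sample candidate slots instead of scanning all of them, trading solution quality for compute, which matches the reported finding that exhaustive search is only marginally better but much costlier.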

  20. Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.

    ERIC Educational Resources Information Center

    Hart, Roland J.; Goehring, Dwight J.

    This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…

  1. The Isolation of Motivational, Motoric, and Schedule Effects on Operant Performance: A Modeling Approach

    ERIC Educational Resources Information Center

    Brackney, Ryan J.; Cheung, Timothy H. C.; Neisewander, Janet L.; Sanabria, Federico

    2011-01-01

    Dissociating motoric and motivational effects of pharmacological manipulations on operant behavior is a substantial challenge. To address this problem, we applied a response-bout analysis to data from rats trained to lever press for sucrose on variable-interval (VI) schedules of reinforcement. Motoric, motivational, and schedule factors (effort…

  2. Deep Space Network Scheduling Using Evolutionary Computational Methods

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.

    2007-01-01

    The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-off of the scheduling problem and provide insight into a conflict resolution process.

  3. A Network Flow Approach to the Initial Skills Training Scheduling Problem

    DTIC Science & Technology

    2007-12-01

    include (but are not limited to) queuing theory, stochastic analysis and simulation. After the demand schedule has been estimated, it can be ...software package has already been purchased and is in use by AFPC, AFPC has requested that the new algorithm be programmed in this language as well ...the discussed outputs from those schedules. Required Inputs A single input file details the students to be scheduled as well as the courses

  4. A prediction model to forecast the cost impact from a break in the production schedule

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1977-01-01

    The losses experienced after a break or stoppage in the sequence of a production cycle present an extremely complex situation involving numerous variables, some of uncertain quantity and quality. There are no discrete formulas to define the losses during a gap in production. The techniques employed are therefore related to a prediction or forecast of the losses that take place, based on the conditions that exist in the production environment. Parameters such as learning curve slope, number of predecessor units, and the length of time the production sequence is halted are used in formulating a prediction model. The pertinent current publications on this subject are few in number, but they are reviewed to provide an understanding of the problem. Example problems are illustrated together with appropriate trend curves to show the approach. Solved problems are also given to show the application of the models to actual cases of production breaks in the real world.

  5. Selective Attention in Pigeon Temporal Discrimination.

    PubMed

    Subramaniam, Shrinidhi; Kyonka, Elizabeth

    2017-07-27

    Cues can vary in how informative they are about when specific outcomes, such as food availability, will occur. This study was an experimental investigation of the functional relation between cue informativeness and temporal discrimination in a peak-interval (PI) procedure. Each session consisted of fixed-interval (FI) 2-s and 4-s schedules of food and occasional 12-s PI trials during which pecks had no programmed consequences. Across conditions, the phi (ϕ) correlation between key light color and FI schedule value was manipulated. Red and green key lights signaled the onset of either or both FI schedules. Different colors were either predictive (ϕ = 1), moderately predictive (ϕ = 0.2-0.8), or not predictive (ϕ = 0) of a specific FI schedule. This study tested the hypothesis that temporal discrimination is a function of the momentary conditional probability of food; that is, pigeons peck the most at either 2 s or 4 s when ϕ = 1 and peck at both intervals when ϕ < 1. Response distributions were bimodal Gaussian curves; distributions from red- and green-key PI trials converged when ϕ ≤ 0.6. Peak times estimated by summed Gaussian functions, averaged across conditions and pigeons, were 1.85 s and 3.87 s; however, pigeons did not always maximize the momentary probability of food. When key light color was highly correlated with FI schedules (ϕ ≥ 0.6), estimates of peak times indicated that temporal discrimination accuracy was reduced at the unlikely interval, but not the likely interval. The mechanism of this reduced temporal discrimination accuracy could be interpreted as an attentional process.

  6. How do current irrigation practices perform? Evaluation of different irrigation scheduling approaches based on experiments and crop model simulations

    NASA Astrophysics Data System (ADS)

    Seidel, Sabine J.; Werisch, Stefan; Barfus, Klemens; Wagner, Michael; Schütze, Niels; Laber, Hermann

    2014-05-01

    The increasing worldwide water scarcity, costs, and negative off-site effects of irrigation are leading to the necessity of developing methods of irrigation that increase water productivity. Various approaches are available for irrigation scheduling. Traditionally, schedules are calculated based on soil water balance (SWB) calculations using some measure of reference evaporation and empirical crop coefficients. These crop-specific coefficients are provided by the FAO but are also available for different regions (e.g., Germany). The approach is simple, but there are several inaccuracies due to simplifications and limitations such as poor transferability. Crop growth models, which simulate the main physiological plant processes through a set of assumptions and calibration parameters, are widely used to support decision making, but also for yield gap or scenario analyses. One major advantage of mechanistic models compared to empirical approaches is their spatial and temporal transferability. Irrigation scheduling can also be based on measurements of soil water tension, which is closely related to plant stress. Such measurements are precise, easy, and can be automated, but they face the difficulty of choosing where to probe, especially in heterogeneous soils. In this study, a two-year field experiment was used to extensively evaluate the three mentioned irrigation scheduling approaches regarding their efficiency in irrigation water application, with the aim of promoting better agronomic practices in irrigated horticulture. To evaluate the tested irrigation scheduling approaches, an extensive plant and soil water data collection was used to precisely calibrate the mechanistic crop model Daisy. The experiment was conducted with white cabbage (Brassica oleracea L.) on a sandy loamy field in 2012/13 near Dresden, Germany. 
Hereby, three irrigation scheduling approaches were tested: (i) two schedules were estimated based on SWB calculations using different crop coefficients, (ii) one treatment was automatically drip irrigated using tensiometers (irrigation of 15 mm at a soil tension of -250 hPa at 30 cm soil depth), and (iii) one irrigation schedule was estimated (using the same criteria as in the tension-based treatment) by applying the model Daisy, partially calibrated against data from 2012. Moreover, one control treatment was minimally irrigated. Measured yield was highest for the tension-based treatment, with a low irrigation water input (8.5 DM t/ha, 120 mm). Both SWB treatments showed lower yields and higher irrigation water input (both 8.3 DM t/ha, 306 and 410 mm). The simulation-model-based treatment yielded less (7.5 DM t/ha, 106 mm), mainly due to drought stress caused by inaccurate simulation of the soil water dynamics and thus an overestimation of the soil moisture. The evaluation using the calibrated model estimated heavy deep percolation under both SWB treatments. Targeting the challenge to increase water productivity, soil water tension-based irrigation should be favoured. Irrigation scheduling based on SWB calculation requires accurate estimates of crop coefficients. A robust calibration of mechanistic crop models implies a high effort and can be recommended to farmers only to some extent, but it enables comprehensive crop growth and site analyses.
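    The tension-based treatment amounts to a simple trigger rule: apply a fixed 15 mm dose whenever soil water tension at 30 cm depth reaches -250 hPa (drier soil gives a more negative tension). A minimal sketch with an invented series of tension readings:

```python
# Sketch of the tensiometer-triggered drip irrigation rule from the abstract:
# 15 mm is applied each time tension at 30 cm depth drops to -250 hPa or below.
# The daily tension readings below are invented for illustration.
TRIGGER_HPA = -250.0   # trigger threshold from the abstract
DOSE_MM = 15.0         # fixed dose per irrigation event, from the abstract

def total_irrigation(tensions_hpa):
    """Total water applied (mm) over a series of daily tension readings."""
    return sum(DOSE_MM for t in tensions_hpa if t <= TRIGGER_HPA)

# Two of the four invented readings are at or below -250 hPa -> two 15 mm doses.
total = total_irrigation([-120.0, -260.0, -180.0, -300.0])
```

    Because irrigation is tied directly to measured plant-available water rather than a computed water balance, this rule needs no crop coefficients, which is consistent with the abstract's finding that the tension-based treatment achieved the highest yield with the least water.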

  7. DNA targeting of rhinal cortex D2 receptor protein reversibly blocks learning of cues that predict reward.

    PubMed

    Liu, Zheng; Richmond, Barry J; Murray, Elisabeth A; Saunders, Richard C; Steenrod, Sara; Stubblefield, Barbara K; Montague, Deidra M; Ginns, Edward I

    2004-08-17

    When schedules of several operant trials must be successfully completed to obtain a reward, monkeys quickly learn to adjust their behavioral performance by using visual cues that signal how many trials have been completed and how many remain in the current schedule. Bilateral rhinal (perirhinal and entorhinal) cortex ablations irreversibly prevent this learning. Here, we apply a recombinant DNA technique to investigate the role of dopamine D2 receptor in rhinal cortex for this type of learning. Rhinal cortex was injected with a DNA construct that significantly decreased D2 receptor ligand binding and temporarily produced the same profound learning deficit seen after ablation. However, unlike after ablation, the D2 receptor-targeted, DNA-treated monkeys recovered cue-related learning after 11-19 weeks. Injecting a DNA construct that decreased N-methyl-d-aspartate but not D2 receptor ligand binding did not interfere with learning associations between the cues and the schedules. A second D2 receptor-targeted DNA treatment administered after either recovery from a first D2 receptor-targeted DNA treatment (one monkey), after N-methyl-d-aspartate receptor-targeted DNA treatment (two monkeys), or after a vector control treatment (one monkey) also induced a learning deficit of similar duration. These results suggest that the D2 receptor in primate rhinal cortex is essential for learning to relate the visual cues to the schedules. The specificity of the receptor manipulation reported here suggests that this approach could be generalized in this or other brain pathways to relate molecular mechanisms to cognitive functions.

  9. The Value of Weather Forecast in Irrigation

    NASA Astrophysics Data System (ADS)

    Cai, X.; Wang, D.

    2007-12-01

    This paper studies irrigation scheduling (when and how much water to apply during the crop growth season) in the Havana Lowlands region, Illinois, using meteorological, agronomic, and agricultural production data from 2002. In this study, a hydrologic-agronomic simulation is coupled with an optimization algorithm to search for the optimal irrigation schedule under various weather forecast horizons. The economic profit of irrigated corn from an optimized schedule is compared to that from the actual schedule, which is adopted from a previous study. Extended and reliable climate prediction and weather forecasts are found to be significantly valuable. If a weather forecast horizon is long enough to include the critical crop growth stage, in which crop yield bears the maximum loss over all stages, much economic loss can be avoided. Climate predictions of one to two months, which can cover the critical period, might be even more beneficial during a dry year. The other purpose of this paper is to analyze farmers' behavior in irrigation scheduling by comparing the "actual" schedule to the "optimized" ones. The ultimate goal of irrigation schedule optimization is to provide information to farmers so that they may modify their behavior. In practice, farmers' decisions may not follow an optimal irrigation schedule due to the impact of various factors such as natural conditions, policies, farmers' habits and empirical knowledge, and the uncertain or inexact information that they receive. This study finds that the identification of the crop growth stage with the most severe water stress is critical for irrigation scheduling. For the case study site in 2002, farmers' response to water stress was found to be late; they did not even respond appropriately to a major rainfall just 3 days ahead, which might be due to either an unreliable weather forecast or farmers' disregard of the forecast.

  10. Impact of Scheduled Attrition Rates on Meeting Monthly Sortie Goals in United States Air Force Bomb Wings

    DTIC Science & Technology

    2009-06-12

    predict future losses in the monthly flying schedules. The purpose of the attrition is to ensure that units meet their sortie contract consistently. In...an era of decreasing force size, it is important for units to maximize aircrew training operations, without wasting manpower and resources. Thus, the ...primary research question is as follows: Is the current USAF scheduling technique of using a 5-year historical attrition rate an effective way to

  11. Predicting Schedule Duration for Defense Acquisition Programs: Program Initiation to Initial Operational Capability

    DTIC Science & Technology

    2016-03-24

    Corporation found that increases in schedule effort tend to be the reason for increases in the cost of acquiring a new weapons system due to, at a minimum...in-depth finance and schedule data for selected programs (Brown et al., 2015). We also give extra focus on Research Development Test & Evaluation...we create and employ an entirely new database. The database we utilize for our research is a database originally built by the RAND Corporation for

  12. Reposturing the Force: U.S. Overseas Presence in the Twenty-first Century

    DTIC Science & Technology

    2006-02-01

    provide predictability in scheduling , and offer more stability at home. Returning forces meet the services’ need to refit their units for increased...focused on troops scheduled to be pulled back to U.S. bases or consolidated in other locales. There re- mains a need for allies, particularly in Asia, to...relieve political pressure on the U.S.-Japan alliance, Guam is already scheduled to receive seven thousand Marines slated to be moved from Okinawa;96 it

  13. A Human-Centered Smart Home System with Wearable-Sensor Behavior Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Jianting; Liu, Ting; Shen, Chao

    Smart home has recently attracted much research interest owing to its potential in improving the quality of human life. Obtaining the user's demand is the most important and challenging task for optimal appliance scheduling in a smart home, since it is highly related to the user's unpredictable behavior. In this paper, a human-centered smart home system is proposed to identify user behavior, predict user demand and schedule the household appliances. Firstly, the sensor data from the user's wearable devices are monitored to profile the user's full-day behavior. Then, the appliance-demand matrix is constructed to predict the user's demand on the home environment, which is extracted from the history of appliance load data and user behavior. Two simulations are designed to demonstrate user behavior identification, appliance-demand matrix construction and the strategy of appliance optimal scheduling generation.
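    The appliance-demand matrix can be pictured as a table whose rows are recognized behaviors (from the wearable sensors) and whose columns are appliances, with each entry giving how strongly that behavior implies demand for the appliance. The behavior names and probabilities below are invented for illustration; the paper derives them from appliance load history and observed behavior.

```python
# Toy appliance-demand matrix: behavior -> {appliance: demand probability}.
# All names and numbers are hypothetical placeholders.
demand_matrix = {
    "cooking":  {"oven": 0.9, "tv": 0.1},
    "sleeping": {"oven": 0.0, "tv": 0.05},
}

def predicted_appliances(behavior, threshold=0.5):
    """Appliances to schedule on when the given behavior is recognized."""
    return [a for a, p in demand_matrix[behavior].items() if p >= threshold]
```

    A scheduler would then switch on `predicted_appliances("cooking")` once the wearable-sensor classifier reports cooking, turning behavior identification into appliance control.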

  14. Mission scheduling

    NASA Technical Reports Server (NTRS)

    Gaspin, Christine

    1989-01-01

    How a neural network can work, compared to a hybrid system based on an operations research and artificial intelligence approach, is investigated through a mission scheduling problem. The characteristic features of each system are discussed.

  15. Deterioration, death and the evolution of reproductive restraint in late life.

    PubMed

    McNamara, John M; Houston, Alasdair I; Barta, Zoltan; Scheuerlein, Alexander; Fromhage, Lutz

    2009-11-22

    Explaining why organisms schedule reproduction over their lifetimes in the various ways that they do is an enduring challenge in biology. An influential theoretical prediction states that organisms should increasingly invest in reproduction as they approach the end of their life. An apparent mismatch of empirical data with this prediction has been attributed to age-related constraints on the ability to reproduce. Here we present a general framework for the evolution of age-related reproductive trajectories. Instead of characterizing an organism by its age, we characterize it by its physiological condition. We develop a common currency that if maximized at each time guarantees the whole life history is optimal. This currency integrates reproduction, mortality and changes in condition. We predict that under broad conditions it will be optimal for organisms to invest less in reproduction as they age, thus challenging traditional interpretations of age-related traits and renewing debate about the extent to which observed life histories are shaped by constraint versus adaptation. Our analysis gives a striking illustration of the differences between an age-based and a condition-based approach to life-history theory. It also provides a unified account of not only standard life-history models but of related models involving the allocation of limited resources.

  16. A short-term operating room surgery scheduling problem integrating multiple nurses roster constraints.

    PubMed

    Xiang, Wei; Yin, Jiao; Lim, Gino

    2015-02-01

    Operating room (OR) surgery scheduling determines each surgery's operation start time and assigns the required resources to each surgery over a schedule period, considering several constraints related to a complete surgery flow and the multiple resources involved. This task plays a decisive role in providing timely treatments for patients while balancing hospital resource utilization. The originality of the present study is to integrate the surgery scheduling problem with real-life nurse roster constraints such as the nurses' roles, specialties, qualifications and availability. This article proposes a mathematical model and an ant colony optimization (ACO) approach to efficiently solve such surgery scheduling problems. Because of the problem's computational complexity, a modified ACO algorithm with a two-level ant graph model is developed: the outer ant graph represents surgeries, while the inner graph is a dynamic resource graph. Three types of pheromone fitting the two-level model are defined: sequence-related, surgery-related, and resource-related. The iteration-best and feasible update strategy and local pheromone update rules are adopted to emphasize information related to good solutions in makespan as well as balanced utilization of resources. The performance of the proposed ACO algorithm is evaluated using test cases from (1) published literature data with complete nurse roster constraints, and (2) real data collected from a hospital in China. The scheduling results using the proposed ACO approach are compared with both the literature test case and the real-life hospital schedule. Comparison with the literature shows that the proposed ACO approach achieves (1) a 1.5-h reduction in end time; (2) a reduction in the variation of resources' working time, i.e. 25% for ORs, 50% for nurses in shift 1 and 86% for nurses in shift 2; (3) a 0.25-h reduction in individual maximum overtime (OT); and (4) a 42% reduction in the total OT of nurses. Comparison with the real 10-workday hospital schedule further shows the advantage of the ACO approach in several measurements. Instead of assigning all of a surgeon's surgeries to a single OR and the same nurses, as in the hospital's traditional manual approach, the ACO approach produces a more balanced arrangement by assigning surgeries to different ORs and nurses. It shortens the end time by between 7.4% and 24.6% at the 95% confidence level. The ACO approach proposed in this paper efficiently solves the surgery scheduling problem with daily nurse rosters while providing a shortened end time and relatively balanced resource allocations. It also supports the advantage of integrating surgery scheduling with nurse scheduling and the efficiency of systematic optimization considering a complete three-stage surgery flow and the resources involved. Copyright © 2014 Elsevier B.V. All rights reserved.
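    The two-level ant graph and its three pheromone types are beyond a short sketch, but the iteration-best pheromone update the abstract mentions follows a standard ACO pattern: evaporate everywhere, then reinforce the edges of the iteration's best feasible schedule. The sketch below is a generic single-table version with invented parameters and surgery-sequence edges; it is not the authors' implementation.

```python
def update_pheromone(tau, best_tour, best_cost, rho=0.1, tau_min=0.01, tau_max=10.0):
    """Iteration-best pheromone update: evaporate on every edge, then
    deposit along the edges of the best (shortest-makespan) tour found
    this iteration. Parameters rho, tau_min and tau_max are illustrative."""
    for edge in tau:
        tau[edge] = max(tau_min, (1.0 - rho) * tau[edge])
    deposit = 1.0 / best_cost  # better (cheaper) schedules deposit more pheromone
    for edge in zip(best_tour, best_tour[1:]):
        tau[edge] = min(tau_max, tau[edge] + rho * deposit)
    return tau

# Hypothetical surgery-sequence edges with uniform initial pheromone.
tau = {("s1", "s2"): 1.0, ("s2", "s3"): 1.0, ("s1", "s3"): 1.0}
tau = update_pheromone(tau, ["s1", "s2", "s3"], best_cost=8.0)
print(tau)
```

    Edges on the best tour end up with more pheromone than off-tour edges, biasing the next iteration's ants toward good surgery sequences.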

  17. Dynamic Scheduling for Veterans Health Administration Patients using Geospatial Dynamic Overbooking.

    PubMed

    Adams, Stephen; Scherer, William T; White, K Preston; Payne, Jason; Hernandez, Oved; Gerber, Mathew S; Whitehead, N Peter

    2017-10-12

    The Veterans Health Administration (VHA) is plagued by abnormally high no-show and cancellation rates that reduce the productivity and efficiency of its medical outpatient clinics. We address this issue by developing a dynamic scheduling system that utilizes mobile computing via geo-location data to estimate the likelihood of a patient arriving on time for a scheduled appointment. These likelihoods are used to update the clinic's schedule in real time. When a patient's arrival probability falls below a given threshold, the patient's appointment is canceled and immediately reassigned to another patient drawn from a pool of patients who are actively seeking an appointment. The replacement patients are prioritized by their arrival probability. Real-world data were not available for this study, so synthetic patient data were generated to test the feasibility of the design; the method for predicting the arrival probability was verified on a real set of taxicab data. This study demonstrates that dynamic scheduling using geo-location data can reduce the number of unused appointments with minimal risk of double booking resulting from incorrect predictions. We acknowledge that there could be privacy concerns with regard to government possession of one's location and offer strategies for alleviating these concerns in our conclusion.
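    A minimal sketch of the threshold rule described above: cancel any appointment whose predicted arrival probability has dropped below a cutoff and backfill the slot from a standby pool, highest probability first. Patients, slots, probabilities and the threshold are all made up; the actual VHA design's prediction and rebooking logic is far richer.

```python
def update_schedule(appointments, waitlist, threshold=0.3):
    """Cancel appointments whose arrival probability fell below `threshold`
    and backfill each freed slot from the waitlist, likeliest arrival first.
    `appointments` maps slot -> (patient, arrival_probability)."""
    pool = sorted(waitlist, key=lambda p: -p[1])  # (patient, probability) pairs
    for slot, (patient, prob) in list(appointments.items()):
        if prob < threshold and pool:
            appointments[slot] = pool.pop(0)  # reassign to likeliest arrival
    return appointments

schedule = {"09:00": ("A", 0.9), "09:30": ("B", 0.1)}
standby = [("C", 0.5), ("D", 0.8)]
print(update_schedule(schedule, standby))  # 09:30 is reassigned from B to D
```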

  18. An Evaluation of a Three-Component Multiple Schedule to Indicate Attention Availability

    ERIC Educational Resources Information Center

    Nava, Maria J.; Vargo, Kristina K.; Babino, Misti M.

    2016-01-01

    Students may engage in high rates of social approach responses at inappropriate times throughout the school day. One intervention that has been used to teach students appropriate and inappropriate times to access attention is a multiple schedule of reinforcement. In this study, we evaluated the efficacy of a multiple schedule that indicated when…

  19. A Comparison of the DISASTER (Trademark) Scheduling Software with a Simultaneous Scheduling Algorithm for Minimizing Maximum Tardiness in Job Shops

    DTIC Science & Technology

    1993-09-01

    goal (Heizer, Render, and Stair, 1993:94). Integer Programming. Integer programming is a general purpose approach used to optimally solve job shop...Scheduling," Operations Research Journal, 29, No. 4: 646-667 (July-August 1981). Heizer, Jay, Barry Render and Ralph M. Stair, Jr. Production and Operations

  20. Planning and Scheduling of Software Manufacturing Projects

    DTIC Science & Technology

    1991-03-01

    based on previous results in social analysis of computing, operations research in manufacturing, artificial intelligence in manufacturing planning and scheduling, and the traditional approaches to planning in artificial intelligence, and extends the techniques that have been developed by them...

  1. Task Scheduling in Desktop Grids: Open Problems

    NASA Astrophysics Data System (ADS)

    Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny

    2017-12-01

    We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.

  2. Scheduling Software for Complex Scenarios

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Preparing a vehicle and its payload for a single launch is a complex process that involves thousands of operations. Because the equipment and facilities required to carry out these operations are extremely expensive and limited in number, optimal assignment and efficient use are critically important. Overlapping missions that compete for the same resources, ground rules, safety requirements, and the unique needs of processing vehicles and payloads destined for space impose numerous constraints that, when combined, require advanced scheduling. Traditional scheduling systems use simple algorithms and criteria when selecting activities and assigning resources and times to each activity. Schedules generated by these simple decision rules are, however, frequently far from optimal. To resolve mission-critical scheduling issues and predict possible problem areas, NASA historically relied upon expert human schedulers who used their judgment and experience to determine where things should happen, whether they will happen on time, and whether the requested resources are truly necessary.

  3. Comparison of 2-Dose and 3-Dose 9-Valent Human Papillomavirus Vaccine Schedules in the United States: A Cost-effectiveness Analysis.

    PubMed

    Laprise, Jean-François; Markowitz, Lauri E; Chesson, Harrell W; Drolet, Mélanie; Brisson, Marc

    2016-09-01

    A recent clinical trial using the 9-valent human papillomavirus (HPV) vaccine has shown that antibody responses after 2 doses are noninferior to those after 3 doses, suggesting that 2 and 3 doses may have comparable vaccine efficacy. We used an individual-based transmission-dynamic model to compare the population-level effectiveness and cost-effectiveness of 2- and 3-dose schedules of the 9-valent HPV vaccine in the United States. Our model predicts that if 2 doses of the 9-valent vaccine protect for ≥20 years, the additional benefits of a 3-dose schedule are small compared to those of a 2-dose schedule, and 2-dose schedules are likely much more cost-effective than 3-dose schedules. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  4. Population dynamics of Varroa destructor (Acari: Varroidae) in commercial honey bee colonies and implications for control

    USDA-ARS?s Scientific Manuscript database

    Treatment schedules to maintain low levels of Varroa mites in honey bee colonies were tested in hives started from either package bees or splits of larger colonies. The schedules were developed based on predictions of Varroa population growth generated from a mathematical model of honey bee colony ...

  5. Estimating Controller Intervention Probabilities for Optimized Profile Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Erzberger, Heinz; Huynh, Phu V.

    2011-01-01

    Simulations of arrival traffic at the Dallas/Fort Worth and Denver airports were conducted to evaluate incorporating scheduling and separation constraints into advisories that define continuous descent approaches. The goal was to reduce the number of controller interventions required to ensure flights maintain minimum separation distances of 5 nmi horizontally and 1000 ft vertically. It was shown that simply incorporating arrival meter fix crossing-time constraints into the advisory generation could eliminate over half of all predicted separation violations and more than 80% of the predicted violations between two arrival flights. Predicted separation violations between arrivals and non-arrivals were 32% of all predicted separation violations at Denver and 41% at Dallas/Fort Worth. A probabilistic analysis of meter fix crossing-time errors is included, which shows that some controller interventions will still be required even when the predicted crossing-times of the advisories are set to add a 1 or 2 nmi buffer above the minimum in-trail separation of 5 nmi. The 2 nmi buffer was shown to increase average flight delays by up to 30 sec compared to the 1 nmi buffer, but it only resulted in a maximum decrease in average arrival throughput of one flight per hour.

  6. Memory consolidation and contextual interference effects with computer games.

    PubMed

    Shewokis, Patricia A

    2003-10-01

    Some investigators of the contextual interference effect contend that there is a direct relation between the amount of practice and the contextual interference effect based on the prediction that the improvement in learning tasks in a random practice schedule, compared to a blocked practice schedule, increases in magnitude as the amount of practice during acquisition on the tasks increases. Research using computer games in contextual interference studies has yielded a large effect (f = .50) with a random practice schedule advantage during transfer. These investigations had a total of 36 and 72 acquisition trials, respectively. The present study tested this prediction by having 72 college students, who were randomly assigned to a blocked or random practice schedule, practice 102 trials of three computer-game tasks across three days. After a 24-hr. interval, 6 retention and 5 transfer trials were performed. Dependent variables were time to complete an event in seconds and number of errors. No significant differences were found for retention and transfer. These results are discussed in terms of how the amount of practice, task-related factors, and memory consolidation mediate the contextual interference effect.

  7. Work-family conflict and enrichment from the perspective of psychosocial resources: comparing Finnish healthcare workers by working schedules.

    PubMed

    Mauno, Saija; Ruokolainen, Mervi; Kinnunen, Ulla

    2015-05-01

    We examined work-family conflict (WFC) and work-family enrichment (WFE) by comparing Finnish nurses, working dayshifts (non-shiftworkers, n = 874) and non-dayshifts. The non-dayshift employees worked either two different dayshifts (2-shiftworkers, n = 490) or three different shifts including nightshifts (3-shiftworkers, n = 270). Specifically, we investigated whether different resources, i.e. job control, managers' work-family support, co-workers' work-family support, control at home, personal coping strategies, and schedule satisfaction, predicted differently WFC and WFE in these three groups. Results showed that lower managers' work-family support predicted higher WFC only among 3-shiftworkers, whereas lower co-workers' support associated with increased WFC only in non-shiftworkers. In addition, shiftworkers reported higher WFC than non-shiftworkers. However, the level of WFE did not vary by schedule types. Moreover, the predictors of WFE varied only very little across schedule types. Shiftwork organizations should pay more attention to family-friendly management in order to reduce WFC among shiftworkers. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Evaluation of an operational real-time irrigation scheduling scheme for drip irrigated citrus fields in Picassent, Spain

    NASA Astrophysics Data System (ADS)

    Li, Dazhi; Hendricks-Franssen, Harrie-Jan; Han, Xujun; Jiménez Bello, Miguel Angel; Martínez Alzamora, Fernando; Vereecken, Harry

    2017-04-01

    Irrigated agriculture accounts worldwide for 40% of food production and 70% of fresh water withdrawals. Irrigation scheduling aims to minimize water use while maintaining agricultural production. In this study we were concerned with the real-time automatic control of irrigation, which calculates the daily water allocation by combining information from soil moisture sensors with a land surface model. The combination of soil moisture measurements and predictions by the Community Land Model (CLM) using sequential data assimilation (DA) is a promising alternative for improving the estimate of soil and plant water status. The LETKF (Local Ensemble Transform Kalman Filter) was chosen to assimilate soil water content measured by FDR (Frequency Domain Reflectometry) into CLM and improve the initial (soil moisture) conditions for the next model run. In addition, predictions by the GFS (Global Forecast System) atmospheric simulation model were used as atmospheric input data for CLM to predict an ensemble of possible soil moisture evolutions for the following days. The difference between predicted and target soil water content is defined as the water deficit, and the irrigation amount was calculated by integrating the water deficit over the root zone. The corresponding irrigation time to apply the required water was introduced in the SCADA (supervisory control and data acquisition) system for each citrus field. In total, 6 fields were irrigated according to our optimization approach including data assimilation (CLM-DA), while 2 fields followed the FAO (Food and Agriculture Organization) water balance method and 4 fields were controlled by farmers as a reference. During the real-time irrigation campaigns in Valencia from July to October 2015 and June to October 2016, the applied irrigation amount, stem water potential and soil moisture content were recorded.
    The data indicated that 5%–20% less irrigation water was needed for the CLM-DA scheduled fields than for the fields following the FAO or farmers' methods. Stem water potential data indicated that the CLM-DA fields were not suffering from water stress during most of the irrigation period. Even though the CLM-DA fields received the least irrigation water, orange production was not suppressed either. Our results show the water-saving potential of the CLM-DA method compared to traditional irrigation methods.
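    The deficit-integration step described above (irrigation amount = water deficit integrated over the root zone) can be sketched as follows. Layer thicknesses, soil water contents and the unit convention are illustrative assumptions, not values from the study.

```python
def irrigation_amount(predicted_swc, target_swc, layer_depths_m):
    """Integrate the water deficit (target minus predicted volumetric soil
    water content, m3/m3) over the root-zone layers to get an irrigation
    depth in mm. All inputs are illustrative, not the study's values."""
    deficit_m = 0.0
    for pred, target, depth in zip(predicted_swc, target_swc, layer_depths_m):
        deficit_m += max(0.0, target - pred) * depth  # only refill dry layers
    return deficit_m * 1000.0  # metres of water -> millimetres

predicted = [0.18, 0.22, 0.25]   # ensemble-mean forecast per soil layer
target = [0.25, 0.25, 0.25]      # target soil water content per layer
depths = [0.1, 0.2, 0.3]         # layer thickness in metres
print(irrigation_amount(predicted, target, depths))  # irrigation depth in mm
```

    The resulting depth would then be converted into a drip-irrigation run time and pushed to the SCADA system.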

  9. FASTER - A tool for DSN forecasting and scheduling

    NASA Technical Reports Server (NTRS)

    Werntz, David; Loyola, Steven; Zendejas, Silvino

    1993-01-01

    FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user friendly, and makes heavy use of commercial software to allow for customization by users. FASTER implements scheduling as a two-pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the amount of effort expended and increasing the quality of analysis.
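    FASTER's actual cost function and dynamic-programming step are not given in the abstract. The sketch below only illustrates the two-pass idea: pass 1 builds a predictive utilization profile, which pass 2 uses as a look-ahead cost when assigning requests (here with a simple greedy rule in place of dynamic programming). Request and slot names are hypothetical.

```python
from collections import Counter

def schedule_two_pass(requests):
    """Two-pass sketch: pass 1 predicts demand per time slot; pass 2
    assigns each request to its feasible slot with the least current
    load, breaking ties toward slots with less predicted contention."""
    # Pass 1: predictive profile of resource utilization.
    profile = Counter()
    for _, feasible in requests:
        for slot in feasible:
            profile[slot] += 1
    # Pass 2: cost-driven assignment, 'looking ahead' via the profile.
    load = Counter()
    assignment = {}
    for name, feasible in requests:
        slot = min(feasible, key=lambda s: (load[s], profile[s]))
        assignment[name] = slot
        load[slot] += 1
    return assignment

reqs = [("pass_A", ["t1", "t2"]), ("pass_B", ["t1"]), ("pass_C", ["t1", "t2"])]
assignment = schedule_two_pass(reqs)
print(assignment)  # flexible passes avoid t1, which pass_B must have
```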

  10. Controller Strategies for Automation Tool Use under Varying Levels of Trajectory Prediction Uncertainty

    NASA Technical Reports Server (NTRS)

    Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua

    2013-01-01

    A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Using the results and observations, this paper proposes an ability to dynamically customize the display of information including delay time based on observed error to better accommodate different strategies and objectives.

  11. An innovative approach for distributed and integrated resources planning for the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda S.; Shinkle, Gerald L.; Weiler, Jerry D.; Willoughby, John K.

    1990-01-01

    This paper presents a planning approach for the Space Station Freedom program which takes into account the widely distributed nature of that program. The program management structure is organized into three major levels: a strategic level, a tactical level, and an execution level. For each level, resource availabilities are determined, the resources are distributed, schedules are built independently within the resource limits, the schedules are integrated into a single schedule, and conflicts are resolved by negotiating requirements and/or relaxing constraints. This approach distributes resources to multiple planning entities in such a way that when the multiple plans are collected, they fit together with minimal modification. The up-front distribution is planned in such a way and to a sufficient degree that a fit is virtually assured.

  12. A Core Plug and Play Architecture for Reusable Flight Software Systems

    NASA Technical Reports Server (NTRS)

    Wilmot, Jonathan

    2006-01-01

    The Flight Software Branch at Goddard Space Flight Center (GSFC) has been working on a run-time approach to facilitate a formal software reuse process. The reuse process is designed to enable rapid development and integration of high-quality software systems and to more accurately predict development costs and schedule. Previous reuse practices have been somewhat successful when the same teams are moved from project to project, but this typically requires adopting the software system in an all-or-nothing fashion, where useful components cannot be easily extracted from the whole. As a result, the system is less flexible and scalable, with limited applicability to new projects. This paper focuses on the rationale behind, and the implementation of, the run-time executive. This executive is the core of the component-based flight software commonality and reuse process adopted at Goddard.

  13. Bridging non-human primate correlates of protection to reassess the Anthrax Vaccine Adsorbed booster schedule in humans.

    PubMed

    Schiffer, Jarad M; Chen, Ligong; Dalton, Shannon; Niemuth, Nancy A; Sabourin, Carol L; Quinn, Conrad P

    2015-07-17

    Anthrax Vaccine Adsorbed (AVA, BioThrax) is approved for use in humans as a priming series of 3 intramuscular (i.m.) injections (0, 1, 6 months; 3-IM) with boosters at 12 and 18 months, and annually thereafter for those at continued risk of infection. A reduction in AVA booster frequency would lessen the burden of vaccination, reduce the cumulative frequency of vaccine associated adverse events and potentially expand vaccine coverage by requiring fewer doses per schedule. Because human inhalation anthrax studies are neither feasible nor ethical, AVA efficacy estimates are determined using cross-species bridging of immune correlates of protection (COP) identified in animal models. We have previously reported that the AVA 3-IM priming series provided high levels of protection in non-human primates (NHP) against inhalation anthrax for up to 4 years after the first vaccination. Penalized logistic regressions of those NHP immunological data identified that anti-protective antigen (anti-PA) IgG concentration measured just prior to infectious challenge was the most accurate single COP. In the present analysis, cross-species logistic regression models of this COP were used to predict probability of survival during a 43 month study in humans receiving the current 3-dose priming and 4 boosters (12, 18, 30 and 42 months; 7-IM) and reduced schedules with boosters at months 18 and 42 only (5-IM), or at month 42 only (4-IM). All models predicted high survival probabilities for the reduced schedules from 7 to 43 months. The predicted survival probabilities for the reduced schedules were 86.8% (4-IM) and 95.8% (5-IM) at month 42 when antibody levels were lowest. The data indicated that 4-IM and 5-IM are both viable alternatives to the current AVA pre-exposure prophylaxis schedule. Published by Elsevier Ltd.
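    The cross-species model described above is a logistic regression of survival probability on pre-challenge anti-PA IgG concentration. A generic sketch of that functional form follows; the coefficients are hypothetical placeholders, not the fitted NHP values from the study.

```python
import math

def survival_probability(anti_pa_igg, b0=-4.0, b1=2.0):
    """Logistic correlate-of-protection sketch: predicted survival
    probability from pre-challenge anti-PA IgG concentration.
    Coefficients b0 and b1 are invented for illustration."""
    x = b0 + b1 * math.log10(anti_pa_igg)
    return 1.0 / (1.0 + math.exp(-x))

# Higher antibody concentration maps to higher predicted survival.
for igg in (10.0, 100.0, 1000.0):
    print(igg, round(survival_probability(igg), 3))
```

    In the study's analysis, such a curve fitted on NHP data is evaluated at the antibody levels measured in humans at each booster schedule's low point to obtain the bridged survival predictions.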

  14. Space communications scheduler: A rule-based approach to adaptive deadline scheduling

    NASA Technical Reports Server (NTRS)

    Straguzzi, Nicholas

    1990-01-01

    Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually adjust the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communications Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high-density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
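    SCS's rule base is not described in enough detail to reproduce, but deadline scheduling in general can be illustrated with a plain earliest-deadline-first sketch: always run the pending job with the nearest deadline, dropping jobs that can no longer finish in time. Job names, durations and deadlines are invented.

```python
import heapq

def deadline_schedule(jobs):
    """Earliest-deadline-first sketch: `jobs` is a list of
    (name, duration, deadline) tuples in abstract time units."""
    heap = [(deadline, duration, name) for name, duration, deadline in jobs]
    heapq.heapify(heap)  # pops in order of nearest deadline
    t, order, dropped = 0, [], []
    while heap:
        deadline, duration, name = heapq.heappop(heap)
        if t + duration > deadline:
            dropped.append(name)  # cannot meet its deadline any more
            continue
        t += duration
        order.append(name)
    return order, dropped

order, dropped = deadline_schedule(
    [("uplink", 2, 3), ("telemetry", 2, 10), ("sync", 3, 4)])
print(order, dropped)
```

    A rule-based system like SCS layers conflict-detection rules on top of such a core so that reshuffles after cancellations or device failures stay cheap.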

  15. Predicting field weed emergence with empirical models and soft computing techniques

    USDA-ARS?s Scientific Manuscript database

    Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...

  16. Optimal Shift Duration and Sequence: Recommended Approach for Short-Term Emergency Response Activations for Public Health and Emergency Management

    PubMed Central

    Burgess, Paula A.

    2007-01-01

    Since September 11, 2001, and the consequent restructuring of the US preparedness and response activities, public health workers are increasingly called on to activate a temporary round-the-clock staffing schedule. These workers may have to make key decisions that could significantly impact the health and safety of the public. The unique physiological demands of rotational shift work and night shift work have the potential to negatively impact decision-making ability. A responsible, evidence-based approach to scheduling applies the principles of circadian physiology, as well as unique individual physiologies and preferences. Optimal scheduling would use a clockwise (morning-afternoon-night) rotational schedule: limiting night shifts to blocks of 3, limiting shift duration to 8 hours, and allowing 3 days of recuperation after night shifts. PMID:17413074
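    The recommended clockwise rotation can be generated mechanically. The sketch below follows the stated limits (one 8-hour shift per entry, night blocks of at most 3, 3 recovery days after nights) and additionally assumes, for illustration, that morning and afternoon blocks are the same length as the night block.

```python
def rotation_schedule(days, night_block=3, recovery_days=3):
    """Clockwise (morning -> afternoon -> night) rotation sketch.
    Each entry is one 8-hour shift for one day; morning and afternoon
    block lengths equal to `night_block` is an assumption, not a rule
    from the article."""
    pattern = (["morning"] * night_block + ["afternoon"] * night_block
               + ["night"] * night_block + ["off"] * recovery_days)
    return [pattern[d % len(pattern)] for d in range(days)]

plan = rotation_schedule(12)
print(plan)  # 3 mornings, 3 afternoons, 3 nights, then 3 days off
```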

  17. Scheduling, revenue management, and fairness in an academic-hospital radiology division.

    PubMed

    Baum, Richard; Bertsimas, Dimitris; Kallus, Nathan

    2014-10-01

    Physician staff of academic hospitals today practice in several geographic locations including their main hospital; this is referred to as the extended campus. With extended campuses expanding, the growing complexity of a single division's schedule means that a naive approach to scheduling compromises revenue. Moreover, it may provide an unfair allocation of individual revenue, of desirable or burdensome assignments, and of the extent to which the preferences of each individual are met. This has adverse consequences for incentivization and employee satisfaction and is simply against business policy. We identify the daily scheduling of physicians in this context as an operational problem that incorporates scheduling, revenue management, and fairness. Noting previous success of operations research and optimization in each of these disciplines, we propose a simple unified formulation of this scheduling problem using mixed-integer optimization. Through a study of implementing the approach at the Division of Angiography and Interventional Radiology at the Brigham and Women's Hospital, which is directed by one of the authors, we exemplify the flexibility of the model to adapt to specific applications, the tractability of solving the model in practical settings, and the significant impact of the approach, most notably in increasing revenue by 8.2% over previous operating revenue while adhering strictly to a codified standard of fairness and objectivity. We found that the investment in implementing such a system is far outweighed by the large potential revenue increase and the other benefits outlined. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
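    The paper's mixed-integer model is not reproduced here. As a toy stand-in, a brute-force search over one-day assignments shows the revenue-plus-fairness objective in miniature; it assumes equally many physicians and sites, and all names and revenue figures are invented.

```python
from itertools import permutations

def fair_revenue_schedule(physicians, sites):
    """Tiny brute-force stand-in for a mixed-integer formulation:
    maximize total revenue, breaking ties toward the smallest revenue
    spread across physicians (a crude fairness proxy). Assumes
    len(physicians) == len(sites)."""
    best = None
    for perm in permutations(physicians):
        revenue = [sites[site][doc] for site, doc in zip(sites, perm)]
        key = (sum(revenue), -(max(revenue) - min(revenue)))
        if best is None or key > best[0]:
            best = (key, dict(zip(sites, perm)))
    return best[1]

# site -> physician -> expected revenue for staffing that site (invented)
rates = {"main": {"dr_x": 9, "dr_y": 7}, "satellite": {"dr_x": 3, "dr_y": 4}}
assignment = fair_revenue_schedule(["dr_x", "dr_y"], rates)
print(assignment)
```

    A real instance would replace the enumeration with a mixed-integer solver and add roster, preference and rotation constraints.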

  18. Deep space network resource scheduling approach and application

    NASA Technical Reports Server (NTRS)

    Eggemeyer, William C.; Bowling, Alan

    1987-01-01

    Deep Space Network (DSN) resource scheduling is the process of distributing ground-based facilities to track multiple spacecraft. The Jet Propulsion Laboratory has carried out extensive research to find ways of automating this process in an effort to reduce time and manpower costs. This paper presents a resource-scheduling system entitled PLAN-IT with a description of its design philosophy. The PLAN-IT's current on-line usage and limitations in scheduling the resources of the DSN are discussed, along with potential enhancements for DSN application.

  19. Adaptive Subframe Partitioning and Efficient Packet Scheduling in OFDMA Cellular System with Fixed Decode-and-Forward Relays

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Ji, Yusheng; Liu, Fuqiang

    The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in an OFDMA two-hop relay system is more complex than in a conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to varying traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms to multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.
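    A proportional-fair scheduler serves, in each subframe, the user with the largest ratio of instantaneous rate to smoothed average served rate, which is the single-hop core the PF extension above builds on. A minimal single-cell sketch follows; user names, rates and the smoothing factor are illustrative.

```python
def pf_pick(instant_rates, avg_rates):
    """Proportional-fair metric: serve the user maximizing
    instantaneous rate / smoothed average served rate."""
    return max(instant_rates, key=lambda u: instant_rates[u] / avg_rates[u])

def pf_update(avg_rates, served, instant_rates, alpha=0.1):
    """Exponentially smoothed average-rate update after serving `served`."""
    return {u: (1 - alpha) * r + (alpha * instant_rates[u] if u == served else 0.0)
            for u, r in avg_rates.items()}

inst = {"ue1": 4.0, "ue2": 2.0}  # achievable rates this subframe (illustrative)
avg = {"ue1": 4.0, "ue2": 1.0}   # smoothed rates served so far
chosen = pf_pick(inst, avg)      # ue2 wins: 2/1 beats 4/4
avg = pf_update(avg, chosen, inst)
print(chosen, avg)
```

    In the two-hop, end-to-end extension the "rate" would be the achievable BS-to-RS-to-user rate rather than a single link's rate.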

  20. Quantifying and understanding reproductive allocation schedules in plants.

    PubMed

    Wenk, Elizabeth Hedi; Falster, Daniel S

    2015-12-01

    A plant's reproductive allocation (RA) schedule describes the fraction of surplus energy allocated to reproduction as it increases in size. While theorists use RA schedules as the connection between life history and energy allocation, little is known about RA schedules in real vegetation. Here we review what is known about RA schedules for perennial plants, using studies either directly quantifying RA or collecting data from which the shape of an RA schedule can be inferred. We also briefly review theoretical models describing factors by which variation in RA may arise. We identified 34 studies from which aspects of an RA schedule could be inferred. Within those, RA schedules varied considerably across species: some species abruptly shift all resources from growth to reproduction; most others gradually shift resources into reproduction, but under a variety of graded schedules. Available data indicate that the maximum fraction of energy allocated to reproduction ranges from 0.1 to 1 and that shorter-lived species tend to have higher initial RA and increase their RA more quickly than do longer-lived species. Overall, our findings indicate that little data exist about RA schedules in perennial plants. Available data suggest a wide range of schedules across species. Collection of more data on RA schedules would enable a tighter integration between observation and a variety of models predicting optimal energy allocation, plant growth rates, and biogeochemical cycles.

  1. The natural mathematics of behavior analysis.

    PubMed

    Li, Don; Hautus, Michael J; Elliffe, Douglas

    2018-04-19

    Models that generate event records have very general scope regarding the dimensions of the target behavior that we measure. From a set of predicted event records, we can generate predictions for any dependent variable that we could compute from the event records of our subjects. In this sense, models that generate event records permit us a freely multivariate analysis. To explore this proposition, we conducted a multivariate examination of Catania's Operant Reserve on single VI schedules in transition using a Markov Chain Monte Carlo scheme for Approximate Bayesian Computation. Although we found systematic deviations between our implementation of Catania's Operant Reserve and our observed data (e.g., mismatches in the shape of the interresponse time distributions), the general approach that we have demonstrated represents an avenue for modelling behavior that transcends the typical constraints of algebraic models. © 2018 Society for the Experimental Analysis of Behavior.
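
    The study used a Markov Chain Monte Carlo scheme for Approximate Bayesian Computation; a much simpler rejection-ABC variant conveys the core idea of likelihood-free inference. The prior, simulator, and tolerance below are hypothetical stand-ins, not Catania's Operant Reserve:

    ```python
    import random

    def rejection_abc(observed_stat, simulate, prior_sample, n_draws, tol, seed=0):
        """Rejection ABC sketch: draw parameters from the prior and keep those
        whose simulated summary statistic lands within `tol` of the observed one."""
        rng = random.Random(seed)
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample(rng)
            if abs(simulate(theta, rng) - observed_stat) <= tol:
                accepted.append(theta)
        return accepted
    ```

    MCMC-ABC replaces the independent prior draws with a Markov chain proposal, which is far more sample-efficient when acceptable parameters are rare, but the accept-if-close logic is the same.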

  2. Final Technical Report for DOE Award SC0006616

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Andrew

    2015-08-01

    This report summarizes research carried out by the project "Collaborative Research, Type 1: Decadal Prediction and Stochastic Simulation of Hydroclimate Over Monsoonal Asia". This collaborative project brought together climate dynamicists (UCLA, IRI), dendroclimatologists (LDEO Tree Ring Laboratory), computer scientists (UCI), and hydrologists (Columbia Water Center, CWC), together with applied scientists in climate risk management (IRI) to create new scientific approaches to quantify and exploit the role of climate variability and change in the growing water crisis across southern and eastern Asia. This project developed new tree-ring based streamflow reconstructions for rivers in monsoonal Asia; improved understanding of hydrologic spatio-temporal modes of variability over monsoonal Asia on interannual-to-centennial time scales; assessed decadal predictability of hydrologic spatio-temporal modes; developed stochastic simulation tools for creating downscaled future climate scenarios based on historical/proxy data and GCM climate change; and developed stochastic reservoir simulation and optimization for scheduling hydropower, irrigation and navigation releases.

  3. ARGES: an Expert System for Fault Diagnosis Within Space-Based ECLS Systems

    NASA Technical Reports Server (NTRS)

    Pachura, David W.; Suleiman, Salem A.; Mendler, Andrew P.

    1988-01-01

    ARGES (Atmospheric Revitalization Group Expert System) is a demonstration prototype expert system for fault management for the Solid Amine, Water Desorbed (SAWD) CO2 removal assembly, associated with the Environmental Control and Life Support (ECLS) System. ARGES monitors and reduces data in real time from either the SAWD controller or a simulation of the SAWD assembly. It can detect gradual degradations or predict failures. This allows graceful shutdown and scheduled maintenance, which reduces crew maintenance overhead. Status and fault information is presented in a user interface that simulates what would be seen by a crewperson. The user interface employs animated color graphics and an object-oriented approach to provide detailed status information, fault identification, and explanation of reasoning in a rapidly assimilated manner. In addition, ARGES recommends possible courses of action for predicted and actual faults. ARGES is seen as a forerunner of AI-based fault management systems for manned space systems.

  4. Assignment Scheduling Capability for Unmanned Aerial Vehicles - A Discrete Event Simulation with Optimization in the Loop Approach to Solving a Scheduling Problem

    DTIC Science & Technology

    2006-12-01

    APPROACH: As mentioned previously, ASCU does not use simulation in the traditional manner. Instead, it uses simulation to transition and capture the state... 0 otherwise (by a heuristic discussed below). Let c_ja = the reward for a UAV with sensor package j being assigned to mission area a from the...

  5. Prediction and uncertainty in human Pavlovian to instrumental transfer.

    PubMed

    Trick, Leanne; Hogarth, Lee; Duka, Theodora

    2011-05-01

    Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively. Participants then went on to separate instrumental training in which a key-press response canceled the aversive noise with a .5 probability on a variable interval schedule. Finally, in the transfer phase, the 3 Pavlovian stimuli were presented in this instrumental schedule and were no longer differentially predictive of the outcome. Observing times and gaze dwell time indexed attention to these stimuli in both training and transfer. Aware participants acquired veridical outcome expectancies in training--that is, A > B > C, and these expectancies persisted into transfer. Most important, the transfer effect accorded with these expectancies, A > B > C. By contrast, observing times accorded with uncertainty--that is, they showed B > A = C during training, and B < A = C in the transfer phase. Dwell time bias supported this association between attention and uncertainty, although these data showed a slightly more complicated pattern. Overall, the study suggests that transfer is linked to outcome prediction and is dissociated from attention to conditioned stimuli, which is linked to outcome uncertainty.

  6. Optimised Environmental Test Approaches in the GOCE Project

    NASA Astrophysics Data System (ADS)

    Ancona, V.; Giordano, P.; Casagrande, C.

    2004-08-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) is dedicated to measuring the Earth's gravity field and modelling the geoid with extremely high accuracy and spatial resolution. It is the first Earth Explorer Core mission to be developed as part of ESA's Living Planet Programme and is scheduled for launch in 2006. The program is managed by a consortium of European companies: Alenia Spazio, the prime contractor; Astrium GmbH, responsible for the platform; and Alcatel Space Industries and Laben, suppliers of the main payloads, respectively the Electrostatic Gravity Gradiometer (EGG) and the Satellite to Satellite Tracking Instrument (SSTI), actually a precise GPS receiver. The GOCE Assembly Integration and Verification (AIV) approach is established and implemented in order to demonstrate to the customer that the satellite design meets the applicable requirements, and to qualify and accept from lower level up to system level. The driving keywords of a "low cost" and "short schedule" program call for minimizing the development effort by utilizing off-the-shelf equipment combined with a model philosophy that lowers the number of models to be used. The paper deals with the peculiarities of the optimized environmental test approach in the GOCE project. In particular, it introduces the logic of the AIV approach and describes the foreseen tests at system level within the SM environmental test campaign, outlining the quasi-static test performed in the frame of the SM sine vibration tests, and the PFM environmental test campaign, pinpointing the deletion of the sine vibration test on the PFM model. Furthermore, the paper highlights how the Model and Test Effectiveness Database (MATD) can be utilized for predictions on new space projects like the GOCE satellite.

  7. A three-stage heuristic for harvest scheduling with access road network development

    Treesearch

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald

    2000-01-01

    In this article we present a new model for the scheduling of forest harvesting with spatial and temporal constraints. Our approach is unique in that we incorporate access road network development into the harvest scheduling selection process. Due to the difficulty of solving the problem optimally, we develop a heuristic that consists of a solution construction stage...

  8. A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan

    NASA Astrophysics Data System (ADS)

    Bhongade, A. S.; Khodke, P. M.

    2014-04-01

    Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though scheduling of such problems is solved using heuristics, available solution approaches can provide solutions only for moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such flow-shop manufacturing systems having machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. GA is found to give near-optimal solutions based on the deviation of makespan from the lower bound. The lower bound of the makespan of such problems is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain optimal makespan.
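
    A flow-shop makespan computation and a genetic search over job permutations can be sketched as follows. This is a minimal illustration, not the authors' GA: the crossover, mutation rate, and problem instance are hypothetical, and the sketch omits the assembly stage and disjunctive-method seeding:

    ```python
    import random

    def makespan(perm, proc):
        """Completion time of the last job on the last machine for a
        permutation flow shop; proc[job][machine] is the processing time."""
        m = len(proc[0])
        finish = [0.0] * m            # previous job's completion on each machine
        for j in perm:
            prev = 0.0                # this job's completion on the previous machine
            for k in range(m):
                prev = max(finish[k], prev) + proc[j][k]
                finish[k] = prev
        return finish[-1]

    def ga(proc, pop_size=30, gens=100, seed=0):
        """Tiny elitist GA over job permutations, minimizing makespan."""
        rng = random.Random(seed)
        n = len(proc)
        pop = [rng.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda p: makespan(p, proc))
            elite = pop[:pop_size // 2]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n)
                # one-point order crossover keeps the child a valid permutation
                child = a[:cut] + [j for j in b if j not in a[:cut]]
                if rng.random() < 0.2:          # swap mutation
                    i, k = rng.sample(range(n), 2)
                    child[i], child[k] = child[k], child[i]
                children.append(child)
            pop = elite + children
        return min(pop, key=lambda p: makespan(p, proc))
    ```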

  9. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

    Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  10. The role of abnormal fetal heart rate in scheduling chorionic villus sampling.

    PubMed

    Yagel, S; Anteby, E; Ron, M; Hochner-Celnikier, D; Achiron, R

    1992-09-01

    To assess the value of fetal heart rate (FHR) measurements in predicting spontaneous fetal loss in pregnancies scheduled for chorionic villus sampling (CVS). A prospective descriptive study. Two hospital departments of obstetrics and gynaecology in Israel. 114 women between 9 and 11 weeks gestation scheduled for chorionic villus sampling (CVS). Fetal heart rate was measured by transvaginal Doppler ultrasound and compared with a nomogram established from 75 fetuses. Whenever a normal FHR was recorded, CVS was performed immediately. 106 women had a normal FHR and underwent CVS; two of these pregnancies ended in miscarriage. In five pregnancies no fetal heart beats could be identified and fetal death was diagnosed. In three pregnancies an abnormal FHR was recorded and CVS was postponed; all three pregnancies ended in miscarriage within 2 weeks. Determination of FHR correlated with crown-rump length could be useful in predicting spontaneous miscarriage before performing any invasive procedure late in the first trimester.

  11. Sensor management in RADAR/IRST track fusion

    NASA Astrophysics Data System (ADS)

    Hu, Shi-qiang; Jing, Zhong-liang

    2004-07-01

    In this paper, a novel radar management strategy suitable for RADAR/IRST track fusion, based on the Fisher Information Matrix (FIM) and a fuzzy stochastic decision approach, is put forward. First, an optimal schedule of radar measurements is obtained by maximizing the determinant of the Fisher information matrix of the radar and IRST measurements, managed by the expert system. A "pseudo sensor" is then suggested to predict the possible target position with a polynomial method based on the radar and IRST measurements; the "pseudo sensor" model estimates the target position even when the radar is turned off. Finally, based on the tracking performance and the state of target maneuver, fuzzy stochastic decision is used to adjust the optimal radar schedule and retrieve the module parameters of the "pseudo sensor". The experimental results indicate that the algorithm can not only limit radar activity effectively but also maintain the tracking accuracy of the active/passive system. This algorithm also eliminates the drawback of traditional radar management methods, in which radar activity is fixed and not easy to control and protect.
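
    Two ingredients named above, determinant-maximizing (D-optimal) measurement selection and a polynomial "pseudo sensor", can be illustrated roughly as follows. This is a sketch under assumed 2-state information matrices and 1-D positions, not the paper's algorithm:

    ```python
    import numpy as np

    def d_optimal_pick(fim_prior, candidate_fims):
        """Greedy D-optimal choice: pick the candidate measurement whose
        information matrix maximizes the determinant of the accumulated
        Fisher information."""
        dets = [np.linalg.det(fim_prior + f) for f in candidate_fims]
        return int(np.argmax(dets))

    def pseudo_sensor_predict(times, positions, t_next, degree=2):
        """'Pseudo sensor' sketch: fit a polynomial to past position
        measurements and extrapolate to the next time step."""
        coeffs = np.polyfit(times, positions, degree)
        return float(np.polyval(coeffs, t_next))
    ```

    The D-optimal pick favors a measurement that shrinks the estimation uncertainty ellipsoid's volume the most; the polynomial extrapolation supplies a position estimate during intervals when the radar is kept silent.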

  12. Enabling Remote Health-Caring Utilizing IoT Concept over LTE-Femtocell Networks.

    PubMed

    Hindia, M N; Rahman, T A; Ojukwu, H; Hanafi, E B; Fattouh, A

    2016-01-01

    As the enterprise of the "Internet of Things" is rapidly gaining widespread acceptance, sensors are being deployed in an unrestrained manner around the world to make efficient use of this new technological evolution. A recent survey has shown that sensor deployments over the past decade have increased significantly and has predicted an upsurge in the future growth rate. In health-care services, for instance, sensors are used as a key technology to enable Internet of Things oriented health-care monitoring systems. In this paper, we have proposed a two-stage fundamental approach to facilitate the implementation of such a system. In the first stage, sensors promptly gather the particle measurements of an Android application. Then, in the second stage, the collected data are sent over a Femto-LTE network following a new scheduling technique. The proposed scheduling strategy is used to send the data according to the application's priority. The efficiency of the proposed technique is demonstrated by comparing it with that of well-known algorithms, namely, proportional fairness and exponential proportional fairness.

  14. Diagnosing Autism Spectrum Disorders in Adults: The Use of Autism Diagnostic Observation Schedule (ADOS) Module 4

    ERIC Educational Resources Information Center

    Bastiaansen, Jojanneke A.; Meffert, Harma; Hein, Simone; Huizinga, Petra; Ketelaars, Cees; Pijnenborg, Marieke; Bartels, Arnold; Minderaa, Ruud; Keysers, Christian; de Bildt, Annelies

    2011-01-01

    Autism Diagnostic Observation Schedule (ADOS) module 4 was investigated in an independent sample of high-functioning adult males with an autism spectrum disorder (ASD) compared to three specific diagnostic groups: schizophrenia, psychopathy, and typical development. ADOS module 4 proves to be a reliable instrument with good predictive value. It…

  15. A comprehensive approach to reactive power scheduling in restructured power systems

    NASA Astrophysics Data System (ADS)

    Shukla, Meera

    Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making them more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through installation of properly sized and located reactive sources and their optimal scheduling. In vertically operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But in various scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposes novel approaches for reactive power source placement and scheduling. The VAr source placement technique is based on the property of system connectivity. This is followed by the development of an optimal reactive power dispatch formulation that facilitates fixed inter-area tie-line reactive power exchange. The formulation uses a Line Flow-Based (LFB) model of power flow analysis and determines the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. System loadability, losses, generation, and the cost of generation were the performance measures used to study the impact of the VAr management strategy. The approach was demonstrated on the IEEE 30-bus system.

  16. Planning as a Precursor to Scheduling for Space Station Payload Operations

    NASA Technical Reports Server (NTRS)

    Howell, Eric; Maxwell, Theresa

    1995-01-01

    Contemporary schedulers attempt to solve the problem of best fitting a set of activities into an available timeframe while still satisfying the necessary constraints. This approach produces results which are optimized for the region of time the scheduler is able to process, satisfying the near-term goals of the operation. In general, the scheduler is not able to reason about the activities which precede or follow the window into which it is placing activities. This creates a problem for operations which are composed of many activities spanning long durations (exceeding the scheduler's reasoning horizon), such as the continuous operations environment for payload operations on the Space Station. Not only must the near-term scheduling objectives be met, but somehow the results of near-term scheduling must be made to support the attainment of long-term goals.

  17. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems such as job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  18. An expert system for scheduling requests for communications Links between TDRSS and ERBS

    NASA Technical Reports Server (NTRS)

    Mclean, David R.; Littlefield, Ronald G.; Beyer, David S.

    1987-01-01

    An ERBS-TDRSS Contact Planning System (ERBS-TDRSS CPS) is described which uses a graphics interface and the NASA Transportable Inference Engine. The procedure involves transfer of the ERBS-TDRSS Ground Track Orbit Prediction data to the ERBS flight operations area, where the ERBS-TDRSS CPS automatically generates requests for TDRSS service. As requested events are rejected, alternative context-sensitive strategies are employed to generate new requested events until a schedule is completed. A report generator builds schedule requests for separate ERBS-TDRSS contacts.

  19. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    PubMed

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, and the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.

  20. A survey on faculty perspectives on the transition to a biochemistry course-based undergraduate research experience laboratory.

    PubMed

    Craig, Paul A

    2017-09-01

    It will always remain a goal of an undergraduate biochemistry laboratory course to engage students hands-on in a wide range of biochemistry laboratory experiences. In 2006, our research group initiated a project for in silico prediction of enzyme function based only on the 3D coordinates of the more than 3800 proteins "of unknown function" in the Protein Data Bank, many of which resulted from the Protein Structure Initiative. Students have used the ProMOL plugin to the PyMOL molecular graphics environment along with BLAST, Pfam, and Dali to predict protein functions. As young scientists, these undergraduate research students wanted to see if their predictions were correct, and so they developed an approach for in vitro testing of predicted enzyme function that included literature exploration, selection of a suitable assay, and the search for commercially available substrates. Over the past two years, a team of faculty members from seven different campuses (California Polytechnic San Luis Obispo, Hope College, Oral Roberts University, Rochester Institute of Technology, St. Mary's University, Ursinus College, and Purdue University) have transferred this approach to the undergraduate biochemistry teaching laboratory as a Course-based Undergraduate Research Experience. A series of ten course modules and eight instructional videos have been created (www.promol.org/home/basil-modules-1) and the group is now expanding these resources, creating assessments, and evaluating how this approach helps students grow as scientists. The focus of this manuscript is the logistical implications of this transition on campuses that have different cultures, expectations, schedules, and student populations. © 2017 The International Union of Biochemistry and Molecular Biology, 45(5):426-436, 2017.

  1. An intelligent value-driven scheduling system for Space Station Freedom with special emphasis on the electric power system

    NASA Technical Reports Server (NTRS)

    Krupp, Joseph C.

    1991-01-01

    The Electric Power Control System (EPCS) created by Decision-Science Applications, Inc. (DSA) for the Lewis Research Center is discussed. This system makes decisions on what to schedule and when to schedule it, including making choices among various options or ways of performing a task. The system is goal-directed and seeks to shape resource usage in an optimal manner using a value-driven approach. Discussed here are considerations governing what makes a good schedule, how to design a value function to find the best schedule, and how to design the algorithm that finds the schedule that maximizes this value function. Results are shown which demonstrate the usefulness of the techniques employed.

  2. A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications

    NASA Astrophysics Data System (ADS)

    Entezari-Maleki, Reza; Movaghar, Ali

    Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. To maximize the throughput of a grid environment, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment within the grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate its ability to achieve schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA, and Sufferage.
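
    For comparison, the Min-min baseline mentioned above can be sketched in a few lines. This is an illustrative implementation with a hypothetical expected-time-to-compute matrix, not the paper's code:

    ```python
    def min_min(etc):
        """Min-min heuristic: repeatedly schedule the task with the smallest
        earliest completion time on its best machine.
        etc[t][m] is the expected time to compute task t on machine m."""
        n_tasks, n_machines = len(etc), len(etc[0])
        ready = [0.0] * n_machines          # when each machine becomes free
        unscheduled = set(range(n_tasks))
        assignment = {}
        while unscheduled:
            # find the (task, machine) pair with the minimum completion time
            t, m, ct = min(
                ((t, m, ready[m] + etc[t][m])
                 for t in unscheduled for m in range(n_machines)),
                key=lambda x: x[2],
            )
            assignment[t] = m
            ready[m] = ct
            unscheduled.remove(t)
        return assignment, max(ready)       # schedule and its makespan
    ```

    Max-min differs only in picking the task whose best completion time is *largest* first, which tends to balance long tasks earlier; a GA instead searches the space of full assignments directly.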

  3. VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans

    NASA Astrophysics Data System (ADS)

    Wang, Song; Gupta, Chetan; Mehta, Abhay

    There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
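
    As background to pipelined DAG scheduling, any execution order must respect the producer-consumer edges of the query plan. A minimal topological ordering (Kahn's algorithm) over a hypothetical four-operator diamond plan, not the VPipe Chain algorithm itself, looks like:

    ```python
    from collections import defaultdict, deque

    def pipeline_order(n_ops, edges):
        """Kahn's algorithm: a valid execution order for a DAG query plan,
        so each operator runs only after all of its upstream producers."""
        indeg = [0] * n_ops
        out = defaultdict(list)
        for u, v in edges:
            out[u].append(v)
            indeg[v] += 1
        q = deque(i for i in range(n_ops) if indeg[i] == 0)
        order = []
        while q:
            u = q.popleft()
            order.append(u)
            for v in out[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    q.append(v)
        if len(order) != n_ops:
            raise ValueError("plan has a cycle")
        return order
    ```

    Chain-style schedulers go further, grouping consecutive operators along such an order into segments that are executed together to bound queue memory between operators.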

  4. Rethinking the Clockwork of Work: Why Schedule Control May Pay Off at Work and at Home.

    PubMed

    Kelly, Erin L; Moen, Phyllis

    2007-11-01

    Many employees face work-life conflicts and time deficits that negatively affect their health, well-being, effectiveness on the job, and organizational commitment. Many organizations have adopted flexible work arrangements but not all of them increase schedule control, that is, employees' control over when, where, and how much they work. This article describes some limitations of flexible work policies, proposes a conceptual model of how schedule control impacts work-life conflicts, and describes specific ways to increase employees' schedule control, including best practices for implementing common flexible work policies and Best Buy's innovative approach to creating a culture of schedule control.

  5. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    NASA Astrophysics Data System (ADS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. 
To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.

  6. Pre-Scheduled and Self Organized Sleep-Scheduling Algorithms for Efficient K-Coverage in Wireless Sensor Networks

    PubMed Central

    Hwang, I-Shyan

    2017-01-01

    The K-coverage configuration, which guarantees that each location is covered by at least K sensors, is highly popular and is extensively used to monitor diverse applications in wireless sensor networks. Long network lifetime and high detection quality are essential for such K-covered sleep-scheduling algorithms. However, existing sleep-scheduling algorithms either incur high cost or cannot preserve detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to address these problems. Simulation results show that the pre-scheduled PSKGS approach enhances detection quality and network lifetime, whereas the self-organized SKS algorithm minimizes the computation and communication cost of the nodes and is thereby energy efficient. Moreover, SKS outperforms PSKGS in terms of network lifetime and detection quality because it is self-organized. PMID:29257078
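    As an illustration of the K-coverage property itself (not of the PSKGS or SKS algorithms, which the paper defines), a minimal sketch might check whether every sampled point of the field lies within sensing range of at least K active sensors; the binary disc sensing model and all names here are assumptions:

    ```python
    def is_k_covered(points, sensors, k, sensing_range):
        """Field is K-covered if every sampled point lies within the sensing
        range of at least K sensors (binary disc sensing model assumed)."""
        def covers(sensor, point):
            dx, dy = sensor[0] - point[0], sensor[1] - point[1]
            return dx * dx + dy * dy <= sensing_range * sensing_range
        return all(sum(covers(s, p) for s in sensors) >= k for p in points)

    # Three sensors at unit distance from the origin give 3-coverage there.
    print(is_k_covered([(0, 0)], [(0, 1), (1, 0), (0, -1)], 3, 1.0))  # prints True
    ```

    A sleep scheduler would call such a check when deciding which nodes may sleep without breaking the K-coverage guarantee.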

  7. SMEX-Lite Modular Solar Array Architecture

    NASA Technical Reports Server (NTRS)

    Lyons, John W.; Day, John (Technical Monitor)

    2002-01-01

    The NASA Small Explorer (SMEX) missions have typically had three years between mission definition and launch. This short schedule has posed significant challenges with respect to solar array design and procurement. Typically, the solar panel geometry is frozen prior to going out with a procurement. However, with the SMEX schedule, it has been virtually impossible to freeze the geometry in time to avoid scheduling problems with integrating the solar panels to the spacecraft. A modular solar array architecture was developed to alleviate this problem. This approach involves procuring sufficient modules for multiple missions and assembling the modules onto a solar array framework that is unique to each mission. The modular approach removes the solar array from the critical path of the SMEX integration and testing schedule. It also reduces the cost per unit area of the solar arrays and facilitates the inclusion of experiments involving new solar cell or panel technologies in the SMEX missions.

  8. Optimisation of assembly scheduling in VCIM systems using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-09-01

    Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to a global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modeled and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.
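    The paper's chromosome representation and operators are its own contribution, but any permutation-based scheduling GA relies on operators that preserve the permutation property. A minimal sketch (order crossover and swap mutation; all names are illustrative, not the authors' implementation):

    ```python
    import random

    def order_crossover(parent1, parent2, rng):
        """OX crossover: copy a random slice from parent 1, then fill the
        remaining positions with the missing genes in parent-2 order, so
        the child is always a valid permutation (a valid task sequence)."""
        n = len(parent1)
        a, b = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[a:b + 1] = parent1[a:b + 1]
        fill = [g for g in parent2 if g not in child]
        for i in range(n):
            if child[i] is None:
                child[i] = fill.pop(0)
        return child

    def swap_mutation(chromosome, rng):
        """Exchange two positions; the permutation property is preserved."""
        i, j = rng.sample(range(len(chromosome)), 2)
        mutated = chromosome[:]
        mutated[i], mutated[j] = mutated[j], mutated[i]
        return mutated

    rng = random.Random(0)
    child = order_crossover(list(range(8)), [3, 1, 4, 0, 5, 7, 2, 6], rng)
    ```

    Fitness evaluation (e.g. makespan of the decoded assembly sequence) is problem-specific and is left out of this sketch.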

  9. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for the dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search, and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We disseminated our work through conference paper presentations, journal papers, and a book chapter. In this project we addressed the problem of scheduling lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problems of joint allocation of computing, storage, and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  10. Enrollment Management in Medical School Admissions: A Novel Evidence-Based Approach at One Institution.

    PubMed

    Burkhardt, John C; DesJardins, Stephen L; Teener, Carol A; Gay, Steven E; Santen, Sally A

    2016-11-01

    In higher education, enrollment management has been developed to accurately predict the likelihood that admitted students will enroll. This allows evidence to dictate the number of interviews scheduled, offers of admission, and the distribution of financial aid packages. The applicability of enrollment management techniques to medical education was tested through the creation of a predictive enrollment model at the University of Michigan Medical School (U-M). U-M and American Medical College Application Service data (2006-2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offers, and choice of school attended. Binomial and multinomial logistic regression models were estimated to study factors related to enrollment at the local institution versus elsewhere, and to groupings of competing peer institutions. A predictive analytic "dashboard" was created for practical use. Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female sex, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female sex) and provided separate likelihoods of students enrolling at different institutional types. An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and to target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function in understanding the failed recruitment of highly desirable candidates.
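    A binomial logistic model of the kind described scores each admitted applicant with an enrollment probability. A minimal sketch, with entirely hypothetical feature names and coefficients (the paper's actual covariates and estimates are not reproduced here):

    ```python
    import math

    def enroll_probability(features, coefficients, intercept):
        """Binomial logistic model: P(enroll) = 1 / (1 + exp(-(b0 + b.x)))."""
        z = intercept + sum(coefficients[name] * value
                            for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical applicant: standardized MCAT score and an aid-offer flag.
    p = enroll_probability({"mcat_z": 0.8, "aid_offer": 1.0},
                           {"mcat_z": -0.4, "aid_offer": 1.1},
                           -0.3)
    ```

    A "dashboard" in this spirit would sum such probabilities over the admitted pool to forecast class size under different offer and aid scenarios.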

  11. Deterministic decomposition and seasonal ARIMA time series models applied to airport noise forecasting

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine

    2017-06-01

    One of the most hazardous physical polluting agents, considering its effects on human health, is acoustical noise. Airports are a strong source of acoustical noise, due to airplane turbines, the aerodynamic noise of transits, acceleration and braking during the take-off and landing phases of aircraft, road traffic around the airport, etc. Monitoring and predicting the acoustical level emitted by airports can be very useful for assessing the impact on human health and activities. In the airport noise scenario, thanks to flight scheduling, the predominant sources may have a periodic behaviour; thus, a time series analysis approach can be adopted, since a general trend and a seasonal behaviour can be highlighted and used to build a predictive model. In this paper, two different approaches are adopted, and two predictive models are constructed and tested. The first model is based on deterministic decomposition and is built by composing the trend, i.e. the long-term behaviour, the seasonality, i.e. the periodic component, and the random variations. The second model is based on a seasonal autoregressive moving average and belongs to the stochastic class of models. The two models are fitted on an acoustical level dataset collected close to the Nice (France) international airport. Results are encouraging and show good prediction performance for both of the adopted strategies. A residual analysis is performed in order to quantify the features of the forecasting error.
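    The deterministic-decomposition model separates trend, seasonal, and random components. A minimal additive-decomposition sketch (centered moving-average trend and per-phase seasonal means; an odd period is assumed for simplicity, and this is not the authors' exact procedure):

    ```python
    def decompose(series, period):
        """Additive decomposition sketch: trend from a centered moving
        average (odd period assumed), seasonal component as the mean
        detrended value at each phase, residual as what remains."""
        n, half = len(series), period // 2
        trend = [None] * n
        for t in range(half, n - half):
            window = series[t - half:t + half + 1]
            trend[t] = sum(window) / len(window)
        buckets = {p: [] for p in range(period)}
        for t in range(n):
            if trend[t] is not None:
                buckets[t % period].append(series[t] - trend[t])
        seasonal = {p: sum(v) / len(v) if v else 0.0 for p, v in buckets.items()}
        residual = [series[t] - trend[t] - seasonal[t % period]
                    for t in range(n) if trend[t] is not None]
        return trend, seasonal, residual

    # Weekly-period example: a pure level-plus-season series decomposes cleanly.
    series = [10 + (t % 7) for t in range(28)]
    trend, seasonal, residual = decompose(series, 7)
    ```

    Forecasts are then formed by extrapolating the trend and adding the seasonal component for the forecast phase; the residuals quantify the forecasting error, as in the paper's residual analysis.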

  12. A Descriptive Outline of a Modular Schedule, Flexible Scheduling Using the Data Processing Method. A Report from Virgin Valley High School, Mesquite, Nevada.

    ERIC Educational Resources Information Center

    Allan, Blaine W., Comp.

    The procedures, forms, and philosophy of the computerized modular scheduling program developed at Virgin Valley High School are outlined. The modular concept is developed as a new approach to course structure with explanations, examples, and worksheets included. Examples of courses of study, input information for the data processing center, output…

  13. Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Fox, Mark; Tate, Austin; Zweben, Monte

    1992-01-01

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

  14. Detection of crop water status in mature olive orchards using vegetation spectral measurements

    NASA Astrophysics Data System (ADS)

    Rallo, Giovanni; Ciraolo, Giuseppe; Farina, Giuseppe; Minacapilli, Mario; Provenzano, Giuseppe

    2013-04-01

    Leaf/stem water potentials are generally considered the most accurate indicators of crop water status (CWS), and they are quite often used for irrigation scheduling, even though their measurement is costly and time-consuming. For this reason, in the last decade vegetation spectral measurements have been proposed, not only for environmental monitoring but also in precision agriculture, to evaluate crop parameters and consequently to schedule irrigation. The objective of the study was to assess the potential of hyperspectral reflectance (450-2400 nm) data to predict the crop water status of a Mediterranean olive orchard. Different approaches were tested: (i) several standard broad- and narrow-band vegetation indices (VIs); (ii) specific VIs computed on the basis of key wavelengths predetermined by simple correlations; and (iii) a partial least squares (PLS) regression technique. To this aim, an intensive experimental campaign was carried out in 2010, and a total of 201 reflectance spectra, at leaf and canopy level, were collected with an ASD FieldSpec Pro (Analytical Spectral Devices, Inc.) handheld field spectroradiometer. CWS was determined concurrently by measuring leaf and stem water potentials with a Scholander chamber. The results indicated that the standard vegetation indices considered were weakly correlated with CWS. On the other hand, the prediction of CWS can be improved using VIs tuned to key specific wavelengths predetermined by a correlation analysis. The best prediction accuracy, however, was achieved with models based on PLS regressions. The results confirmed the dependence of leaf/canopy optical features on CWS, so that, for the examined crop, the proposed methodology can be considered a promising tool that could also be extended to operational applications using multispectral aerial sensors.

  15. Intraoperative Conversion From Partial to Radical Nephrectomy: Incidence, Predictive Factors, and Outcomes.

    PubMed

    Petros, Firas G; Keskin, Sarp K; Yu, Kai-Jie; Li, Roger; Metcalfe, Michael J; Fellman, Bryan M; Chang, Courtney M; Gu, Cindy; Tamboli, Pheroze; Matin, Surena F; Karam, Jose A; Wood, Christopher G

    2018-06-01

    To evaluate preoperative and intraoperative predictors of conversion to radical nephrectomy (RN) in a cohort of patients undergoing a planned partial nephrectomy (PN) for renal cell carcinoma (RCC). A single-center, retrospective review was conducted using our PN database that includes patients who were scheduled to undergo PN (regardless of the approach) but were converted to RN between August 1990 and December 2016. Reasons for conversion were collected from the operative report. Patient demographics and perioperative variables were compared with the successful PN group. Univariate and multivariate logistic regression analyses were conducted to assess predictors of conversion. A total of 1857 patients were scheduled to undergo PN. Of these patients, 90 (5%) were converted to RN. The multivariate model showed that larger tumor size (odds ratio [OR] = 1.20, P = .040), higher RENAL nephrometry score (OR = 1.41, P = .001), hilar tumor or renal sinus invasion (OR = 2.80, P = .004), laparoscopic PN (OR = 7.34, P <.001), intraoperative bleeding (OR = 19.62, P <.001), positive surgical margin (OR = 31.85, P <.001), and advanced pathologic tumor-stage (T3 or T4) (OR = 7.29, P <.001) were associated with increased odds of intraoperative conversion to RN. The rate of conversion to RN was low in patients who were scheduled to undergo PN in this series. Larger tumor size with increasing complexity, hilar tumor location or renal sinus invasion, locally advanced tumors, laparoscopic PN but not robotic PN, bleeding complication, and positive surgical margin were associated with intraoperative conversion from scheduled PN to RN. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Using mean duration and variation of procedure times to plan a list of surgical operations to fit into the scheduled list time.

    PubMed

    Pandit, Jaideep J; Tavare, Aniket

    2011-07-01

    It is important that a surgical list is planned to utilise as much of the scheduled time as possible while not over-running, because over-runs can lead to cancellation of operations. We wished to assess whether, theoretically, the known duration of individual operations could be used quantitatively to predict the likely duration of the operating list. In a university hospital setting, we first assessed the extent to which the current ad-hoc method of operating list planning was able to match the scheduled operating list times for 153 consecutive historical lists. Using receiver operating characteristic (ROC) curve analysis, we assessed the ability of an alternative method to predict operating list duration for the same operating lists. This method uses a simple formula: the sum of individual operation times and a pooled standard deviation of these times. We used the operating list duration estimated from this formula to generate a probability that the operating list would finish within its scheduled time. Finally, we applied the simple formula prospectively to 150 operating lists, 'shadowing' the current ad-hoc method, to confirm the predictive ability of the formula. The ad-hoc method was very poor at planning: 50% of historical operating lists were under-booked and 37% over-booked. In contrast, the simple formula predicted the correct outcome (under-run or over-run) for 76% of these operating lists. The calculated probability that a planned series of operations will over-run or under-run was found useful in developing an algorithm to adjust the planned cases optimally. In the prospective series, 65% of operating lists were over-booked and 10% were under-booked. The formula predicted the correct outcome for 84% of operating lists. A simple quantitative method of estimating operating list duration for a series of operations leads to an algorithm (readily created on an Excel spreadsheet, http://links.lww.com/EJA/A19) that can potentially improve operating list planning.
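    The formula described (the sum of individual mean operation times together with a pooled standard deviation) yields a probability that the list finishes within its scheduled time if the total duration is treated as approximately normal. A minimal sketch under that independence-and-normality assumption (not the authors' spreadsheet):

    ```python
    import math

    def overrun_probability(mean_times, sds, scheduled_minutes):
        """P(list over-runs), treating the total duration of independent
        operations as approximately normal: mean = sum of means,
        variance = sum of squared (pooled) standard deviations."""
        total_mean = sum(mean_times)
        total_sd = math.sqrt(sum(sd * sd for sd in sds))
        z = (scheduled_minutes - total_mean) / total_sd
        p_on_time = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
        return 1.0 - p_on_time

    # Three cases booked into an 8-hour (480 min) list; mean total is 360 min.
    p_over = overrun_probability([120, 150, 90], [30, 40, 20], 480)
    ```

    A planner can then add or remove cases until this probability falls below a chosen tolerance, which is the adjustment algorithm the abstract alludes to.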

  17. Remote sensing as a tool in assessing soil moisture

    NASA Technical Reports Server (NTRS)

    Carlson, C. W.

    1978-01-01

    The effects of soil moisture as it relates to agriculture is briefly discussed. The use of remote sensing to predict scheduling of irrigation, runoff and soil erosion which contributes to the prediction of crop yield is also discussed.

  18. An expert system for planning and scheduling in a telerobotic environment

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.

    1991-01-01

    A knowledge-based approach was developed for assigning tasks to multiple agents working cooperatively on jobs that require a telerobot in the loop. The generality of the approach allows the concept to be applied in non-teleoperational domains as well. The planning architecture, known as the task-oriented planner (TOP), uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open-ended architecture developed with the NEXPERT expert system shell, and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and unstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique does not aim to optimize a given scheduling criterion as in classical job-shop and/or flow-shop problems; for a teleoperation job schedule, criteria are situation dependent, and criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.

  19. A modified ant colony optimization for the grid job scheduling problem with QoS requirements

    NASA Astrophysics Data System (ADS)

    Pu, Xun; Lu, XianLiang

    2011-10-01

    Job scheduling with customers' quality of service (QoS) requirements is challenging in grid environments. In this paper, we present a modified ant colony optimization (MACO) algorithm for the job scheduling problem in grids. Instead of using the conventional construction approach to build feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. In addition, a new mechanism for updating the state of service instances is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.

  20. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    NASA Astrophysics Data System (ADS)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  1. Interest level in 2-year-olds with autism spectrum disorder predicts rate of verbal, nonverbal, and adaptive skill acquisition.

    PubMed

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-11-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill acquisition to the age of 3 years. A total of 70 toddlers with autism spectrum disorder, mean age of 21.9 months, were scored using Interest Level Scoring for Autism, quantifying toddlers' interest in toys, social routines, and activities that could serve as reinforcers in an intervention. Adaptive level and mental age were measured concurrently (Time 1) and again after a mean of 16.3 months of treatment (Time 2). Interest Level Scoring for Autism score, Autism Diagnostic Observation Schedule score, adaptive age equivalent, verbal and nonverbal mental age, and intensity of intervention were entered into regression models to predict rates of skill acquisition. Interest level at Time 1 predicted subsequent acquisition rate of adaptive skills (R(2) = 0.36) and verbal mental age (R(2) = 0.30), above and beyond the effects of Time 1 verbal and nonverbal mental ages and Autism Diagnostic Observation Schedule scores. Interest level at Time 1 also contributed (R(2) = 0.30), with treatment intensity, to variance in development of nonverbal mental age. © The Author(s) 2014.

  2. Predictive Scheduling for Electric Vehicles Considering Uncertainty of Load and User Behaviors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bin; Huang, Rui; Wang, Yubo

    2016-05-02

    Uncoordinated electric vehicle (EV) charging can create unexpected load in the local distribution grid, which may degrade power quality and system reliability. The uncertainty of EV load, user behavior, and other base load in the distribution grid is one of the challenges that impede optimal control of EV charging. Previous research did not fully solve this problem due to a lack of real-world EV charging data and of proper stochastic models to describe these behaviors. In this paper, we propose a new predictive EV scheduling algorithm (PESA) inspired by Model Predictive Control (MPC), which includes a dynamic load estimation module and a predictive optimization module. The user-related EV load and base load are dynamically estimated based on historical data. At each time interval, the predictive optimization program is solved for optimal schedules given the estimated parameters, and only the first element of the algorithm's output is implemented, in accordance with the MPC paradigm. The current-multiplexing function in each Electric Vehicle Supply Equipment (EVSE) unit is considered, and accordingly a virtual load is modeled to handle the uncertainties of future EV energy demands. The system is validated against real-world EV charging data collected on the UCLA campus, and the experimental results indicate that the proposed model not only reduces load variation by up to 40% but also maintains a high level of robustness. Finally, the IEC 61850 standard is used to standardize the data models involved, supporting more reliable and large-scale implementation.
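    The MPC paradigm described (optimize over the horizon, apply only the first element, then re-plan) can be sketched with a deliberately simple inner optimizer, greedy valley-filling standing in for the paper's predictive optimization program; all parameters and names are illustrative:

    ```python
    def plan_horizon(base_load, ev_demand, max_rate, delta=0.25):
        """Greedy valley-filling: repeatedly add a small charging increment
        to the slot with the lowest combined (base + EV) load until the
        vehicle's energy demand is met or every slot hits max_rate."""
        alloc = [0.0] * len(base_load)
        remaining = ev_demand
        while remaining > 1e-9:
            open_slots = [i for i in range(len(base_load)) if alloc[i] < max_rate]
            if not open_slots:
                break  # horizon saturated; demand cannot be met
            i = min(open_slots, key=lambda k: base_load[k] + alloc[k])
            step = min(delta, max_rate - alloc[i], remaining)
            alloc[i] += step
            remaining -= step
        return alloc

    def mpc_first_action(base_load_forecast, ev_demand, max_rate):
        """MPC paradigm: optimize over the whole horizon but commit only the
        first interval's decision; re-plan with fresh estimates next interval."""
        return plan_horizon(base_load_forecast, ev_demand, max_rate)[0]
    ```

    Re-planning every interval is what lets the controller absorb forecast errors in base load and user behavior, which is the robustness property the paper reports.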

  3. Mathematical modeling of prostate cancer progression in response to androgen ablation therapy.

    PubMed

    Jain, Harsh Vardhan; Clinton, Steven K; Bhinder, Arvinder; Friedman, Avner

    2011-12-06

    Prostate cancer progression depends in part on the complex interactions between testosterone, its active metabolite DHT, and androgen receptors. In a metastatic setting, the first line of treatment is the elimination of testosterone. However, such interventions are not curative because cancer cells evolve via multiple mechanisms to a castrate-resistant state, allowing progression to a lethal outcome. It is hypothesized that administration of antiandrogen therapy in an intermittent, as opposed to continuous, manner may bestow improved disease control with fewer treatment-related toxicities. The present study develops a biochemically motivated mathematical model of antiandrogen therapy that can be tested prospectively as a predictive tool. The model includes "personalized" parameters, which address the heterogeneity in the predicted course of the disease under various androgen-deprivation schedules. Model simulations are able to capture a variety of clinically observed outcomes for "average" patient data under different intermittent schedules. The model predicts that in the absence of a competitive advantage of androgen-dependent cancer cells over castration-resistant cancer cells, intermittent scheduling can lead to more rapid treatment failure as compared to continuous treatment. However, increasing a competitive advantage for hormone-sensitive cells swings the balance in favor of intermittent scheduling, delaying the acquisition of genetic or epigenetic alterations empowering androgen resistance. Given the near universal prevalence of antiandrogen treatment failure in the absence of competing mortality, such modeling has the potential of developing into a useful tool for incorporation into clinical research trials and ultimately as a prognostic tool for individual patients.

  4. Hidden Efficiencies: Making Completion of the Pediatric Vaccine Schedule More Efficient for Physicians

    PubMed Central

    Ciarametaro, Mike; Bradshaw, Steven E.; Guiglotto, Jillian; Hahn, Beth; Meier, Genevieve

    2015-01-01

    The objective of this work is to demonstrate the potential time and labor savings that may result from increased use of combination vaccines. The study (GSK study identifier: HO-12-4735) used a model, based on time and motion studies, to evaluate the efficiency of the pediatric vaccine schedule. The model considered vaccination time and the associated labor costs; vaccine acquisition costs were not considered, nor were any efficacy or safety differences between formulations. The model inputs were supported by a targeted literature review, and the reference year for the model was 2012. The most efficient vaccination program using currently available vaccines was predicted to reduce costs through a combination of fewer injections (62%) and less time per vaccination (38%). The most versus the least efficient vaccine program was predicted to result in a 47% reduction in vaccination time and a 42% reduction in labor and supply costs. The administration cost saving with the most versus the least efficient program was estimated to be nearly US $45 million. If hypothetical 6- or 7-valent vaccines were developed by adding additional antigens (pneumococcal conjugate vaccine and Haemophilus influenzae type b) to the most efficient 5-valent vaccine, the savings are predicted to be even greater. Combination vaccines reduce the time burden of the childhood immunization schedule and could improve vaccination uptake and compliance as a result of fewer required injections. PMID:25634165

  5. Application of multiobjective optimization to scheduling capacity expansion of urban water resource systems

    NASA Astrophysics Data System (ADS)

    Mortazavi-Naeini, Mohammad; Kuczera, George; Cui, Lijie

    2014-06-01

    Significant population increase in urban areas is likely to result in a deterioration of drought security and level of service provided by urban water resource systems. One way to cope with this is to optimally schedule the expansion of system resources. However, the high capital costs and environmental impacts associated with expanding or building major water infrastructure warrant the investigation of scheduling system operational options such as reservoir operating rules, demand reduction policies, and drought contingency plans, as a way of delaying or avoiding the expansion of water supply infrastructure. Traditionally, minimizing cost has been considered the primary objective in scheduling capacity expansion problems. In this paper, we consider some of the drawbacks of this approach. It is shown that there is no guarantee that the social burden of coping with drought emergencies is shared equitably across planning stages. In addition, it is shown that previous approaches do not adequately exploit the benefits of joint optimization of operational and infrastructure options and do not adequately address the need for the high level of drought security expected for urban systems. To address these shortcomings, a new multiobjective optimization approach to scheduling capacity expansion in an urban water resource system is presented and illustrated in a case study involving the bulk water supply system for Canberra. The results show that the multiobjective approach can address the temporal equity issue of sharing the burden of drought emergencies and that joint optimization of operational and infrastructure options can provide solutions superior to those just involving infrastructure options.
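    Multiobjective optimization of this kind returns a Pareto front rather than a single least-cost plan. A minimal dominance filter for candidate expansion schedules scored on several minimization objectives (e.g. capital cost and a drought-burden inequity measure; the scoring itself is outside this sketch):

    ```python
    def pareto_front(solutions):
        """Keep the non-dominated solutions (all objectives minimized):
        a dominates b if a is no worse everywhere and strictly better
        somewhere."""
        def dominates(a, b):
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]

    # Two objectives per candidate schedule, both to be minimized.
    front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
    ```

    Presenting the whole front is what allows planners to trade capital cost against the temporal equity of drought burden, rather than collapsing everything into one cost figure.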

  6. Model-Based Individualized Treatment of Chemotherapeutics: Bayesian Population Modeling and Dose Optimization

    PubMed Central

    Jayachandran, Devaraj; Laínez-Aguirre, José; Rundell, Ann; Vik, Terry; Hannemann, Robert; Reklaitis, Gintaras; Ramkrishna, Doraiswami

    2015-01-01

    6-Mercaptopurine (6-MP) is one of the key drugs in the treatment of many pediatric cancers, autoimmune diseases, and inflammatory bowel disease. 6-MP is a prodrug, converted to an active metabolite, 6-thioguanine nucleotide (6-TGN), through an enzymatic reaction involving thiopurine methyltransferase (TPMT). Pharmacogenomic variation observed in the TPMT enzyme produces significant variation in drug response among the patient population. Despite 6-MP’s widespread use and the observed variation in treatment response, efforts at quantitative optimization of dose regimens for individual patients are limited. In addition, research efforts devoted to pharmacogenomics to predict clinical responses have proven far from ideal. In this work, we present a Bayesian population modeling approach to develop a pharmacological model for 6-MP metabolism in humans. In the face of the scarcity of data in clinical settings, a model reduction approach based on global sensitivity analysis is used to minimize the parameter space. For accurate estimation of sensitive parameters, robust optimal experimental design based on the D-optimality criterion was exploited. With the patient-specific model, a model predictive control algorithm is used to optimize dose scheduling with the objective of maintaining the 6-TGN concentration within its therapeutic window. More importantly, for the first time, we show how the incorporation of information from different levels of the biological chain of response (i.e., gene expression, enzyme phenotype, drug phenotype) plays a critical role in determining the uncertainty in predicting the therapeutic target. The model and the control approach can be utilized in the clinical setting to individualize 6-MP dosing based on the patient’s ability to metabolize the drug instead of the traditional standard-dose-for-all approach. PMID:26226448
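    The model predictive dose-control idea (steer the predicted 6-TGN concentration toward its therapeutic window) can be sketched with a one-step linear surrogate in place of the paper's pharmacological model; every parameter below is illustrative, not a clinical value:

    ```python
    def next_dose(concentration, window, k_elim, k_dose, max_dose):
        """One-step MPC sketch for a linear surrogate model:
            conc_next = (1 - k_elim) * concentration + k_dose * dose.
        Choose the dose that steers conc_next to the mid-point of the
        therapeutic window, clipped to the feasible range [0, max_dose]."""
        low, high = window
        target = (low + high) / 2.0
        dose = (target - (1.0 - k_elim) * concentration) / k_dose
        return max(0.0, min(max_dose, dose))
    ```

    The paper's controller differs in using a full patient-specific metabolic model and a multi-step horizon, but the clip-to-feasibility and window-tracking structure is the same.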

  7. Safety Discrete Event Models for Holonic Cyclic Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Ciufudean, Calin; Filote, Constantin

    In this paper the expression “holonic cyclic manufacturing systems” refers to complex assembly/disassembly systems or fork/join systems, kanban systems, and, in general, to any discrete event system that transforms raw material and/or components into products. Such a system is said to be cyclic if it provides the same sequence of products indefinitely. This paper considers the scheduling of holonic cyclic manufacturing systems and describes a new approach using the Petri net formalism. We propose an approach to frame the optimum schedule of holonic cyclic manufacturing systems in order to maximize throughput while minimizing work in process. We also propose an algorithm to verify the optimum schedule.

  8. Strategies for managing work/life interaction among women and men with variable and unpredictable work hours in retail sales in Québec, Canada.

    PubMed

    Messing, Karen; Tissot, France; Couture, Vanessa; Bernstein, Stephanie

    2014-01-01

    Increasingly, work schedules in retail sales are generated by software that takes into account variations in predicted sales. The resulting variable and unpredictable schedules require employees to be available, unpaid, over extended periods. At the request of a union, we studied schedule preferences in a retail chain in Québec using observations, interviews, and questionnaires. Shift start times had varied on average by four hours over the previous week; 83 percent had worked at least one day the previous weekend. Difficulties with work/life balance were associated with schedules and, among women, with family responsibilities. Most workers wanted: more advance notice; early shifts; regular schedules; two days off in sequence; and weekends off. Choices varied, so software could be adapted to take preferences into account. Also, employers could give better advance notice and establish systems for shift exchanges. Governments could limit store hours and schedule variability while prolonging the minimum sequential duration of leave per week.

  9. Prediction based proactive thermal virtual machine scheduling in green clouds.

    PubMed

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.
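
    The placement rule this record describes can be sketched simply: predict each server machine's temperature after accepting the VM, discard machines whose prediction breaches their maximum threshold, and prefer the largest thermal headroom. The linear predictor and all numbers are illustrative assumptions, not the paper's model:

```python
def predict_temp(current, load_delta, alpha=0.5):
    """Hypothetical linear predictor: each unit of added load raises
    the machine temperature by `alpha` degrees."""
    return current + alpha * load_delta

def schedule_vm(servers, vm_load):
    """servers: list of (name, current_temp, max_temp) tuples.
    Returns the chosen server name, or None if every prediction
    would breach its maximum threshold temperature."""
    feasible = [(max_t - predict_temp(cur, vm_load), name)
                for name, cur, max_t in servers
                if predict_temp(cur, vm_load) < max_t]
    return max(feasible)[1] if feasible else None
```

    Returning None when no machine stays below its cap is what makes the technique proactive: the scheduler defers placement rather than let any SM reach its maximum temperature.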

  10. SPANR planning and scheduling

    NASA Astrophysics Data System (ADS)

    Freund, Richard F.; Braun, Tracy D.; Kussow, Matthew; Godfrey, Michael; Koyama, Terry

    2001-07-01

    SPANR (Schedule, Plan, Assess Networked Resources) is (i) a pre-run, off-line planning tool and (ii) a runtime, just-in-time scheduling mechanism. It is designed to support primarily commercial applications in that it optimizes throughput rather than individual jobs (unless they have highest priority). Thus it is a tool for a commercial production manager to maximize total work. First the SPANR Planner is presented, showing the ability to do predictive 'what-if' planning. It can answer such questions as (i) what is the overall effect of acquiring new hardware or (ii) what would be the effect of a different scheduler. The ability of the SPANR Planner to formulate tree-trimming strategies in advance is useful in several commercial applications, such as electronic design or pharmaceutical simulations. The SPANR Planner is demonstrated using a variety of benchmarks. The SPANR Runtime Scheduler (RS) is briefly presented. The SPANR RS can provide benefit for several commercial applications, such as airframe design and financial applications. Finally a design is shown whereby SPANR can provide scheduling advice to most resource management systems.

  11. A classification procedure for the effective management of changes during the maintenance process

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.

    1992-01-01

    During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by NASA/GSFC which shows it was effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.

  12. Nonstandard Work Schedules and Developmentally Generative Parenting Practices: An Application of Propensity Score Techniques

    ERIC Educational Resources Information Center

    Grzywacz, Joseph G.; Daniel, Stephanie S.; Tucker, Jenna; Walls, Jill; Leerkes, Esther

    2011-01-01

    Data from the National Institute for Child Health and Human Development Study of Early Child Care (Phase I) and propensity score techniques were used to determine whether working full time in a nonstandard schedule job during the child's first year predicted parenting practices over 3 years. Results indicated that women who worked full time in a…

  13. FlexTech{trademark}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilhelm, B.

    1996-12-31

    Information is presented on a 110 MWe atmospheric CFB located in the Czech Republic firing brown coal. The following topics are discussed: fuel analysis; boiler design parameters; CFB fluidizing nozzle; and project time schedule. Information is also given on a 200 MWe atmospheric CFB located in the Republic of Korea firing Korean anthracite. Data are presented on fuel specifications; predicted performance; and engineering and construction schedule.

  14. Blocked vs. interleaved presentation and proactive interference in episodic memory.

    PubMed

    Del Missier, Fabio; Sassano, Alessia; Coni, Valentina; Salomonsson, Martina; Mäntylä, Timo

    2018-05-01

    Although a number of theoretical accounts of proactive interference (PI) in episodic memory have been proposed, existing empirical evidence does not support conclusively a single view yet. In two experiments we tested the predictions of the temporal discrimination theory of PI against alternative accounts by manipulating the presentation schedule of study materials (lists blocked by category vs. interleaved). In line with the temporal discrimination theory, we observed a clear buildup of (and release from) PI in the blocked condition, in which all the lists of the same category were presented sequentially. In the interleaved condition, with alternating lists of different categories, a more gradual and smoother buildup of PI was observed. When participants were left free to choose their presentation schedule, they spontaneously adopted an interleaved schedule, resulting again in more gradual PI. After longer delays, we observed recency effects at the list level in overall recall and, in the blocked condition, PI-related effects. The overall pattern of findings agrees with the predictions of the temporal discrimination theory of PI, complemented with categorical processing of list items, but not with alternative accounts, shedding light on the dynamics and underpinnings of PI under diverse presentation schedules and over different time scales.

  15. A half century of scalloping in the work habits of the United States Congress.

    PubMed Central

    Critchfield, Thomas S; Haley, Rebecca; Sabo, Benjamin; Colbert, Jorie; Macropoulis, Georgette

    2003-01-01

    It has been suggested that the work environment of the United States Congress bears similarity to a fixed-interval reinforcement schedule. Consistent with this notion, Weisberg and Waldrop (1972) described a positively accelerating pattern in annual congressional bill production (selected years from 1947 to 1968) that is reminiscent of the scalloped response pattern often attributed to fixed-interval schedules, but their analysis is now dated and does not bear on the functional relations that might yield scalloping. The present study described annual congressional bill production over a period of 52 years and empirically evaluated predictions derived from four hypotheses about the mechanisms that underlie scalloping. Scalloping occurred reliably in every year. The data supported several predictions about congressional productivity based on fixed-interval schedule performance, but did not consistently support any of three alternative accounts. These findings argue for the external validity of schedule-controlled operant behavior as measured in the laboratory. The present analysis also illustrates a largely overlooked role for applied behavior analysis: that of shedding light on the functional properties of behavior in uncontrolled settings of considerable interest to the public. PMID:14768667

  16. Optimizing Staffing levels and Schedules for Railroad Dispatching Centers

    DOT National Transportation Integrated Search

    2004-09-01

    This report presents the results of a study to explore approaches to establishing staffing levels and schedules for railroad dispatchers. The work was conducted as a follow-up to a prior study that found fatigue among dispatchers, particularly those ...

  17. Which resources moderate the effects of demanding work schedules on nurses working in residential elder care? A longitudinal study.

    PubMed

    Peters, Velibor; Houkes, Inge; de Rijk, Angelique E; Bohle, Philip L; Engels, Josephine A; Nijhuis, Frans J N

    2016-06-01

    Shiftwork is a major job demand for nurses and has been related to various negative consequences. Research suggests that personal and job resources moderate the impact of work schedules on stress, health and well-being. This longitudinal study examined whether the interactions of personal and job resources with work schedule demands predicted work engagement and emotional exhaustion in nursing. The study included two waves of data collection with a one-year follow-up using self-report questionnaires among 247 nurses working shifts or irregular working hours in residential care for the elderly in the Netherlands. Moderated structural equation modelling was conducted to examine the interactions between personal and job resources and work schedule demands. Two work schedule demands were assessed: type of work schedule (demanding vs. less demanding) and average weekly working hours. Two personal resources, active coping and healthy lifestyle, and two job resources, work schedule control and the work schedule fit with nurses' private life, were assessed. Results showed that the work schedule fit with nurses' private life buffered the relationship between work schedule demands and emotional exhaustion one year later. Furthermore, the work schedule fit with nurses' private life increased work engagement one year later when work schedule demands were high. Work schedule control strengthened the positive relationship between work schedule demands and emotional exhaustion one year later. The personal resources, active coping and healthy lifestyle, did not act as moderators in this model. Nurses suffer less from decreasing work engagement and emotional exhaustion due to work schedule demands when their work schedules fit with their private lives. Work schedule control did not buffer, but rather strengthened, the positive relationship between weekly working hours and emotional exhaustion one year later. Job resources appeared to be more important for nurses' well-being than personal resources. These findings highlight the importance of the fit of a work schedule with nurses' private lives when the work schedule is demanding. Copyright © 2016. Published by Elsevier Ltd.

  18. Developing interpretable models with optimized set reduction for identifying high risk software components

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1993-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.

  19. Choice and conditioned reinforcement.

    PubMed Central

    Fantino, E; Freed, D; Preston, R A; Williams, W A

    1991-01-01

    A potential weakness of one formulation of delay-reduction theory is its failure to include a term for rate of conditioned reinforcement, that is, the rate at which the terminal-link stimuli occur in concurrent-chains schedules. The present studies assessed whether or not rate of conditioned reinforcement has an independent effect upon choice. Pigeons responded on either modified concurrent-chains schedules or on comparable concurrent-tandem schedules. The initial link was shortened on only one of two concurrent-chains schedules and on only one of two corresponding concurrent-tandem schedules. This manipulation increased rate of conditioned reinforcement sharply in the chain but not in the tandem schedule. According to a formulation of delay-reduction theory, when the outcomes chosen (the terminal links) are equal, as in Experiment 1, choice should depend only on rate of primary reinforcement; thus, choice should be equivalent for the tandem and chain schedules despite a large difference in rate of conditioned reinforcement. When the outcomes chosen are unequal, however, as in Experiment 2, choice should depend upon both rate of primary reinforcement and relative signaled delay reduction; thus, larger preferences should occur in the chain than in the tandem schedules. These predictions were confirmed, suggesting that increasing the rate of conditioned reinforcement on concurrent-chains schedules may have no independent effect on choice. PMID:2037826

  20. Assessing Tactical Scheduler Options for Time-Based Surface Metering

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Windhorst, Robert

    2017-01-01

    NASA is committed to demonstrating a concept of integrated arrival, departure, and surface operations by 2020 under the Airspace Technology Demonstration 2 (ATD2) sub-project. This will be accomplished starting with a demonstration of flight-specific, time-based departure metering at Charlotte Douglas International Airport (CLT). The ATD2 tactical metering capability is based on NASA's Spot and Runway Departure Advisor (SARDA), which has been tested successfully in human-in-the-loop simulations of CLT. SARDA makes use of surface surveillance data and surface modeling to estimate the earliest takeoff time for each flight active on the airport surface or ready for pushback from the gate. The system then schedules each flight to its assigned runway in order of earliest takeoff time and assigns a target pushback time, displayed to ramp controllers as an advisory gate hold time. The objective of this method of departure metering is to move as much delay as possible to the gate to minimize surface congestion and engine-on time, while keeping sufficient pressure on the runway to maintain throughput. This flight-specific approach enables greater flight efficiency and predictability, facilitating trajectory-based operations and surface-airspace integration, which ATD2 aims to achieve. Throughout ATD2 project formulation and system development, researchers have continuously engaged with stakeholders and future users, uncovering key system requirements for tactical metering that SARDA did not address. The SARDA scheduler is updated every 10 seconds using real-time surface surveillance data to ensure the most up-to-date information is used to predict runway usage. However, rapid updates also open the potential for fluctuating advisories, which ramp controllers at a busy airport like CLT find unacceptable. Therefore, ATD2 tactical metering requires that all advisories freeze once flights are ready, so that ramp controllers may communicate a single hold time when responding to pilot ready calls.

  1. Design of a universal logic block for fault-tolerant realization of any logic operation in trapped-ion quantum circuits

    NASA Astrophysics Data System (ADS)

    Goudarzi, H.; Dousti, M. J.; Shafaei, A.; Pedram, M.

    2014-05-01

    This paper presents a physical mapping tool for quantum circuits, which generates the optimal universal logic block (ULB) that can, on average, perform any logical fault-tolerant (FT) quantum operation with the minimum latency. The operation scheduling, placement, and qubit routing problems tackled by the quantum physical mapper are highly dependent on one another. More precisely, the scheduling solution affects the quality of the achievable placement solution due to resource pressures that may be created as a result of operation scheduling, whereas the operation placement and qubit routing solutions influence the scheduling solution due to resulting distances between predecessor and current operations, which in turn determine routing latencies. The proposed flow for the quantum physical mapper captures these dependencies by applying (1) a loose scheduling step, which transforms an initial quantum data flow graph into one that explicitly captures the no-cloning theorem of quantum computing and then performs instruction scheduling based on a modified force-directed scheduling approach to minimize the resource contention and quantum circuit latency, (2) a placement step, which uses timing-driven instruction placement to minimize the approximate routing latencies while making iterative calls to the aforesaid force-directed scheduler to correct scheduling levels of quantum operations as needed, and (3) a routing step that finds dynamic values of routing latencies for the qubits. In addition to the quantum physical mapper, an approach is presented to determine the single best ULB size for a target quantum circuit by examining the latency of different FT quantum operations mapped onto different ULB sizes and using information about the occurrence frequency of operations on critical paths of the target quantum algorithm to weigh these latencies. Experimental results show an average latency reduction of about 40% compared to previous work.
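
    The ASAP/ALAP mobility computation that underlies any force-directed scheduler can be sketched as follows. The dependency-graph encoding and unit operation latencies are illustrative assumptions; the paper's scheduler adds force-based placement within these mobility ranges:

```python
# Each operation gets an as-soon-as-possible (ASAP) and an
# as-late-as-possible (ALAP) level; mobility = ALAP - ASAP is the
# freedom the force-directed scheduler exploits to even out resource use.
def asap(deps):
    """deps: dict op -> list of predecessor ops. Returns op -> level."""
    level = {}
    def visit(op):
        if op not in level:
            level[op] = 1 + max((visit(p) for p in deps[op]), default=0)
        return level[op]
    for op in deps:
        visit(op)
    return level

def alap(deps, horizon):
    succs = {op: [s for s, ps in deps.items() if op in ps] for op in deps}
    level = {}
    def visit(op):
        if op not in level:
            level[op] = min((visit(s) - 1 for s in succs[op]),
                            default=horizon)
        return level[op]
    for op in deps:
        visit(op)
    return level

def mobility(deps):
    a = asap(deps)
    horizon = max(a.values())
    l = alap(deps, horizon)
    return {op: l[op] - a[op] for op in deps}
```

    Operations with zero mobility lie on the critical path; the others can be shifted to relieve the resource pressures the abstract describes.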

  2. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.

  3. An Update on the Role of Systems Modeling in the Design and Verification of the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Muheim, Danniella; Menzel, Michael; Mosier, Gary; Irish, Sandra; Maghami, Peiman; Mehalick, Kimberly; Parrish, Keith

    2010-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2014. System-level verification of critical performance requirements will rely on integrated observatory models that predict the wavefront error accurately enough to verify that the allocated top-level wavefront error of 150 nm root-mean-squared (rms) through to the wavefront sensor focal plane is met. The assembled models themselves are complex and require the insight of technical experts to assess their ability to meet their objectives. This paper describes the systems engineering and modeling approach used on the JWST through the detailed design phase.

  4. Preliminary power train design for a state-of-the-art electric vehicle

    NASA Technical Reports Server (NTRS)

    Ross, J. A.; Wooldridge, G. A.

    1978-01-01

    The state-of-the-art (SOTA) of electric vehicles built since 1965 was reviewed to establish a base for the preliminary design of a power train for a SOTA electric vehicle. The performance of existing electric vehicles were evaluated to establish preliminary specifications for a power train design using state-of-the-art technology and commercially available components. Power train components were evaluated and selected using a computer simulation of the SAE J227a Schedule D driving cycle. Predicted range was determined for a number of motor and controller combinations in conjunction with the mechanical elements of power trains and a battery pack of sixteen lead-acid batteries - 471.7 kg at 0.093 MJ/kg (1040 lbs at 11.7 Wh/lb). On the basis of maximum range and overall system efficiency using the Schedule D cycle, an induction motor and 3 phase inverter/controller was selected as the optimum combination when used with a two-speed transaxle and steel belted radial tires. The predicted Schedule D range is 90.4 km (56.2 mi). Four near term improvements to the SOTA were identified, evaluated, and predicted to increase range approximately 7%.

  5. Link Scheduling Algorithm with Interference Prediction for Multiple Mobile WBANs

    PubMed Central

    Le, Thien T. T.

    2017-01-01

    As wireless body area networks (WBANs) become a key element in electronic healthcare (e-healthcare) systems, the coexistence of multiple mobile WBANs is becoming an issue. The network performance is negatively affected by the unpredictable movement of the human body. In such an environment, inter-WBAN interference can be caused by the overlapping transmission range of nearby WBANs. We propose a link scheduling algorithm with interference prediction (LSIP) for multiple mobile WBANs, which allows multiple mobile WBANs to transmit at the same time without causing inter-WBAN interference. In the LSIP, a superframe includes the contention access phase using carrier sense multiple access with collision avoidance (CSMA/CA) and the scheduled phase using time division multiple access (TDMA) for non-interfering nodes and interfering nodes, respectively. For interference prediction, we define a parameter called interference duration as the duration during which disparate WBANs interfere with each other. The Bayesian model is used to estimate and classify the interference using a signal to interference plus noise ratio (SINR) and the number of neighboring WBANs. The simulation results show that the proposed LSIP algorithm improves the packet delivery ratio and throughput significantly with acceptable delay. PMID:28956827
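
    The classification step in this record splits nodes by predicted interference: nodes whose SINR stays above a threshold remain in the CSMA/CA contention phase, while interfering nodes are moved to the scheduled TDMA phase. The threshold and power values below are illustrative assumptions, and a plain threshold test stands in for the paper's Bayesian estimator:

```python
import math

SINR_THRESHOLD_DB = 10.0  # assumed decision threshold

def sinr_db(signal_mw, interference_mw, noise_mw=1e-6):
    """Signal to interference plus noise ratio in decibels."""
    return 10 * math.log10(signal_mw / (sum(interference_mw) + noise_mw))

def split_phases(nodes):
    """nodes: dict name -> (signal_mw, [interferer powers in mW]).
    Returns (csma_nodes, tdma_nodes): non-interfering nodes contend
    via CSMA/CA; interfering nodes get dedicated TDMA slots."""
    csma, tdma = [], []
    for name, (sig, interf) in sorted(nodes.items()):
        if sinr_db(sig, interf) >= SINR_THRESHOLD_DB:
            csma.append(name)
        else:
            tdma.append(name)
    return csma, tdma
```

    Reserving TDMA slots only for the low-SINR nodes is what lets multiple mobile WBANs transmit concurrently without inter-WBAN interference.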

  6. Multiparametric MRI characterization and prediction in autism spectrum disorder using graph theory and machine learning.

    PubMed

    Zhou, Yongxia; Yu, Fang; Duong, Timothy

    2014-01-01

    This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (including caudate volume, caudate-cortical functional connectivity, and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy when compared with single imaging features. This approach could potentially serve as a biomarker for prognosis, diagnosis, and monitoring disease progression.

  7. Evaluation of scenario-specific modeling approaches to predict plane of array solar irradiation

    DOE PAGES

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    2017-12-20

    Predicting thermal or electric power output from solar collectors requires knowledge of the solar irradiance incident on the collector, known as plane of array irradiance. In the absence of such a measurement, plane of array irradiation can be predicted using relevant transposition models, which essentially require diffuse (or beam) radiation to be known along with total horizontal irradiation. The two main objectives of the current study are (1) to evaluate the extent to which the prediction of plane of array irradiance is improved when diffuse radiation is predicted using location-specific regression models developed from on-site measured data as against using generalized models; and (2) to estimate the expected uncertainties associated with plane of array irradiance predictions under different data collection scenarios likely to be encountered in practical situations. These issues have been investigated using monitored data for several U.S. locations in conjunction with the Typical Meteorological Year, version 3 database. An interesting behavior in the Typical Meteorological Year, version 3 data was also observed in correlation patterns between diffuse and total radiation taken from different years, which seems to attest to a measurement problem. Furthermore, the current study was accomplished under a broader research agenda aimed at providing energy managers the necessary tools for predicting, scheduling, and controlling various sub-systems of an integrated energy system.
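
    A minimal isotropic-sky transposition sketch (the classic Liu-Jordan form, not necessarily the models evaluated in this record) shows how plane-of-array irradiance is assembled once diffuse radiation is known; the study's regression models supply that diffuse estimate:

```python
import math

def poa_irradiance(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    """Plane-of-array irradiance in W/m^2 under an isotropic sky.
    dni: direct normal, dhi: diffuse horizontal, ghi: global horizontal;
    aoi_deg: angle of incidence on the collector, tilt_deg: surface tilt."""
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
    sky_diffuse = dhi * (1 + math.cos(math.radians(tilt_deg))) / 2
    ground = ghi * albedo * (1 - math.cos(math.radians(tilt_deg))) / 2
    return beam + sky_diffuse + ground
```

    For a horizontal surface (tilt 0) the sky-diffuse view factor is 1 and the ground-reflected term vanishes, so the expression collapses to beam plus diffuse as expected.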

  9. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
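
    The contrast between the two scheduling styles can be sketched directly: a conflict-aware pass places every request and records overlaps, while the conflict-free derivation drops the lower-priority member of each conflicting pair. The request encoding and antenna names are illustrative, not the DSN's actual data model:

```python
def overlaps(a, b):
    """Two requests conflict if they need the same antenna at
    overlapping times."""
    return (a["antenna"] == b["antenna"]
            and a["start"] < b["end"] and b["start"] < a["end"])

def conflict_aware(requests):
    """Schedule every request (highest priority first) and keep the
    list of conflicting id pairs for later negotiation."""
    scheduled = sorted(requests, key=lambda r: -r["priority"])
    conflicts = [(a["id"], b["id"])
                 for i, a in enumerate(scheduled)
                 for b in scheduled[i + 1:] if overlaps(a, b)]
    return scheduled, conflicts

def conflict_free(requests):
    """Derive a conflict-free schedule by dropping lower-priority
    requests that overlap an already-kept one."""
    kept = []
    for r in sorted(requests, key=lambda r: -r["priority"]):
        if not any(overlaps(r, k) for k in kept):
            kept.append(r)
    return kept
```

    Keeping the conflicts explicit, rather than silently marking requests unscheduled, is what gives missions the information they need to negotiate.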

  10. An AI approach for scheduling space-station payloads at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Castillo, D.; Ihrie, D.; Mcdaniel, M.; Tilley, R.

    1987-01-01

    The Payload Processing for Space-Station Operations (PHITS) is a prototype modeling tool capable of addressing many Space Station related concerns. The system's object oriented design approach coupled with a powerful user interface provide the user with capabilities to easily define and model many applications. PHITS differs from many artificial intelligence based systems in that it couples scheduling and goal-directed simulation to ensure that on-orbit requirement dates are satisfied.

  11. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.

  12. Scheduling Diet for Diabetes Mellitus Patients using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Syahputra, M. F.; Felicia, V.; Rahmat, R. F.; Budiarto, R.

    2017-01-01

    Diabetes Mellitus (DM) is a metabolic disease that affects productivity and lowers the quality of human resources. The disease can be controlled by maintaining a balanced, healthy lifestyle, especially in the daily diet. However, there is currently no system able to help DM patients obtain information on a proper diet. Therefore, an approach is required that schedules a diet for every day of the week, with nutrition appropriate for DM patients, to help them regulate their daily diet in managing the disease. In this research, we calculate caloric needs using the Harris-Benedict equation and propose a genetic algorithm for scheduling the diet of DM patients. The results show that the greater the number of individuals, the greater the possibility that the fitness score approaches the best fitness score. Moreover, the more generations created, the more opportunities there are to obtain the best individual, with a fitness score approaching or equal to 0.
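    The caloric-needs step above uses the Harris-Benedict equation, which is standard and can be sketched directly. The constants below are from the original Harris-Benedict formulation; the activity multiplier is an illustrative assumption, and the paper's exact variant is not specified here.

```python
def harris_benedict_bmr(sex, weight_kg, height_cm, age_years):
    """Basal metabolic rate (kcal/day) via the original Harris-Benedict equation."""
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

def daily_calories(bmr, activity_factor=1.2):
    # Scale BMR by an activity multiplier (1.2 = sedentary, an assumption).
    return bmr * activity_factor
```

    The resulting calorie target would then serve as a constraint on the genetic algorithm's fitness function when scheduling meals.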

  13. Littoral Combat Ship Crew Scheduling

    DTIC Science & Technology

    2015-03-01

    events and schedules. The selection of u for each sub-problem also has the same tradeoff considerations of balancing solve time and overly myopic ...extending them beyond four months in a phase. Results are compared based on solve time and penalty value. The MIP solution has the best quality...benefits to crew alignment for longer-range schedules. The planner must balance solve time and solution quality when determining the approach to

  14. A Systems Approach to Military Construction.

    DTIC Science & Technology

    1982-11-01

    Unclassified. DISTRIBUTION STATEMENT (of this Report): Approved for public release; distribution... Procurement Alternatives; Design Alternatives; Preconcept Control Data; AE Selection Procedure; Scheduling; Cost Estimating... data, scheduling, and cost estimating. The objectives of project coordination for a systems-oriented project do not differ from those of a

  15. A bicriteria heuristic for an elective surgery scheduling problem.

    PubMed

    Marques, Inês; Captivo, M Eugénia; Vaz Pato, Margarida

    2015-09-01

    Resource rationalization and reduction of waiting lists for surgery are two main guidelines for hospital units outlined in the Portuguese National Health Plan. This work is dedicated to an elective surgery scheduling problem arising in a Lisbon public hospital. In order to increase the surgical suite's efficiency and to reduce the waiting lists for surgery, two objectives are considered: maximize surgical suite occupation and maximize the number of surgeries scheduled. This elective surgery scheduling problem consists of assigning an intervention date, an operating room and a starting time for elective surgeries selected from the hospital waiting list. Accordingly, a bicriteria surgery scheduling problem arising in the hospital under study is presented. To search for efficient solutions of the bicriteria optimization problem, the minimization of a weighted Chebyshev distance to a reference point is used. A constructive and improvement heuristic procedure specially designed to address the objectives of the problem is developed and results of computational experiments obtained with empirical data from the hospital are presented. This study shows that by using the bicriteria approach presented here it is possible to build surgical plans with very good performance levels. This method can be used within an interactive approach with the decision maker. It can also be easily adapted to other hospitals with similar scheduling conditions.
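    The weighted Chebyshev scalarization used above to search for efficient solutions of the bicriteria problem can be sketched generically. The weights and the normalization of objective values are illustrative assumptions, not the paper's parameterization.

```python
def chebyshev_distance(objectives, reference, weights):
    """Weighted Chebyshev distance of an objective vector to a reference point."""
    return max(w * abs(f - z) for f, z, w in zip(objectives, reference, weights))

def best_solution(candidates, reference, weights):
    # Pick the candidate schedule minimizing the scalarized distance;
    # minimizers of this distance are (weakly) efficient solutions.
    return min(candidates, key=lambda f: chebyshev_distance(f, reference, weights))
```

    Varying the weights and re-solving traces out different efficient trade-offs between suite occupation and number of surgeries scheduled.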

  16. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software that is aware of currently available resources and makes scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space while simultaneously exploring new regions of that space. This self-organizing scheduling system may offer a solution for the effective use of resources in off-line data processing jobs for future HEP experiments.

  17. ATD-2 IADS Metroplex Traffic Management Overview Brief

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn

    2016-01-01

    ATD-2 will improve the predictability and the operational efficiency of the air traffic system in metroplex environments through the enhancement, development and integration of the nation's most advanced and sophisticated arrival, departure, and surface prediction, scheduling and management systems.

  18. Efficient genetic algorithms using discretization scheduling.

    PubMed

    McLay, Laura A; Goldberg, David E

    2005-01-01

    In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these applications, cost and accuracy vary with the discretization error introduced when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to that of a GA using a constant discretization. There are three ingredients for discretization scheduling: population sizing, estimated time for each function evaluation, and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.

  19. Consumption-leisure tradeoffs in pigeons: Effects of changing marginal wage rates by varying amount of reinforcement.

    PubMed

    Green, L; Kagel, J H; Battalio, R C

    1987-01-01

    Pigeons' rates of responding and food reinforcement under simple random-ratio schedules were compared with those obtained under comparable ratio schedules in which free food deliveries were added, but the duration of each food delivery was halved. These ratio-with-free-food schedules were constructed so that, were the pigeon to maintain the same rate of responding as it had under the simple ratio schedule, total food obtained (earned plus free) would remain unchanged. However, any reduction in responding would reduce total food consumption below that under the simple ratio schedule. These "compensated wage decreases" led to decreases in responding and decreases in food consumption, as predicted by an economic model of labor supply. Moreover, the reductions in responding increased as the ratio value increased (i.e., as wage rates decreased). Pigeons, therefore, substituted leisure for consumption. The relationship between these procedures and negative-income-tax programs is noted.

  20. Particle swarm optimization based space debris surveillance network scheduling

    NASA Astrophysics Data System (ADS)

    Jiang, Hai; Liu, Jing; Cheng, Hao-Wen; Zhang, Yao

    2017-02-01

    The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecrafts, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecrafts. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
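    The particle swarm optimization algorithm underlying the proposed scheduler can be illustrated with a minimal generic PSO sketch. This is textbook PSO for a continuous objective, not the paper's individual/joint surveillance-scheduling variants; swarm size, inertia, and acceleration parameters are illustrative assumptions.

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `objective` over a box using standard particle swarm updates."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    For surveillance scheduling, the objective would encode the two criteria the paper analyzes, with each particle encoding a candidate task assignment.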

  1. Dose Schedule Optimization and the Pharmacokinetic Driver of Neutropenia

    PubMed Central

    Patel, Mayankbhai; Palani, Santhosh; Chakravarty, Arijit; Yang, Johnny; Shyu, Wen Chyi; Mettetal, Jerome T.

    2014-01-01

    Toxicity often limits the utility of oncology drugs, and optimization of dose schedule represents one option for mitigation of this toxicity. Here we explore the schedule-dependency of neutropenia, a common dose-limiting toxicity. To this end, we analyze previously published mathematical models of neutropenia to identify a pharmacokinetic (PK) predictor of the neutrophil nadir, and confirm this PK predictor in an in vivo experimental system. Specifically, we find total AUC and Cmax are poor predictors of the neutrophil nadir, while a PK measure based on the moving average of the drug concentration correlates highly with neutropenia. Further, we confirm this PK parameter for its ability to predict neutropenia in vivo following treatment with different doses and schedules. This work represents an attempt at mechanistically deriving a fundamental understanding of the underlying pharmacokinetic drivers of neutropenia, and provides insights that can be leveraged in a translational setting during schedule selection. PMID:25360756
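    The moving-average exposure metric the authors correlate with neutropenia can be illustrated generically: compute the moving average of the drug concentration over a fixed window and take its maximum as the exposure summary. The window length and sampling grid are assumptions; the paper's exact parameterization is not reproduced here.

```python
def moving_average_max(concentrations, window):
    """Maximum of the moving average of a sampled concentration-time series."""
    if window > len(concentrations):
        raise ValueError("window larger than series")
    s = sum(concentrations[:window])
    best = s / window
    for i in range(window, len(concentrations)):
        # Slide the window one sample: add the new point, drop the oldest.
        s += concentrations[i] - concentrations[i - window]
        best = max(best, s / window)
    return best
```

    Unlike total AUC (which ignores timing) or Cmax (which ignores duration), this summary responds to sustained exposure, which is the property the abstract identifies as predictive.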

  2. Scheduling Results for the THEMIS Observation Scheduling Tool

    NASA Technical Reports Server (NTRS)

    Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip

    2011-01-01

    We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
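    The grid-based reduction of polygon overlap to set covering, followed by greedy selection, can be sketched as follows. The data layout (each candidate observation mapped to the set of grid cells it covers) is an assumed simplification of the tool's internal representation.

```python
def greedy_set_cover(universe, candidates):
    """Greedy set cover: repeatedly pick the observation covering the most
    still-uncovered grid cells. `candidates` maps observation id -> cell set."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # remaining cells cannot be covered by any candidate
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```

    In the actual tool, each greedy pick would additionally be checked against the operational constraints (data volume, timing, lighting) before being committed to the schedule.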

  3. Autonomous mission planning and scheduling: Innovative, integrated, responsive

    NASA Technical Reports Server (NTRS)

    Sary, Charisse; Liu, Simon; Hull, Larry; Davis, Randy

    1994-01-01

    Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations which include planning, scheduling, and commanding; and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high performance individual workstations, high speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.

  4. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  5. Why does working memory capacity predict variation in reading comprehension? On the influence of mind wandering and executive attention.

    PubMed

    McVay, Jennifer C; Kane, Michael J

    2012-05-01

    Some people are better readers than others, and this variation in comprehension ability is predicted by measures of working memory capacity (WMC). The primary goal of this study was to investigate the mediating role of mind-wandering experiences in the association between WMC and normal individual differences in reading comprehension, as predicted by the executive-attention theory of WMC (e.g., Engle & Kane, 2004). We used a latent-variable, structural-equation-model approach, testing skilled adult readers on 3 WMC span tasks, 7 varied reading-comprehension tasks, and 3 attention-control tasks. Mind wandering was assessed using experimenter-scheduled thought probes during 4 different tasks (2 reading, 2 attention-control). The results support the executive-attention theory of WMC. Mind wandering across the 4 tasks loaded onto a single latent factor, reflecting a stable individual difference. Most important, mind wandering was a significant mediator in the relationship between WMC and reading comprehension, suggesting that the WMC-comprehension correlation is driven, in part, by attention control over intruding thoughts. We discuss implications for theories of WMC, attention control, and reading comprehension.

  6. Understanding London's Water Supply Tradeoffs When Scheduling Interventions Under Deep Uncertainty

    NASA Astrophysics Data System (ADS)

    Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.

    2015-12-01

    Water supply planning in many major world cities faces several challenges, including but not limited to climate change, population growth, and insufficient land availability for infrastructure development. Long-term plans to maintain the supply-demand balance and ecosystem services require careful consideration of the uncertainties associated with future conditions. The current approach for London's water supply planning uses least-cost optimization of future intervention schedules with limited consideration of uncertainty. Recently, the focus of long-term plans has shifted from least-cost performance alone to the robustness and resilience of the system. Identifying a robust schedule of interventions requires optimizing over a statistically representative sample of stochastic inputs, which may be computationally difficult to achieve. In this study we optimize schedules using an ensemble of plausible scenarios and assess how manipulating that ensemble influences the different Pareto-approximate intervention schedules. We investigate how a major stress event's location in time, as well as the optimization problem formulation, influences the Pareto-approximate schedules. A bootstrapping method is proposed that respects the non-stationary trend of climate change scenarios and ensures an even distribution of the major stress event across the scenario ensemble. Different bootstrapped hydrological scenario ensembles are assessed using many-objective scenario optimization of London's future water supply and demand intervention scheduling. However, such "fixed" scheduling of interventions does not embed flexibility or adapt effectively as the future unfolds. Alternatively, making decisions based on observations of conditions as they occur could help planners who prefer adaptive planning. We will show how rules that guide the implementation of interventions based on observations may result in more flexible strategies.

  7. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.

  8. Rethinking the Clockwork of Work: Why Schedule Control May Pay Off at Work and at Home

    PubMed Central

    Kelly, Erin L.; Moen, Phyllis

    2014-01-01

    The problem and the solution: Many employees face work–life conflicts and time deficits that negatively affect their health, well-being, effectiveness on the job, and organizational commitment. Many organizations have adopted flexible work arrangements, but not all of them increase schedule control, that is, employees' control over when, where, and how much they work. This article describes some limitations of flexible work policies, proposes a conceptual model of how schedule control affects work–life conflicts, and describes specific ways to increase employees' schedule control, including best practices for implementing common flexible work policies and Best Buy's innovative approach to creating a culture of schedule control. PMID:25598711

  9. MaGate Simulator: A Simulation Environment for a Decentralized Grid Scheduler

    NASA Astrophysics Data System (ADS)

    Huang, Ye; Brocco, Amos; Courant, Michele; Hirsbrunner, Beat; Kuonen, Pierre

    This paper presents a simulator for a decentralized modular grid scheduler named MaGate. MaGate's design emphasizes scheduler interoperability by providing intelligent scheduling that serves the grid community as a whole. Each MaGate scheduler instance is able to deal with dynamic scheduling conditions and continuously arriving grid jobs. Received jobs are either allocated to local resources or delegated to other MaGates for remote execution. The proposed MaGate simulator is based on the GridSim toolkit and the Alea simulator, and abstracts the features and behaviors of complex fundamental grid elements, such as grid jobs, grid resources, and grid users. Simulation of scheduling tasks is supported by a grid network overlay simulator executing distributed ant-based swarm intelligence algorithms to provide services such as group communication and resource discovery. For evaluation, a comparison of the behaviors of different collaborative policies among a community of MaGates is provided. The results support the use of the proposed approach as a functionally ready grid scheduler simulator.

  10. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.

  11. A hybrid online scheduling mechanism with revision and progressive techniques for autonomous Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Li, Guoliang; Xing, Lining; Chen, Yingwu

    2017-11-01

    The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers over the last decades. In practice, limited onboard computational resources present a challenge for online scheduling algorithms. This study considered the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment, specifically addressing urgent tasks that arrive stochastically during the scheduling horizon. We described the problem and proposed a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling triggered by a critical cumulative number of tasks, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of task, normal and urgent. We developed two heuristic (re)scheduling algorithms and compared them with other commonly used techniques. Computational experiments indicated that the percentage of urgent tasks scheduled under the proposed mechanism is much higher than under a periodic scheduling mechanism, and that performance depends strongly on several mechanism-relevant and task-relevant factors. For online scheduling, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.

  12. Advanced timeline systems

    NASA Technical Reports Server (NTRS)

    Bulfin, R. L.; Perdue, C. A.

    1994-01-01

    The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource constrained scheduling. For each experiment, a computer model with detailed information of the steps involved in running the experiment, including crew requirements, processing times, and resource requirements is created. These models are then loaded into the Experiment Scheduling Program (ESP) which attempts to create a schedule which satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). 
When there are two criteria, schedules that perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by solving an optimization problem. Generally there are two approaches: the first is hierarchical, while the second requires optimizing a weighting or scoring function.
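The dominance relation described above can be made concrete with a short sketch. Here lower values are taken as better on both criteria, which is an assumption of this illustration, not a statement about the MSFC scoring function.

```python
def dominates(a, b):
    """Score vector a dominates b if it is at least as good on every
    criterion and strictly better on at least one (lower = better here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(schedules):
    # Keep only schedules that no other schedule dominates.
    return [s for s in schedules
            if not any(dominates(t, s) for t in schedules if t is not s)]
```

    The hierarchical and weighted-scoring approaches mentioned above are two different ways of selecting a single schedule from this nondominated set.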

  13. The MER/CIP Portal for Ground Operations

    NASA Technical Reports Server (NTRS)

    Chan, Louise; Desai, Sanjay; DOrtenzio, Matthew; Filman, Robert E.; Heher, Dennis M.; Hubbard, Kim; Johan, Sandra; Keely, Leslie; Magapu, Vish; Mak, Ronald

    2003-01-01

    We developed the Mars Exploration Rover/Collaborative Information Portal (MER/CIP) to facilitate MER operations. MER/CIP provides a centralized, one-stop delivery platform integrating science and engineering data from several distributed heterogeneous data sources. Key issues for MER/CIP include: 1) Scheduling and schedule reminders; 2) Tracking the status of daily predicted outputs; 3) Finding and analyzing data products; 4) Collaboration; 5) Announcements; 6) Personalization.

  14. The Health Behavior Schedule-II for Diabetes Predicts Self-Monitoring of Blood Glucose

    ERIC Educational Resources Information Center

    Frank, Maxwell T.; Cho, Sungkun; Heiby, Elaine M.; Lee, Chun-I; Lahtela, Adrienne L.

    2006-01-01

    The Health Behavior Schedule-II for Diabetes (HBS-IID) is a 27-item questionnaire that was evaluated as a predictor of self-monitoring of blood glucose (SMBG). The HBS-IID was completed by 96 adults with Type 2 diabetes. Recent glycosylated hemoglobin (HbA1c) and fasting blood glucose results were taken from participants' medical records. Only 31.3%…

  15. A Comparison of Center/TRACON Automation System and Airline Time of Arrival Predictions

    NASA Technical Reports Server (NTRS)

    Heere, Karen R.; Zelenka, Richard E.

    2000-01-01

    Benefits from information sharing between an air traffic service provider and a major air carrier are evaluated. Aircraft arrival time schedules generated by the NASA/FAA Center/TRACON Automation System (CTAS) were provided to the American Airlines System Operations Control Center in Fort Worth, Texas, during a field trial of a specialized CTAS display. A statistical analysis indicates that the CTAS schedules, based on aircraft trajectories predicted from real-time radar and weather data, are substantially more accurate than the traditional airline arrival time estimates, constructed from flight plans and en route crew updates. The improvement offered by CTAS is especially advantageous during periods of heavy traffic and substantial terminal area delay, allowing the airline to avoid large predictive errors with serious impact on the efficiency and profitability of flight operations.

  16. Planning a Stigmatized Nonvisible Illness Disclosure: Applying the Disclosure Decision-Making Model

    PubMed Central

    Choi, Soe Yoon; Venetis, Maria K.; Greene, Kathryn; Magsamen-Conrad, Kate; Checton, Maria G.; Banerjee, Smita C.

    2016-01-01

    This study applied the disclosure decision-making model (DD-MM) to explore how individuals plan to disclose nonvisible illness (Study 1), compared to planning to disclose personal information (Study 2). Study 1 showed that perceived stigma from the illness negatively predicted disclosure efficacy; closeness predicted anticipated response (i.e., provision of support) although it did not influence disclosure efficacy; disclosure efficacy led to reduced planning, with planning leading to scheduling. Study 2 demonstrated that when information was considered to be intimate, it negatively influenced disclosure efficacy. Unlike the model with stigma (Study 1), closeness positively predicted both anticipated response and disclosure efficacy. The rest of the hypothesized relationships showed a similar pattern to Study 1: disclosure efficacy reduced planning, which then positively influenced scheduling. Implications of understanding stages of planning for stigmatized information are discussed. PMID:27662447

  17. Planning a Stigmatized Nonvisible Illness Disclosure: Applying the Disclosure Decision-Making Model.

    PubMed

    Choi, Soe Yoon; Venetis, Maria K; Greene, Kathryn; Magsamen-Conrad, Kate; Checton, Maria G; Banerjee, Smita C

    2016-11-16

    This study applied the disclosure decision-making model (DD-MM) to explore how individuals plan to disclose nonvisible illness (Study 1), compared to planning to disclose personal information (Study 2). Study 1 showed that perceived stigma from the illness negatively predicted disclosure efficacy; closeness predicted anticipated response (i.e., provision of support) although it did not influence disclosure efficacy; disclosure efficacy led to reduced planning, with planning leading to scheduling. Study 2 demonstrated that when information was considered to be intimate, it negatively influenced disclosure efficacy. Unlike the model with stigma (Study 1), closeness positively predicted both anticipated response and disclosure efficacy. The rest of the hypothesized relationships showed a similar pattern to Study 1: disclosure efficacy reduced planning, which then positively influenced scheduling. Implications of understanding stages of planning for stigmatized information are discussed.

  18. Interest Level in 2-Year-Olds with Autism Spectrum Disorder Predicts Rate of Verbal, Nonverbal, and Adaptive Skill Acquisition

    ERIC Educational Resources Information Center

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill…

  19. Heuristic approach to Satellite Range Scheduling with Bounds using Lagrangian Relaxation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nathanael J. K.; Arguello, Bryan; Nozick, Linda Karen

    This paper focuses on scheduling antennas to track satellites using a heuristic method. In order to validate the performance of the heuristic, bounds are developed using Lagrangian relaxation. The performance of the algorithm is established using several illustrative problems.

  20. On scheduling task systems with variable service times

    NASA Astrophysics Data System (ADS)

    Maset, Richard G.; Banawan, Sayed A.

    1993-08-01

    Several strategies have been proposed for developing optimal and near-optimal schedules for task systems (jobs consisting of multiple tasks that can be executed in parallel). Most such strategies, however, implicitly assume deterministic task service times. We show that these strategies are much less effective when service times are highly variable. We then evaluate two strategies—one adaptive, one static—that have been proposed for retaining high performance despite such variability. Both strategies are extensions of critical path scheduling, which has been found to be efficient at producing near-optimal schedules. We found the adaptive approach to be quite effective.
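    Critical path scheduling, as referenced above, prioritizes ready tasks by the length of their longest remaining dependency chain. A minimal list-scheduling sketch with deterministic service times (the task names, durations, and DAG below are invented for illustration, not taken from the paper):

```python
# Critical-path list scheduling on identical processors: a minimal sketch.
# Task names, durations, and the DAG are illustrative assumptions.

def critical_path_lengths(durations, succs):
    """Longest path (including own duration) from each task toward the sinks."""
    memo = {}
    def cp(t):
        if t not in memo:
            memo[t] = durations[t] + max((cp(s) for s in succs.get(t, [])), default=0)
        return memo[t]
    for t in durations:
        cp(t)
    return memo

def list_schedule(durations, succs, m):
    """Greedy list scheduling: always dispatch the ready task with the
    longest critical path to the earliest-free processor."""
    preds = {t: set() for t in durations}
    for t, ss in succs.items():
        for s in ss:
            preds[s].add(t)
    prio = critical_path_lengths(durations, succs)
    free = [0.0] * m            # next free time of each processor
    finish = {}                 # task -> finish time
    done = set()
    while len(done) < len(durations):
        ready = [t for t in durations if t not in done and preds[t] <= done]
        t = max(ready, key=lambda x: prio[x])          # longest critical path first
        p = min(range(m), key=lambda i: free[i])       # earliest-free processor
        start = max(free[p], max((finish[q] for q in preds[t]), default=0.0))
        finish[t] = start + durations[t]
        free[p] = finish[t]
        done.add(t)
    return finish

durations = {"a": 3, "b": 2, "c": 4, "d": 2}
succs = {"a": ["c"], "b": ["d"]}
finish = list_schedule(durations, succs, m=2)
makespan = max(finish.values())
```

    With highly variable (rather than deterministic) service times, the priorities computed up front can become stale, which is why the abstract's adaptive extension recomputes decisions as actual times are observed.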

  1. Solving a mathematical model integrating unequal-area facilities layout and part scheduling in a cellular manufacturing system by a genetic algorithm.

    PubMed

    Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi

    2016-01-01

    In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS), considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features, including part due dates, material handling time, operation sequences, processing times, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function minimizes makespan, tardiness penalties, and the material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved with the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, two approaches, sequential and concurrent, are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. Computational results indicate that the best solutions found by the GA are better than those found by B&B, in much less time, for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement averages around 17% with the GA and 14% with B&B.

  2. C/SCSC overview: approach, implementation, use. [Cost/Schedule Control Systems Criteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turf, Larry

    1979-01-01

    An overview of the Cost/Schedule Control System Criteria, known as C/SCS or C/S Squared, is presented. In the mid-1960s, several DOD service agencies embarked on a new performance measurement concept to track cost and schedule performance on major DOD programs. The performance measurement concept of C/SCS has expanded from DOD use to the Department of Energy (PMS), NASA (533 reports), and private industry such as shipbuilding, utilities, and construction. This paper describes the C/SCSC, covering the events leading to the C/SCS requirement and how to approach it, and discusses implementing and using the system. Many government publications, directives, and instructions on the subject are listed in the publication.

  3. Evaluation of performance of seasonal precipitation prediction at regional scale over India

    NASA Astrophysics Data System (ADS)

    Mohanty, U. C.; Nageswararao, M. M.; Sinha, P.; Nair, A.; Singh, A.; Rai, R. K.; Kar, S. C.; Ramesh, K. J.; Singh, K. K.; Ghosh, K.; Rathore, L. S.; Sharma, R.; Kumar, A.; Dhekale, B. S.; Maurya, R. K. S.; Sahoo, R. K.; Dash, G. P.

    2018-03-01

    The seasonal-scale precipitation amount is an important ingredient in planning most agricultural practices (such as the type of crops, and sowing and harvesting schedules). As India is an agroeconomic country, seasonal-scale prediction of precipitation is directly linked to the socioeconomic growth of the nation. At present, seasonal precipitation prediction at regional scale is a challenging task for the scientific community. In the present study, an attempt is made to develop a multi-model dynamical-statistical approach for seasonal precipitation prediction at the regional scale (meteorological subdivisions) over India for four prominent seasons: winter (December to February; DJF), pre-monsoon (March to May; MAM), summer monsoon (June to September; JJAS), and post-monsoon (October to December; OND). The present prediction approach is referred to as the extended range forecast system (ERFS). For this purpose, precipitation predictions from ten general circulation models (GCMs) are used, along with the India Meteorological Department (IMD) rainfall analysis data from 1982 to 2008, for evaluation of the performance of the GCMs, bias correction of the model results, and development of the ERFS. An extensive evaluation of the performance of the ERFS is carried out with dependent data (1982-2008) as well as independent predictions for the period 2009-2014. In general, the skill of the ERFS is consistently better, for all seasons and different regions over India, than that of the individual GCMs and their simple mean. The GCM products failed to capture the extreme precipitation years, whereas the bias-corrected GCM mean and the ERFS improved the prediction and represented the extremes well in the hindcast period. The peak intensity, as well as the regions of maximum precipitation, is better represented by the ERFS than by the individual GCMs. The study highlights the improvement in forecast skill of the ERFS over 34 meteorological subdivisions, as well as over India as a whole, during all four seasons.
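    The bias-correction and multi-model-mean steps described above can be sketched minimally as follows. The correction here is a simple mean shift against a training period, which is only one of several possible bias-correction methods, and the numbers are synthetic rather than IMD or GCM data:

```python
# Minimal sketch: mean-bias correction of each model against observations,
# followed by a multi-model mean. All data below are synthetic assumptions.

def bias_correct(hindcast, observed):
    """Shift a model's series so its training-period mean matches observations."""
    bias = sum(hindcast) / len(hindcast) - sum(observed) / len(observed)
    return [x - bias for x in hindcast]

def multi_model_mean(series_list):
    """Average the (bias-corrected) models year by year."""
    return [sum(vals) / len(vals) for vals in zip(*series_list)]

obs = [100.0, 120.0, 80.0]        # observed seasonal precipitation (synthetic)
gcm1 = [130.0, 150.0, 110.0]      # model with a wet bias of +30
gcm2 = [80.0, 100.0, 60.0]        # model with a dry bias of -20
corrected = [bias_correct(g, obs) for g in (gcm1, gcm2)]
ensemble = multi_model_mean(corrected)
```

    The actual ERFS weights and combines models more elaborately; this only illustrates why the bias-corrected multi-model mean can outperform any single raw GCM.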

  4. [Volumes of services supplied by Italian Stop-Smoking Services and their characteristics predictive of abstinence].

    PubMed

    Gorini, Giuseppe; Ameglio, Matteo; Martini, Andrea; Bosi, Sandra; Laezza, Maurizio

    2013-01-01

    Objectives: to evaluate differences in smokers' attendance at National Health System (NHS) Stop-Smoking Services with a prevalent individual approach (SSSi) and at those with a prevalent group approach (SSSg), and to identify characteristics predictive of success, in terms of quit rates at the end of treatment (QR0) and after 6 months (QR1), according to SSS type (SSSi/SSSg); treatment (individual/group counseling with/without pharmacologic treatment); 5 SSS scores: type of structure (S), number and hours per week of SSS health professionals (P), SSS involvement in local tobacco control networks (N), and type of smokers' assessment (A); and 3 principal components of SSS characteristics. Design: a survey of 19 SSSs, and a survey of smokers attending these SSSs, with a six-month follow-up. Participants: 1,276 smokers attending 19 SSSs (664 at 7 SSSi; 612 at 12 SSSg) over 9 months in the period 2008-2010. Outcomes: smokers' attendance at scheduled sessions; QR0; QR1. Results: even though SSSi treated more smokers per month (12 vs. 8 in SSSg), SSSi scheduled fewer treatment sessions (7 vs. 9) over a longer treatment period (3 months vs. 2 in SSSg). SSSg recorded lower P and higher A scores. Four out of 5 smokers attending SSSg and 2 out of 5 attending SSSi completed treatment protocols. Considering all smokers, QR1 in both types of SSS was around 36%. Smokers treated with pharmacotherapy, those more motivated and with high self-efficacy, and those not living with smokers were more likely to record higher QR1. Conclusions: the most relevant interventions to increase the number of smokers treated at SSSs and to improve their cessation rates were: for SSSi, increasing completion of the treatment protocol; for SSSg, improving P scores to increase the number of treated smokers; for all SSSs, increasing the use of pharmacotherapy in combination with individual/group counseling to sustain abstinence.

  5. IOPS advisor: Research in progress on knowledge-intensive methods for irregular operations airline scheduling

    NASA Technical Reports Server (NTRS)

    Borse, John E.; Owens, Christopher C.

    1992-01-01

    Our research focuses on the problem of recovering from perturbations in large-scale schedules, specifically on the ability of a human-machine partnership to dynamically modify an airline schedule in response to unanticipated disruptions. This task is characterized by massive interdependencies and a large space of possible actions. Our approach combines qualitative, knowledge-intensive techniques, relying on a memory of stereotypical failures and appropriate recoveries, with quantitative techniques drawn from the Operations Research community's work on scheduling. Our main scientific challenge is to represent schedules, failures, and repairs so as to make both sets of techniques applicable to the same data. This paper outlines ongoing research in which we are cooperating with United Airlines to develop our understanding of the scientific issues underlying the practicalities of dynamic, real-time schedule repair.

  6. Assessment of New Load Schedules for the Machine Calibration of a Force Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Gisler, R.; Kew, R.

    2015-01-01

    New load schedules for the machine calibration of a six-component force balance are currently being developed and evaluated at the NASA Ames Balance Calibration Laboratory. One of the proposed load schedules is discussed in the paper. It has a total of 2082 points that are distributed across 16 load series. Several criteria were applied to define the load schedule. It was decided, for example, to specify the calibration load set in force balance format as this approach greatly simplifies the definition of the lower and upper bounds of the load schedule. In addition, all loads are assumed to be applied in a calibration machine by using the one-factor-at-a-time approach. At first, all single-component loads are applied in six load series. Then, three two-component load series are applied. They consist of the load pairs (N1, N2), (S1, S2), and (RM, AF). Afterwards, four three-component load series are applied. They consist of the combinations (N1, N2, AF), (S1, S2, AF), (N1, N2, RM), and (S1, S2, RM). In the next step, one four-component load series is applied. It is the load combination (N1, N2, S1, S2). Finally, two five-component load series are applied. They are the load combination (N1, N2, S1, S2, AF) and (N1, N2, S1, S2, RM). The maximum difference between loads of two subsequent data points of the load schedule is limited to 33 % of capacity. This constraint helps avoid unwanted load "jumps" in the load schedule that can have a negative impact on the performance of a calibration machine. Only loadings of the single- and two-component load series are loaded to 100 % of capacity. This approach was selected because it keeps the total number of calibration points to a reasonable limit while still allowing for the application of some of the more complex load combinations. Data from two of NASA's force balances is used to illustrate important characteristics of the proposed 2082-point calibration load schedule.
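    The 33%-of-capacity step constraint between consecutive points of a load series can be checked mechanically. A sketch, using the balance component names from the paper but invented load values:

```python
# Check the maximum-step constraint of a calibration load schedule: no
# component may change by more than 33% of its capacity between two
# subsequent data points. Load values below are invented for illustration.

def max_step_ok(series, capacity, limit=0.33):
    """True if no component jumps more than limit*capacity between points."""
    for prev, cur in zip(series, series[1:]):
        for comp in cur:
            if abs(cur[comp] - prev.get(comp, 0.0)) > limit * capacity[comp] + 1e-9:
                return False
    return True

capacity = {"N1": 3000.0, "N2": 3000.0}     # hypothetical capacities, lbs
# a single-component N1 series, stepped in 25% increments up to 100% of capacity
series = [{"N1": f * 3000.0, "N2": 0.0} for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
ok = max_step_ok(series, capacity)           # 25% steps satisfy the 33% limit
bad = [{"N1": 0.0}, {"N1": 1200.0}]          # a 40% jump violates it
not_ok = max_step_ok(bad, capacity)
```

    Such a check makes it easy to verify that a proposed multi-component series avoids the load "jumps" that degrade calibration-machine performance.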

  7. Scheduling Anesthesia Time Reduces Case Cancellations and Improves Operating Room Workflow in a University Hospital Setting.

    PubMed

    van Veen-Berkx, Elizabeth; van Dijk, Menno V; Cornelisse, Diederich C; Kazemier, Geert; Mokken, Fleur C

    2016-08-01

    A new method of scheduling anesthesia-controlled time (ACT) was implemented on July 1, 2012 in an academic inpatient operating room (OR) department. This study examined the relationship between this new scheduling method and OR performance. The new method comprised the development of predetermined time frames per anesthetic technique based on historical data of the actual time needed for anesthesia induction and emergence. Seven "anesthesia scheduling packages" (0 to 6) were established. Several options based on the quantity of anesthesia monitoring and the complexity of the patient were differentiated in time within each package. This was a quasi-experimental time-series design. Relevant data were divided into 4 equal periods of time. These time periods were compared with ANOVA with contrast analysis: an intervention, pre-intervention, and post-intervention contrast were tested. All emergency cases were excluded. A total of 34,976 inpatient elective cases performed from January 1, 2010 to December 31, 2014 were included for statistical analyses. The intervention contrast showed a significant decrease (p < 0.001) of 4.5% in the prediction error. The total number of cancellations decreased to 19.9%. The ANOVA with contrast analyses showed no significant differences with respect to under- and over-used OR time and raw use. Unanticipated benefits also emerged from this study, allowing for a smoother workflow: e.g., anesthesia nurses know exactly which medical equipment and devices need to be assembled and tested beforehand, based on the scheduled anesthesia package. Scheduling the 2 major components of a procedure (anesthesia- and surgeon-controlled time) more accurately leads to fewer case cancellations, lower prediction errors, and smoother OR workflow in a university hospital setting. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  8. Spike: Artificial intelligence scheduling for Hubble space telescope

    NASA Technical Reports Server (NTRS)

    Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert

    1990-01-01

    Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem since a yearlong observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among competing preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described along with some experiments in adapting Spike to other spacecraft scheduling domains.

  9. Laboratory services series: a programmed maintenance system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuxbury, D.C.; Srite, B.E.

    1980-01-01

    The diverse facilities, operations and equipment at a major national research and development laboratory require a systematic, analytical approach to operating equipment maintenance. A computer-scheduled preventive maintenance program is described including program development, equipment identification, maintenance and inspection instructions, scheduling, personnel, and equipment history.

  10. State Regulation of Heliport Design

    DTIC Science & Technology

    2001-05-01

    Arpt Lgt Sked: Dusk-Dawn 81. Schedule for beacon; if lights on different from beacon list as a remark. If no beacon list light schedule. 82. Unicorn ...continued importance of hospital heliports and the rapidly growing use of instrument approach/departure procedures at such sites, is it appropriate

  11. Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores

    PubMed Central

    Kim, Youngmin; Lee, Chan-Gun

    2017-01-01

    In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost, and the use of such batteries is closely tied to the lifetime of the sensor nodes. Efficient energy management is therefore important for extending sensor node lifetime. Most efforts to improve power efficiency in tiny sensor nodes have focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multiple cores demands attention to the problem of reducing power consumption in multi-core processors. In this paper, we propose an energy-efficient scheduling method for sensor nodes with a uniform multi-core processor. We extend T-Ler plane based scheduling, which provides globally optimal scheduling on uniform multi-cores and multiprocessors, to support power management using dynamic power management (DPM). In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize DPM. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695

  12. Effects of novelty and methamphetamine on conditioned and sensory reinforcement

    PubMed Central

    Lloyd, David R.; Kausch, Michael A.; Gancarz, Amy M.; Beyley, Linda J.; Richards, Jerry B.

    2012-01-01

    Background: Light onset can be both a sensory reinforcer (SR), with intrinsic reinforcing properties, and a conditioned reinforcer (CR), which predicts a biologically important reinforcer. Stimulant drugs, such as methamphetamine (METH), may increase the reinforcing effectiveness of CRs by enhancing the predictive properties of the CR. In contrast, METH-induced increases in the reinforcing effectiveness of SRs are mediated by the immediate sensory consequences of the light. Methods: The effects of novelty (on SRs) and METH (on both CRs and SRs) were tested. Experiment 1: Rats were pre-exposed to 5 s light and water pairings presented according to a variable-time (VT) 2 min schedule, or to unpaired water and light presented according to independent, concurrent VT 2 min schedules. Experiment 2: Rats were pre-exposed to 5 s light presented according to a VT 2 min schedule, or to no stimuli. In both experiments, the pre-exposure phase was followed by a test phase in which 5 s light onset was made response-contingent on a variable-interval (VI) 2 min schedule and the effects of METH (0.5 mg/kg) were determined. Results: Novel light onset was a more effective reinforcer than familiar light onset. METH increased the absolute rate of responding without increasing the relative frequency of responding for both CRs and SRs. Conclusion: Novelty plays a role in determining the reinforcing effectiveness of SRs. The results are consistent with the interpretation that METH-induced increases in the reinforcer effectiveness of CRs and SRs may be mediated by immediate sensory consequences, rather than prediction. PMID:22814112
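    A VT schedule like the one above delivers stimuli at variable, response-independent times with a fixed mean. One common way to sketch such a generator is with exponentially distributed inter-event intervals; that distributional choice is an assumption here, not the paper's stated method:

```python
# Sketch of a variable-time (VT) event generator, e.g. VT 2 min = mean 120 s
# between light onsets. Exponential intervals are an assumed implementation.

import random

def vt_event_times(mean_s, session_s, rng):
    """Event onset times for one session of a VT schedule."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_s)   # mean inter-event interval = mean_s
        if t >= session_s:
            return times
        times.append(t)

rng = random.Random(42)                      # seeded for reproducibility
events = vt_event_times(mean_s=120.0, session_s=3600.0, rng=rng)
# a one-hour session yields roughly 3600/120 = 30 events on average
```

    A VI schedule differs only in that the stimulus, once "armed" by the elapsed interval, is withheld until the next response.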

  13. Efficient Trajectory Options Allocation for the Collaborative Trajectory Options Program

    NASA Technical Reports Server (NTRS)

    Rodionova, Olga; Arneson, Heather; Sridhar, Banavar; Evans, Antony

    2017-01-01

    The Collaborative Trajectory Options Program (CTOP) is a Traffic Management Initiative (TMI) intended to control the air traffic flow rates at multiple specified Flow Constrained Areas (FCAs), where demand exceeds capacity. CTOP allows flight operators to submit a desired Trajectory Options Set (TOS) for each affected flight, with an associated Relative Trajectory Cost (RTC) for each option. CTOP then creates a feasible schedule that complies with capacity constraints by assigning routes and departure delays to affected flights in such a way as to minimize total cost while maintaining equity across flight operators. The current version of CTOP implements a Ration-by-Schedule (RBS) scheme, which assigns the best available options to flights based on a First-Scheduled-First-Served heuristic. In the present study, an alternative flight scheduling approach is developed based on linear optimization. Results suggest that such an approach can significantly reduce flight delays, in the deterministic case, while maintaining equity as defined using a Max-Min fairness scheme.
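    The Ration-by-Schedule heuristic described above can be sketched as a greedy pass over flights in scheduled order, giving each flight its cheapest (lowest-RTC) option whose FCA capacity slot is still available. The flight identifiers, FCA names, and slot counts below are invented:

```python
# Sketch of a First-Scheduled-First-Served (Ration-by-Schedule) allocation of
# trajectory options. All flight/FCA data are illustrative assumptions.

def ration_by_schedule(flights, slots):
    """flights: list of (flight_id, sched_time, [(rtc, fca), ...]).
    slots: dict fca -> number of available capacity slots."""
    remaining = dict(slots)
    assignment = {}
    for fid, _, options in sorted(flights, key=lambda f: f[1]):  # by schedule
        for rtc, fca in sorted(options):          # cheapest option first
            if remaining.get(fca, 0) > 0:
                remaining[fca] -= 1
                assignment[fid] = fca
                break
        else:
            assignment[fid] = None                # no feasible option remains
    return assignment

flights = [
    ("AAL1", 10, [(0, "FCA_A"), (5, "FCA_B")]),
    ("UAL2", 20, [(0, "FCA_A"), (3, "FCA_B")]),
    ("DAL3", 30, [(0, "FCA_A"), (4, "FCA_B")]),
]
assignment = ration_by_schedule(flights, {"FCA_A": 2, "FCA_B": 1})
```

    The linear-optimization alternative studied in the paper instead chooses all assignments jointly, which is what allows it to reduce total delay relative to this greedy baseline.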

  14. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph, instead of using the conventional neural network approaches to solve the SBS problem. Within this factor graph framework, soft information, describing the probability that each satellite will broadcast to a terminal in a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.

  15. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no known efficient algorithm for reaching the optimal solution of the problem. To minimize the holding, delay, and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions, and also uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. We consider a make-to-order production setting in which some sequences between jobs are treated as tabu, based on a maximum allowable setup cost. The results are compared with some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to the accuracy and efficiency of its solutions.
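    Any GA for the FSP must repeatedly evaluate the makespan of candidate permutations. A minimal evaluator for a permutation flow shop is sketched below; sequence-dependent setup times are omitted for brevity, and the job data are invented:

```python
# Makespan evaluation for a permutation flow shop: every machine processes the
# jobs in the same order, and a job starts on machine k only after it finishes
# on machine k-1 and machine k is free. Job data are illustrative assumptions.

def makespan(sequence, proc):
    """proc[j][k] = processing time of job j on machine k."""
    m = len(next(iter(proc.values())))
    finish = [0.0] * m              # completion time of the last job per machine
    for j in sequence:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k > 0 else 0.0)
            finish[k] = start + proc[j][k]
    return finish[-1]

# two jobs, two machines: exhaustive comparison of both permutations
proc = {"J1": [3, 2], "J2": [1, 4]}
best = min((makespan(seq, proc), seq) for seq in (["J1", "J2"], ["J2", "J1"]))
```

    In a GA this evaluator is the fitness function; crossover and mutation merely propose new permutations for it to score.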

  16. Routing and scheduling of hazardous materials shipments: algorithmic approaches to managing spent nuclear fuel transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, R.G.

    Much controversy surrounds government regulation of the routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against the expected benefits of local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide context for the analysis. Next, a multiobjective shortest path algorithm is presented that finds the set of efficient routes under conflicting objectives. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Scheduling algorithms are also presented to estimate the travel time delay due to HMT curfews along a route, assuming either deterministic or stochastic travel times between curfew cities, and allowing for possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.
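    The multiobjective shortest path idea can be sketched with Pareto-label propagation: a route survives only if no other route is at least as good on every objective. The two-objective network below (travel time, population exposure) is invented for illustration:

```python
# Pareto-label propagation for a bi-objective shortest path problem: keep, at
# each node, only the non-dominated (time, exposure) cost vectors. The small
# acyclic network below is an invented example.

def efficient_labels(graph, source, target):
    """graph: node -> list of (neighbor, (cost1, cost2)). Pareto set at target."""
    labels = {source: [(0, 0)]}
    frontier = [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v, (c1, c2) in graph.get(u, []):
                for l1, l2 in labels[u]:
                    cand = (l1 + c1, l2 + c2)
                    cur = labels.setdefault(v, [])
                    if any(a <= cand[0] and b <= cand[1] for a, b in cur):
                        continue                    # dominated: discard
                    cur[:] = [(a, b) for a, b in cur
                              if not (cand[0] <= a and cand[1] <= b)]
                    cur.append(cand)                # new non-dominated label
                    nxt.append(v)
        frontier = nxt
    return sorted(labels.get(target, []))

graph = {
    "s": [("a", (1, 9)), ("b", (4, 1))],
    "a": [("t", (1, 9))],
    "b": [("t", (4, 1))],
}
pareto = efficient_labels(graph, "s", "t")   # fast/exposed vs. slow/safe routes
```

    The paper's algorithm is more general (any partial ordering, single network pass); this sketch only shows why conflicting objectives yield a *set* of efficient routes rather than one.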

  17. Non preemptive soft real time scheduler: High deadline meeting rate on overload

    NASA Astrophysics Data System (ADS)

    Khalib, Zahereel Ishwar Abdul; Ahmad, R. Badlishah; El-Shaikh, Mohamed

    2015-05-01

    While preemptive scheduling has gained more attention among researchers, recent work in non-preemptive scheduling has shown promising results for scheduling soft real-time jobs. In this paper we present a non-preemptive scheduling algorithm intended for soft real-time applications, which is capable of producing better performance during overload while maintaining excellent performance during normal load. The approach taken by this algorithm has shown more promising results than other algorithms, including its immediate predecessor. We present the analysis made prior to the inception of the algorithm, as well as simulation results comparing our algorithm, named gutEDF, with EDF and gEDF. We are convinced that grouping jobs using purely dynamic parameters produces better performance.
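    Plain non-preemptive EDF, the baseline against which gutEDF is compared, can be sketched as follows. The jobs and parameters are invented, and this is the textbook algorithm rather than the paper's grouping variant:

```python
# Non-preemptive EDF sketch: at each dispatch point, the ready job with the
# earliest deadline runs to completion. Job parameters are invented.

import heapq

def np_edf(jobs):
    """jobs: list of (release, exec_time, deadline). Returns deadlines met."""
    pending = sorted(jobs)                 # by release time
    ready, t, met, i = [], 0.0, 0, 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= t:
            r, c, d = pending[i]
            heapq.heappush(ready, (d, r, c))
            i += 1
        if not ready:
            t = pending[i][0]              # idle until the next release
            continue
        d, r, c = heapq.heappop(ready)     # earliest deadline first
        t += c                             # runs non-preemptively to completion
        if t <= d:
            met += 1
    return met

# a long early job blocks an urgent one: non-preemption costs a deadline here
jobs = [(0, 3, 10), (1, 2, 4), (2, 1, 20)]
met = np_edf(jobs)
```

    The example shows the classic overload-like failure mode of non-preemptive dispatching (a running job cannot yield to a more urgent arrival), which is the behaviour the paper's algorithm aims to improve on.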

  18. Scheduling the resident 80-hour work week: an operations research algorithm.

    PubMed

    Day, T Eugene; Napoli, Joseph T; Kuo, Paul C

    2006-01-01

    The resident 80-hour work week requires that programs now schedule duty hours. Typically, scheduling is performed in an empirical "trial-and-error" fashion. However, this is a classic "scheduling" problem from the field of operations research (OR). It is similar to scheduling issues that airlines must face with pilots and planes routing through various airports at various times. The authors hypothesized that an OR approach using iterative computer algorithms could provide a rational scheduling solution. Institution-specific constraints of the residency problem were formulated. A total of 56 residents are rotating through 4 hospitals. Additional constraints were dictated by the Residency Review Committee (RRC) rules or the specific surgical service. For example, at Hospital 1, during the weekday hours between 6 am and 6 pm, there will be a PGY4 or PGY5 and a PGY2 or PGY3 on-duty to cover Service "A." A series of equations and logic statements was generated to satisfy all constraints and requirements. These were restated in the Optimization Programming Language used by the ILOG software suite for solving mixed integer programming problems. An integer programming solution was generated to this resource-constrained assignment problem. A total of 30,900 variables and 12,443 constraints were required. A total of man-hours of programming were used; computer run-time was 25.9 hours. A weekly schedule was generated for each resident that satisfied the RRC regulations while fulfilling all stated surgical service requirements. Each required between 64 and 80 weekly resident duty hours. The authors conclude that OR is a viable approach to schedule resident work hours. This technique is sufficiently robust to accommodate changes in resident numbers, service requirements, and service and hospital rotations.
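    The resource-constrained assignment structure described above can be illustrated at toy scale. Real instances (30,900 variables, 12,443 constraints) need a mixed-integer-programming solver such as the ILOG suite; the small backtracking search below only shows the shape of the constraints, and the pattern names and hours are invented:

```python
# Toy resident-scheduling sketch: assign one weekly shift pattern per resident
# so that day coverage is met and no pattern exceeds 80 hours. All pattern
# data are invented; real instances require a MIP solver.

def assign(residents, patterns, need_day_cover):
    """patterns: name -> (weekly_hours, covers_day)."""
    feasible = {p: v for p, v in patterns.items() if v[0] <= 80}  # 80-hour rule
    def bt(i, chosen, day_cover):
        remaining = len(residents) - i
        if day_cover + remaining < need_day_cover:
            return None                    # coverage unreachable: prune
        if i == len(residents):
            return dict(chosen)
        for p, (hours, covers_day) in feasible.items():
            chosen[residents[i]] = p
            out = bt(i + 1, chosen, day_cover + (1 if covers_day else 0))
            if out is not None:
                return out
            del chosen[residents[i]]
        return None
    return bt(0, {}, 0)

patterns = {"days_72h": (72, True), "nights_64h": (64, False), "long_90h": (90, True)}
schedule = assign(["R1", "R2", "R3"], patterns, need_day_cover=2)
```

    In the integer-programming formulation, each (resident, pattern) pair becomes a binary variable, the 80-hour rule and coverage requirements become linear constraints, and the solver replaces this explicit search.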

  19. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    PubMed Central

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like an increasing carbon footprint. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that, with the help of a temperature predictor, considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing VM scheduling systems, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
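    The proactive rule described above (never schedule a VM onto a server whose predicted temperature would reach its threshold) might be sketched as follows. The linear temperature predictor and all numbers are invented assumptions, not the paper's model:

```python
# Sketch of proactive thermal-aware VM placement: predict each server's
# post-placement temperature and pick the coolest server that stays below
# its threshold. The linear predictor and all values are invented.

def predict_temp(current, load_increase, coeff=0.5):
    """Assumed linear model: each unit of added load raises temperature by coeff."""
    return current + coeff * load_increase

def place_vm(sms, vm_load):
    """sms: list of (name, current_temp, max_temp). Coolest feasible SM, or None."""
    feasible = [(predict_temp(t, vm_load), name)
                for name, t, t_max in sms
                if predict_temp(t, vm_load) < t_max]
    return min(feasible)[1] if feasible else None

sms = [("SM1", 68.0, 70.0), ("SM2", 55.0, 70.0), ("SM3", 64.0, 70.0)]
chosen = place_vm(sms, vm_load=10.0)   # SM1 would cross its 70° threshold
```

    A reactive scheduler would have considered SM1 (currently coolest-running among loaded servers is irrelevant here, but it is below threshold *now*); the proactive check rejects it because its *predicted* temperature exceeds the limit.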

  20. Active Job Monitoring in Pilots

    NASA Astrophysics Data System (ADS)

    Kuehn, Eileen; Fischer, Max; Giffels, Manuel; Jung, Christopher; Petzold, Andreas

    2015-12-01

    Recent developments in high energy physics (HEP), including multi-core jobs and multi-core pilots, require data centres to gain a deep understanding of the system in order to monitor, design, and upgrade computing clusters. Networking is a critical component. In particular, the increased usage of data federations, for example in diskless computing centres or as a fallback solution, relies on WAN connectivity and availability. The specific demands of different experiments and communities, as well as the need to identify misbehaving batch jobs, require active monitoring. Existing monitoring tools are not capable of measuring fine-grained information at the batch job level. This complicates network-aware scheduling and optimisation. In addition, pilots add another layer of abstraction: they behave like batch systems themselves by managing and executing payloads of jobs internally. The number of real jobs being executed is unknown, as the original batch system has no access to internal information about the scheduling process inside the pilots. Therefore, the comparability of jobs and pilots for predicting run-time behaviour or network performance cannot be ensured, and identifying the actual payload is important. At the GridKa Tier 1 centre a specific tool is in use that allows the monitoring of network traffic information at the batch job level. This contribution presents the current monitoring approach, discusses recent efforts to identify pilots and their substructures inside the batch system and why this identification matters, and shows how to determine monitoring data of specific jobs from identified pilots. Finally, the approach is evaluated.

  1. Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models.

    PubMed

    Penny, Melissa A; Verity, Robert; Bever, Caitlin A; Sauboin, Christophe; Galactionova, Katya; Flasche, Stefan; White, Michael T; Wenger, Edward A; Van de Velde, Nicolas; Pemberton-Ross, Peter; Griffin, Jamie T; Smith, Thomas A; Eckhoff, Philip A; Muhib, Farzana; Jit, Mark; Ghani, Azra C

    2016-01-23

    The phase 3 trial of the RTS,S/AS01 malaria vaccine candidate showed modest efficacy of the vaccine against Plasmodium falciparum malaria, but was not powered to assess mortality endpoints. Impact projections and cost-effectiveness estimates for longer timeframes than the trial follow-up and across a range of settings are needed to inform policy recommendations. We aimed to assess the public health impact and cost-effectiveness of routine use of the RTS,S/AS01 vaccine in African settings. We compared four malaria transmission models and their predictions to assess vaccine cost-effectiveness and impact. We used trial data for follow-up of 32 months or longer to parameterise vaccine protection in the group aged 5-17 months. Estimates of cases, deaths, and disability-adjusted life-years (DALYs) averted were calculated over a 15 year time horizon for a range of levels of Plasmodium falciparum parasite prevalence in 2-10 year olds (PfPR2-10; range 3-65%). We considered two vaccine schedules: three doses at ages 6, 7·5, and 9 months (three-dose schedule, 90% coverage) and including a fourth dose at age 27 months (four-dose schedule, 72% coverage). We estimated cost-effectiveness in the presence of existing malaria interventions for vaccine prices of US$2-10 per dose. In regions with a PfPR2-10 of 10-65%, RTS,S/AS01 is predicted to avert a median of 93,940 (range 20,490-126,540) clinical cases and 394 (127-708) deaths for the three-dose schedule, or 116,480 (31,450-160,410) clinical cases and 484 (189-859) deaths for the four-dose schedule, per 100,000 fully vaccinated children. A positive impact is also predicted at a PfPR2-10 of 5-10%, but there is little impact at a prevalence of lower than 3%. 
At $5 per dose and a PfPR2-10 of 10-65%, we estimated a median incremental cost-effectiveness ratio compared with current interventions of $30 (range 18-211) per clinical case averted and $80 (44-279) per DALY averted for the three-dose schedule, and of $25 (16-222) and $87 (48-244), respectively, for the four-dose schedule. Higher ICERs were estimated at low PfPR2-10 levels. We predict a significant public health impact and high cost-effectiveness of the RTS,S/AS01 vaccine across a wide range of settings. Decisions about implementation will need to consider levels of malaria burden, the cost-effectiveness and coverage of other malaria interventions, health priorities, financing, and the capacity of the health system to deliver the vaccine. PATH Malaria Vaccine Initiative; Bill & Melinda Gates Foundation; Global Good Fund; Medical Research Council; UK Department for International Development; GAVI, the Vaccine Alliance; WHO. Copyright © 2016 Penny et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd. All rights reserved.
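The cost-effectiveness figures follow from the standard definition of an incremental cost-effectiveness ratio (ICER). A minimal sketch; the incremental cost below is back-calculated from the reported median of $30 per case averted and 93,940 cases, purely to illustrate the arithmetic.

```python
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra cost divided by
    extra health effect (cases or DALYs averted) vs. the comparator."""
    return incremental_cost / incremental_effect

# 93,940 median cases averted per 100,000 fully vaccinated children
# (three-dose schedule); the $2,818,200 incremental cost is back-
# calculated from the reported $30 per case, for illustration only.
cost_per_case = icer(2_818_200, 93_940)   # -> 30.0
```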

  3. Defense AT&L (Volume 35, Number 5, September-October 2006)

    DTIC Science & Technology

    2006-10-01

    percent of production. The critical path elements driving the IOT&E schedule are not production hardware...reduced costs, and successful completion of work in the scheduled time. The Commodity Approach to Aircraft Protection Systems, Capt. Bill Chubb, USN. The...piece of Littoral Combat Ship Two during the ship's keel laying ceremony. The Navy's second Littoral Combat Ship is scheduled for commissioning in

  4. Understanding Acquisition Cycle Time: Focusing the Research Problem

    DTIC Science & Technology

    2013-11-01

    Browning, Tyson R., and Steven D. Eppinger. “Modeling Impacts of Process Architecture on Cost and Schedule Risk in Product Development.” IEEE...2009. Clark, Kim, and Steven Wheelwright. Revolutionizing Development: Quantum Leaps in Speed, Efficiency and Quality. New York, NY: The Free Press...1992. Cross, Steven M. Data Analysis and its Impact on Predicting Schedule and Cost Risk. AFIT/GIR/ENC/06M-01. Wright-Patterson AFB OH: AFIT

  5. Benefit Opportunities for Integrated Surface and Airspace Departure Scheduling: A Study of Operations at Charlotte-Douglas International Airport

    NASA Technical Reports Server (NTRS)

    Coppenbarger, Rich; Jung, Yoon; Kozon, Tom; Farrahi, Amir; Malik, Wakar; Lee, Hanbong; Chevalley, Eric; Kistler, Matt

    2016-01-01

    NASA is collaborating with the FAA and aviation industry to develop and demonstrate new capabilities that integrate arrival, departure, and surface air-traffic operations. The concept relies on trajectory-based departure scheduling and collaborative decision making to reduce delays and uncertainties in taxi and climb operations. The paper describes the concept and benefit mechanisms aimed at improving flight efficiency and predictability while maintaining or improving operational throughput. The potential impact of the technology is studied and discussed through a quantitative analysis of relevant shortfalls at the site identified for initial deployment and demonstration in 2017: Charlotte-Douglas International Airport. Results from trajectory analysis indicate substantial opportunity to reduce taxi delays for both departures and arrivals by metering departures at the gate in a manner that maximizes throughput while adhering to takeoff restrictions due mostly to airspace constraints. Substantial taxi-out delay reduction is shown for flights subject to departure restrictions stemming from traffic flow management initiatives. Opportunities to improve the predictability of taxi, takeoff, and climb operations are examined and their potential impact on airline scheduling decisions and air-traffic forecasting is discussed. In addition, the potential to improve throughput with departure scheduling that maximizes use of available runway and airspace capacity is analyzed.

  6. Model-based optimization of G-CSF treatment during cytotoxic chemotherapy.

    PubMed

    Schirm, Sibylle; Engel, Christoph; Loibl, Sibylle; Loeffler, Markus; Scholz, Markus

    2018-02-01

    Although G-CSF is widely used to prevent or ameliorate leukopenia during cytotoxic chemotherapies, its optimal use is still under debate and depends on many therapy parameters such as dosing and timing of cytotoxic drugs and G-CSF, G-CSF pharmaceuticals used and individual risk factors of patients. We integrate available biological knowledge and clinical data regarding cell kinetics of bone marrow granulopoiesis, the cytotoxic effects of chemotherapy and pharmacokinetics and pharmacodynamics of G-CSF applications (filgrastim or pegfilgrastim) into a comprehensive model. The model explains leukocyte time courses of more than 70 therapy scenarios comprising 10 different cytotoxic drugs. It is applied to develop optimized G-CSF schedules for a variety of clinical scenarios. Clinical trial results showed validity of model predictions regarding alternative G-CSF schedules. We propose modifications of G-CSF treatment for the chemotherapies 'BEACOPP escalated' (Hodgkin's disease), 'ETC' (breast cancer), and risk-adapted schedules for 'CHOP-14' (aggressive non-Hodgkin's lymphoma in elderly patients). We conclude that we established a model of human granulopoiesis under chemotherapy which allows predictions of yet untested G-CSF schedules, comparisons between them, and optimization of filgrastim and pegfilgrastim treatment. As a general rule of thumb, G-CSF treatment should not be started too early and patients could profit from filgrastim treatment continued until the end of the chemotherapy cycle.

  7. Improvement to Airport Throughput Using Intelligent Arrival Scheduling and an Expanded Planning Horizon

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia C.

    2012-01-01

    The first phase of this study investigated the amount of time a flight can be delayed or expedited within the Terminal Airspace using only speed changes. The Arrival Capacity Calculator (ACC) analysis tool was used to predict the time adjustment envelope for standard descent arrivals and then for CDA arrivals. Results ranged from 0.77 to 5.38 minutes. STAR routes were configured for the ACES simulation, and the ACC results were validated by comparing the maximum predicted time adjustments to those seen in ACES. The final phase investigated full runway-to-runway trajectories using ACES. The radial distance used by the arrival scheduler was incrementally increased from 50 to 150 nautical miles (nmi). The increased Planning Horizon radii allowed the arrival scheduler to arrange, path-stretch, and speed-adjust flights to more fully load the arrival stream. The average throughput for the high-volume portion of the day increased from 30 aircraft per runway for the 50 nmi radius to 40 aircraft per runway for the 150 nmi radius, for a traffic set representative of high 2018 volumes. The recommended radius for the arrival scheduler's Planning Horizon was found to be 130 nmi, which allowed more than 95% loading of the arrival stream.

  8. Bioreactor design for successive culture of anchorage-dependent cells operated in an automated manner.

    PubMed

    Kino-Oka, Masahiro; Ogawa, Natsuki; Umegaki, Ryota; Taya, Masahito

    2005-01-01

    A novel bioreactor system was designed to perform a series of batchwise cultures of anchorage-dependent cells by means of automated operations of medium change and passage for cell transfer. Experimental data on contamination frequency confirmed the biological cleanliness of the bioreactor system, which performs its operations in a closed environment, as compared with a flask culture system with manual handling. In addition, tools for growth prediction (based on growth kinetics) and real-time growth monitoring by measurement of medium components (based on small-volume analyzing machinery) were installed in the bioreactor system, to schedule the operations of medium change and passage and to confirm that the culture proceeds as scheduled, respectively. The successive culture of anchorage-dependent cells was conducted with the bioreactor running in an automated way. The automated bioreactor gave a successful culture performance in close accordance with the schedule preset from information in the latest subculture, realizing a 79-fold cell expansion over 169 h. The correlation factor between experimental data and scheduled values over the bioreactor run was 0.998. It was concluded that the proposed bioreactor, with its integrated prediction and monitoring tools, could offer a feasible system for the manufacturing of cultured tissue products.
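The growth-prediction tool used to schedule medium change and passage can be illustrated with simple exponential-growth kinetics. The doubling time below is an assumption back-calculated from the reported 79-fold expansion in 169 h, not a value stated by the authors.

```python
import math

def hours_to_target(n0, n_target, doubling_time_h):
    """Exponential-growth sketch of a kinetic predictor: time for the
    cell number to grow from n0 to n_target at a constant doubling time."""
    return doubling_time_h * math.log2(n_target / n0)

# Assumed doubling time of ~26.8 h, inferred from the reported
# 79-fold expansion in 169 h; used here to schedule the next passage.
passage_after_h = hours_to_target(1.0, 79.0, 26.8)   # close to 169 h
```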

  9. Optimizing energy for a ‘green’ vaccine supply chain

    PubMed Central

    Lloyd, John; McCarney, Steve; Ouhichi, Ramzi; Lydon, Patrick; Zaffran, Michel

    2015-01-01

    This paper describes an approach piloted in the Kasserine region of Tunisia to increase the energy efficiency of the distribution of vaccines and temperature-sensitive drugs. The objectives of the approach, known as the 'net zero energy' (NZE) supply chain, were demonstrated within the first year of operation. The existing distribution system was modified to store vaccines and medicines in the same buildings and to transport them according to pre-scheduled and optimized delivery circuits. Electric utility vehicles, dedicated to the integrated delivery of vaccines and medicines, improved the regularity and reliability of the supply chains. Solar energy, linked to the electricity grid at regional and district stores, supplied over 100% of consumption, meeting all energy needs for storage, cooling and transportation. Significant benefits to the quality and costs of distribution were demonstrated. Supply trips were scheduled, integrated and reliable, energy consumption was reduced, the recurrent cost of electricity was eliminated and the release of carbon to the atmosphere was reduced. Although the initial capital cost of scaling up implementation of NZE remains high today, commercial forecasts predict cost reductions for solar energy and electric vehicles that may permit a step-wise implementation over the next 7–10 years. Efficiency in the use of energy and in the deployment of transport is already a critical component of distribution logistics in both private and public sectors of industrialized countries. The NZE approach has an intensified rationale in countries where energy costs threaten the maintenance of public health services in areas of low population density. In these countries, where the mobility of health personnel and the timely arrival of supplies is at risk, NZE has the potential to reduce energy costs and release recurrent budget to other needs of service delivery while also improving the supply chain. PMID:25444811

  10. Compositions and their application to the analysis of choice.

    PubMed

    Jensen, Greg

    2014-07-01

    Descriptions of steady-state patterns of choice allocation under concurrent schedules of reinforcement have long relied on the "generalized matching law" (Baum, 1974), a log-odds power function. Although a powerful model in some contexts, a series of conflicting empirical results have cast its generality in doubt. The relevance and analytic power of matching models can be greatly expanded by considering them in terms of compositions (Aitchison, 1986). A composition encodes a set of ratios (e.g., 5:3:2) as a vector with a constant sum, and this constraint (called closure) restricts the data to a nonstandard sample space. By exploiting this sample space, unbiased estimates of model parameters can be obtained to predict behavior given any number of choice alternatives. Additionally, the compositional analysis of choice provides tools that can accommodate both violations of scale invariance and unequal discriminability of the stimuli signaling schedules of reinforcement. To demonstrate how choice data can be analyzed using the compositional approach, data from three previously published studies are reanalyzed. Additionally, new data are reported comparing matching behavior given four, six, and eight response alternatives. © Society for the Experimental Analysis of Behavior.
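The two building blocks named in the abstract, closure of a ratio vector and Baum's (1974) generalized matching law, can be sketched directly. The sensitivity and bias values are arbitrary illustration parameters.

```python
def closure(v):
    """Project a vector of ratios onto the simplex (constant sum 1),
    the sample space of compositional data (Aitchison, 1986)."""
    total = sum(v)
    return [x / total for x in v]

def generalized_matching(r1, r2, sensitivity=0.8, bias=1.0):
    """Baum's (1974) generalized matching law, log(B1/B2) =
    s*log(r1/r2) + log(b), returned as the behavior ratio B1/B2."""
    return bias * (r1 / r2) ** sensitivity

comp = closure([5, 3, 2])             # the 5:3:2 example -> [0.5, 0.3, 0.2]
ratio = generalized_matching(60, 20)  # reinforcer rates 60 vs 20 per hour
```

A sensitivity below 1 (here 0.8) produces the "undermatching" commonly reported in this literature: the behavior ratio is less extreme than the reinforcement ratio.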

  11. Methods to model and predict the ViewRay treatment deliveries to aid patient scheduling and treatment planning.

    PubMed

    Liu, Shi; Wu, Yu; Wooten, H Omar; Green, Olga; Archer, Brent; Li, Harold; Yang, Deshan

    2016-03-08

    A software tool was developed to predict, given a new treatment plan, the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance image-guided radiation therapy (MR-IGRT) delivery system. This tool is necessary for managing patient treatment scheduling in our clinic. The predicted treatment delivery time and the assessment of plan complexity could also be useful to aid treatment planning. A patient's total treatment delivery time, not including time required for localization, is modeled as the sum of four components: 1) the treatment initialization time; 2) the total beam-on time; 3) the gantry rotation time; and 4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated using both the planned beam-on time and the decay-corrected dose rate. To predict the remaining components, we retrospectively analyzed the patient treatment delivery record files. The initialization time is shown to be random, since it depends on the final gantry angle of the previous treatment. Based on modeling the relationships between gantry rotation angles and the corresponding rotation times, linear regression is applied to predict the gantry rotation time. The MLC motion time is calculated using the leaf delay modeling method and the leaf motion speed. A quantitative analysis was performed to understand the correlation between the total treatment time and the plan complexity. The proposed algorithm is able to predict the ViewRay treatment delivery time with an average prediction error of 0.22 min (1.82%) and a maximal prediction error of 0.89 min (7.88%). The analysis showed a correlation between the plan modulation (PM) factor and both the total treatment delivery time and the treatment delivery duty cycle. A possibility has been identified to significantly reduce MLC motion time by optimizing the positions of closed MLC pairs. The accuracy of the proposed prediction algorithm is sufficient to support patient treatment appointment scheduling. The developed software tool is currently in daily use in our clinic, and could also serve as an indicator of treatment plan complexity.
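The four-component decomposition can be sketched as a single sum. All constants below (initialization time, regression slope and intercept, leaf speed) are hypothetical placeholders for the values the authors fit from delivery records.

```python
# Hypothetical fitted constants; the paper derives its values from
# ViewRay treatment delivery record files.
INIT_TIME = 0.5           # min, mean initialization overhead
GANTRY_INTERCEPT = 0.05   # min fixed cost per rotation (regression)
GANTRY_SLOPE = 0.01       # min per degree rotated (regression)
MLC_SPEED_CM_S = 2.0      # leaf motion speed

def total_delivery_time(beam_on_min, rotation_angles_deg, mlc_travel_cm):
    """Predicted delivery time = initialization + beam-on +
    gantry rotation (linear in angle) + MLC leaf motion."""
    gantry = sum(GANTRY_INTERCEPT + GANTRY_SLOPE * a
                 for a in rotation_angles_deg)
    mlc = (mlc_travel_cm / MLC_SPEED_CM_S) / 60.0  # seconds -> minutes
    return INIT_TIME + beam_on_min + gantry + mlc

t = total_delivery_time(beam_on_min=8.0,
                        rotation_angles_deg=[40, 60, 30],
                        mlc_travel_cm=240.0)       # ~11.95 min
```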

  12. Development of standardized component-based equipment specifications and transition plan into a predictive maintenance strategy.

    DOT National Transportation Integrated Search

    2015-12-01

    This project investigated INDOT equipment records and equipment industry standards to produce standard equipment specifications and a predictive maintenance schedule for the more than 1100 single- and tandem-axle trucks in use at INDOT. The research...

  13. Conflict-free trajectory planning for air traffic control automation

    NASA Technical Reports Server (NTRS)

    Slattery, Rhonda; Green, Steve

    1994-01-01

    As the traffic demand continues to grow within the National Airspace System (NAS), the need for long-range planning (30 minutes plus) of arrival traffic increases greatly. Research into air traffic control (ATC) automation at ARC has led to the development of the Center-TRACON Automation System (CTAS). CTAS determines optimum landing schedules for arrival traffic and assists controllers in meeting those schedules safely and efficiently. One crucial element in the development of CTAS is the capability to perform long-range (20 minutes) and short-range (5 minutes) conflict prediction and resolution once landing schedules are determined. The determination of conflict-free trajectories within the Center airspace is particularly difficult because of large variations in speed and altitude. The paper describes the current design and implementation of the conflict prediction and resolution tools used to generate CTAS advisories in Center airspace. Conflict criteria (separation requirements) are defined and the process of separation prediction is described. The major portion of the paper will describe the current implementation of CTAS conflict resolution algorithms in terms of the degrees of freedom for resolutions as well as resolution search techniques. The tools described in this paper have been implemented in a research system designed to rapidly develop and evaluate prototype concepts and will form the basis for an operational ATC automation system.
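Separation prediction of the kind described reduces, in its simplest form, to checking pairwise distance between time-synchronized trajectory samples. A two-dimensional sketch with a hypothetical 5 nmi horizontal separation requirement; altitude handling and the resolution search are omitted.

```python
import math

def min_separation_nmi(traj_a, traj_b):
    """Minimum horizontal distance (nmi) between two trajectories
    sampled on the same time grid, compared step by step."""
    return min(math.dist(pa, pb) for pa, pb in zip(traj_a, traj_b))

def in_conflict(traj_a, traj_b, required_nmi=5.0):
    """Flag a conflict when predicted separation falls below the
    required minimum at any common sample time."""
    return min_separation_nmi(traj_a, traj_b) < required_nmi

# Two straight-line trajectories, one position per minute, (x, y) in nmi:
a = [(0.0, 0.0), (6.0, 0.0), (12.0, 0.0)]
b = [(0.0, 12.0), (6.0, 8.0), (12.0, 4.0)]
conflict = in_conflict(a, b)   # closest approach is 4 nmi, below 5 nmi
```

A resolution step would then perturb one trajectory along its degrees of freedom (speed, path, altitude) and re-run the same check until the conflict clears.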

  14. Coordinating space telescope operations in an integrated planning and scheduling architecture

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.; Cesta, Amedeo; D'Aloisi, Daniela

    1992-01-01

    The Heuristic Scheduling Testbed System (HSTS), a software architecture for integrated planning and scheduling, is discussed. The architecture has been applied to the problem of generating observation schedules for the Hubble Space Telescope. This problem is representative of the class of problems that can be addressed: their complexity lies in the interaction of resource allocation and auxiliary task expansion. The architecture deals with this interaction by viewing planning and scheduling as two complementary aspects of the more general process of constructing behaviors of a dynamical system. The principal components of the software architecture are described, indicating how to model the structure and dynamics of a system, how to represent schedules at multiple levels of abstraction in the temporal database, and how the problem solving machinery operates. A scheduler for the detailed management of Hubble Space Telescope operations that has been developed within HSTS is described. Experimental performance results are given that indicate the utility and practicality of the approach.

  15. Increasing operating room productivity by duration categories and a newsvendor model.

    PubMed

    Lehtonen, Juha-Matti; Torkki, Paulus; Peltokorpi, Antti; Moilanen, Teemu

    2013-01-01

    Previous studies approach surgery scheduling mainly from the mathematical modeling perspective, which is often hard to apply in a practical environment. The aim of this study is to develop a practical scheduling system that brings the advantages of both surgery categorization and the newsvendor model to surgery scheduling. The research was carried out in a Finnish orthopaedic specialist centre that performs only joint replacement surgery. Four surgery categorization scenarios were defined and their productivity analyzed by simulation and a newsvendor model. Detailed analyses of surgery durations and the use of more accurate case categories and their combinations in scheduling improved OR productivity by 11.3 percent compared to the base case. Planning for one OR team to work longer led to a remarkable decrease in scheduling inefficiency. In surgical services, productivity and cost-efficiency can be improved by utilizing historical data in case scheduling and by increasing flexibility in personnel management. The study increases the understanding of practical scheduling methods used to improve efficiency in surgical services.
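The newsvendor component can be sketched with the classic critical-fractile formula: reserve OR time at the quantile of the duration distribution where the marginal costs of overtime and idle time balance. The duration parameters and cost ratio below are illustrative assumptions, not values from the study.

```python
from statistics import NormalDist

def critical_fractile(overtime_cost, idle_cost):
    """Newsvendor fractile: plan OR time at this quantile of the surgery
    duration distribution to balance overtime against idle-time cost."""
    return overtime_cost / (overtime_cost + idle_cost)

# Hypothetical duration category: joint replacements with mean 120 min
# and standard deviation 25 min; overtime assumed three times as costly
# as idle capacity (illustrative cost ratio).
p = critical_fractile(overtime_cost=3.0, idle_cost=1.0)      # 0.75
planned_minutes = NormalDist(mu=120, sigma=25).inv_cdf(p)    # ~137 min
```

Tighter duration categories shrink the standard deviation, which directly shrinks the safety margin above the mean and thus the scheduling inefficiency.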

  16. A space station onboard scheduling assistant

    NASA Technical Reports Server (NTRS)

    Brindle, A. F.; Anderson, B. H.

    1988-01-01

    One of the goals for the Space Station is to achieve greater autonomy, and have less reliance on ground commanding than previous space missions. This means that the crew will have to take an active role in scheduling and rescheduling their activities onboard, perhaps working from preliminary schedules generated on the ground. Scheduling is a time intensive task, whether performed manually or automatically, so the best approach to solving onboard scheduling problems may involve crew members working with an interactive software scheduling package. A project is described which investigates a system that uses knowledge based techniques for the rescheduling of experiments within the Materials Technology Laboratory of the Space Station. Particular attention is paid to: (1) methods for rapid response rescheduling to accommodate unplanned changes in resource availability, (2) the nature of the interface to the crew, (3) the representation of the many types of data within the knowledge base, and (4) the possibility of applying rule-based and constraint-based reasoning methods to onboard activity scheduling.

  17. Changing work, changing health: can real work-time flexibility promote health behaviors and well-being?

    PubMed

    Moen, Phyllis; Kelly, Erin L; Tranby, Eric; Huang, Qinlei

    2011-12-01

    This article investigates a change in the structuring of work time, using a natural experiment to test whether participation in a corporate initiative (Results Only Work Environment; ROWE) predicts corresponding changes in health-related outcomes. Drawing on job strain and stress process models, we theorize greater schedule control and reduced work-family conflict as key mechanisms linking this initiative with health outcomes. Longitudinal survey data from 659 employees at a corporate headquarters shows that ROWE predicts changes in health-related behaviors, including almost an extra hour of sleep on work nights. Increasing employees' schedule control and reducing their work-family conflict are key mechanisms linking the ROWE innovation with changes in employees' health behaviors; they also predict changes in well-being measures, providing indirect links between ROWE and well-being. This study demonstrates that organizational changes in the structuring of time can promote employee wellness, particularly in terms of prevention behaviors.

  18. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.

    PubMed

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-07-08

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction.

  19. Scheduling IT Staff at a Bank: A Mathematical Programming Approach

    PubMed Central

    Labidi, M.; Mrad, M.; Gharbi, A.; Louly, M. A.

    2014-01-01

    We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff member, considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high-quality solutions can be obtained within a few seconds, compared to the manually prepared schedules. PMID:24772032
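The hard-constraint/soft-preference structure of such rostering can be illustrated at toy scale by exhaustive search (the paper itself uses a multiobjective model in Lingo; the staff names, shifts, and penalties below are invented).

```python
from itertools import product

STAFF = ["A", "B", "C"]
SHIFTS = ["day", "night"]
COVERAGE = {"day": 2, "night": 1}   # required staff per shift (hard)
# Hypothetical preferences: penalty for a shift someone dislikes (soft).
PENALTY = {("A", "night"): 3, ("B", "night"): 1, ("C", "day"): 2}

def best_schedule():
    """Exhaustive search over one day's assignments: meet coverage
    exactly (hard constraint), minimize total preference penalty (soft)."""
    best, best_cost = None, float("inf")
    for assignment in product(SHIFTS, repeat=len(STAFF)):
        counts = {s: assignment.count(s) for s in SHIFTS}
        if any(counts[s] != COVERAGE[s] for s in SHIFTS):
            continue  # hard constraint violated
        cost = sum(PENALTY.get((w, s), 0) for w, s in zip(STAFF, assignment))
        if cost < best_cost:
            best, best_cost = dict(zip(STAFF, assignment)), cost
    return best, best_cost

schedule, cost = best_schedule()
```

Real monthly rosters have far too many combinations for enumeration, which is why the paper turns to mathematical programming; the objective structure, however, is the same.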

  1. Scheduling Independent Partitions in Integrated Modular Avionics Systems

    PubMed Central

    Du, Chenglie; Han, Pengcheng

    2016-01-01

    Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under worst-case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We first present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then, with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the proposed approach in terms of time consumption and acceptance ratio. PMID:27942013
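A simplified, utilization-based stand-in for the maximum scaling factor (the paper's exact formulation is more involved): scale all partition budgets by the largest factor that keeps total utilization within the processor count and no single partition above one full processor.

```python
def max_scaling_factor(partitions, processors):
    """Largest factor by which every partition budget can be scaled:
    total utilization must fit the processor count and no partition
    may exceed a full processor (simplified utilization bound only)."""
    utils = [budget / period for budget, period in partitions]
    return min(processors / sum(utils), 1.0 / max(utils))

# (budget, period) pairs in milliseconds:
parts = [(10, 50), (20, 100), (30, 150)]
alpha = max_scaling_factor(parts, processors=2)
schedulable = alpha >= 1.0   # factor >= 1 means the set fits as given
```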

  2. Scheduling nurses’ shifts at PGI Cikini Hospital

    NASA Astrophysics Data System (ADS)

    Nainggolan, J. C. T.; Kusumastuti, R. D.

    2018-03-01

    Hospitals play an essential role in the community by providing medical services to the public. In order to provide high quality medical services, hospitals must manage their resources (including nurses) effectively and efficiently. Scheduling of nurses’ work shifts, in particular, is crucial, and must be conducted carefully to ensure availability and fairness. This research discusses the job scheduling system for nurses at PGI Cikini Hospital, Jakarta, using a Goal Programming approach. The research objectives are to identify nurse scheduling criteria and find the best schedule that can meet the criteria. The model has hospital regulations (including government regulations) as hard constraints, and nurses’ preferences as soft constraints. We gather primary data (hospital regulations and nurses’ preferences) through interviews with three Head Nurses and by distributing questionnaires to fifty nurses. The results show that in the best schedule, all hard constraints can be satisfied. However, only two out of four soft constraints are satisfied. Compared to current scheduling practice, the resulting schedule ensures the availability of nurses, as it satisfies all of the hospital’s regulations, and it has a higher level of fairness, as it can accommodate some of the nurses’ preferences.

  3. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, or the makespan, on one or more machines. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
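
    A minimal sketch of the idea, assuming the single-machine problem 1|r_j|Lmax and taking "set all release dates to zero" as the approximating polynomially solvable instance (where the EDD rule is optimal). The distance here is the maximum release-date shift; the paper's metric and error bounds are more general:

```python
def lmax_of_order(jobs, order):
    """Max lateness of a given job order on one machine, respecting release dates.
    jobs: dict name -> (release, processing, due)."""
    t, lmax = 0, float("-inf")
    for name in order:
        r, p, d = jobs[name]
        t = max(t, r) + p          # wait for the release time, then process
        lmax = max(lmax, t - d)
    return lmax

def approx_schedule(jobs):
    """Solve the nearest easy instance (all releases zeroed, so EDD is optimal),
    then apply that order to the original, NP-hard instance."""
    order = sorted(jobs, key=lambda j: jobs[j][2])   # EDD on the modified instance
    distance = max(r for r, _, _ in jobs.values())   # metric: largest release shift
    return order, lmax_of_order(jobs, order), distance
```

    The returned distance is the quantity in terms of which the absolute error of the resulting schedule would be bounded.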

  4. National Centers for Environmental Prediction

    Science.gov Websites

    About EMC > NOAH > Implementation Schedule

  5. National Centers for Environmental Prediction

    Science.gov Websites

    About EMC > GEFS > Implementation Schedule

  6. Research on schedulers for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian

    2012-09-01

    The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources while minimizing the total execution time. A problem of higher complexity, called the Flexible JSP (FJSP), arises when the tasks can be executed by different resources, i.e., by different telescopes; it focuses on determining a routing policy (i.e., which machine to assign to each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach to the planning problem and, therefore, various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient in determining the best approach. A short-term scheduler, for instance, has to find a good solution with the minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO) where the conclusions of this analysis are applied to develop a suitable scheduling routine.
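
    The routing-plus-scheduling decision of the FJSP can be illustrated with a greedy list-scheduling heuristic. The assumptions are simplifying: identical telescopes and duration-only tasks, whereas real observatory schedulers also handle visibility windows, priorities, and environmental constraints:

```python
import heapq

def dispatch(tasks, n_telescopes):
    """Greedy list scheduling for the flexible variant: each task (a name and a
    duration) may run on any telescope; route it to the telescope that frees up
    first. A heuristic, not optimal in general."""
    free = [(0.0, m) for m in range(n_telescopes)]   # (time telescope is free, id)
    heapq.heapify(free)
    plan = []
    for name, duration in tasks:
        t_free, m = heapq.heappop(free)
        plan.append((name, m, t_free, t_free + duration))  # (task, scope, start, end)
        heapq.heappush(free, (t_free + duration, m))
    makespan = max(end for *_, end in plan)
    return plan, makespan
```

    Metaheuristics such as the genetic or ant-colony algorithms mentioned above would search over both the routing and the ordering rather than fixing them greedily.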

  7. Negotiating on location, timing, duration, and participant in agent-mediated joint activity-travel scheduling

    NASA Astrophysics Data System (ADS)

    Ma, Huiye; Ronald, Nicole; Arentze, Theo A.; Timmermans, Harry J. P.

    2013-10-01

    Agent-based simulation has become an important modeling approach in activity-travel analysis. Social activities account for a large amount of travel and have an important effect on activity-travel scheduling. Participants in joint activities usually have various options regarding location, participants, and timing and take different approaches to make their decisions. In this context, joint activity participation requires negotiation among agents involved, so that conflicts among the agents can be addressed. Existing mechanisms do not fully provide a solution when utility functions of agents are nonlinear and non-monotonic. Considering activity-travel scheduling in time and space as an application, we propose a novel negotiation approach, which takes into account these properties, such as continuous and discrete issues, and nonlinear and non-monotonic utility functions, by defining a concession strategy and a search mechanism. The results of experiments show that agents having these properties can negotiate efficiently. Furthermore, the negotiation procedure affects individuals’ choices of location, timing, duration, and participants.

  8. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    PubMed

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness that often culminates in accidents. It has been demonstrated that shift workers' sleepiness can be improved by proper scheduling of light exposure and optimized shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e., with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  10. Approach to transaction management for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Easton, C. R.; Cressy, Phil; Ohnesorge, T. E.; Hector, Garland

    1990-01-01

    The Space Station Freedom Manned Base (SSFMB) will support the operation of the many payloads that may be located within the pressurized modules or on external attachment points. The transaction management (TM) approach presented provides a set of overlapping features that will assure the effective and safe operation of the SSFMB and provide a schedule that makes potentially hazardous operations safe, allocates resources within the capability of the resource providers, and maintains an environment conducive to the operations planned. This approach provides for targets of opportunity and schedule adjustments that give the operators the flexibility to conduct a vast majority of their operations with no conscious involvement with the TM function.

  11. Provenance-aware optimization of workload for distributed data production

    NASA Astrophysics Data System (ADS)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2017-10-01

    Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem's complexity well or are dedicated to one specific aspect of the problem only (CPU, network or storage). Previously we developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate the advantage over standard scheduling policies for the new use case. Multi-source input, or provenance, is common in the computing models of many applications, where data may be copied to several destinations. The initial input data set would hence already be partially replicated to multiple locations, and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide range of simulations considering realistic sizes of computational Grids and various input data distributions.
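
    The data-placement-aware assignment idea can be sketched as follows. Site and job names are hypothetical, and this greedy stand-in only counts transfers, whereas the actual planner optimizes network, storage, and CPU costs jointly:

```python
def assign_jobs(jobs, sites):
    """Toy provenance-aware planner: each job's input may already be replicated
    at several sites; send the job to the least-loaded site holding a replica
    when possible, otherwise to the least-loaded site overall (incurring a
    transfer). jobs: dict job -> set of sites holding its input."""
    load = {s: 0 for s in sites}
    plan, transfers = {}, 0
    for job, replicas in jobs.items():
        local = [s for s in sites if s in replicas]
        if local:
            site = min(local, key=lambda s: load[s])
        else:
            site = min(sites, key=lambda s: load[s])
            transfers += 1                  # input must be copied to the site
        load[site] += 1
        plan[job] = site
    return plan, transfers
```

    Preferring replica-holding sites keeps throughput high by avoiding wide-area data movement, which is the intuition behind exploiting provenance in the planner.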

  12. Magazine approach during a signal for food depends on Pavlovian, not instrumental, conditioning.

    PubMed

    Harris, Justin A; Andrew, Benjamin J; Kwok, Dorothy W S

    2013-04-01

    In the conditioned magazine approach paradigm, rats are exposed to a contingent relationship between a conditioned stimulus (CS) and the delivery of food (the unconditioned stimulus, US). As the rats learn the CS-US association, they make frequent anticipatory head entries into the food magazine (the conditioned response, CR) during the CS. Conventionally, this is considered to be a Pavlovian paradigm because food is contingent on the CS and not on the performance of CRs during the CS. However, because magazine entries during the CS are reliably followed by food, the increase in frequency of those responses may involve adventitious ("superstitious") instrumental conditioning. The existing evidence, from experiments using an omission schedule to eliminate the possibility of instrumental conditioning (B. J. Farwell & J. J. Ayres, 1979, Stimulus-reinforcer and response-reinforcer relations in the control of conditioned appetitive headpoking (goal tracking) in rats. Learning and Motivation, 10, 295-312; P. C. Holland, 1979, Differential effects of omission contingencies on various components of Pavlovian appetitive conditioned responding in rats. Journal of Experimental Psychology: Animal Behavior Processes, 5, 178-193), is ambiguous: rats acquire magazine CRs despite the omission schedule, demonstrating that the response does not depend on instrumental conditioning, but the response rate is greatly depressed compared with that of rats trained on a yoked schedule, consistent with a contribution from instrumental conditioning under normal (nonomission) schedules. Here we describe experiments in which rats were trained on feature-positive or feature-negative type discriminations between trials that were reinforced on an omission schedule versus trials reinforced on a yoked schedule. 
The experiments show that the difference in responding between omission and yoked schedules is due to suppression of responding under the omission schedule rather than an elevation of responding under the yoked schedule. We conclude that magazine responses during the CS are largely or entirely Pavlovian CRs.

  13. The Development of Patient Scheduling Groups for an Effective Appointment System

    PubMed Central

    2016-01-01

    Background: Patient access to care and long wait times have been identified as major problems in outpatient delivery systems. These aspects impact medical staff productivity, service quality, clinic efficiency, and health-care cost. Objectives: This study proposed to redesign existing patient types into scheduling groups so that the total cost of clinic flow and scheduling flexibility was minimized. The optimal scheduling group aimed to improve clinic efficiency and accessibility. Methods: The proposed approach used the simulation optimization technique and was demonstrated in a Primary Care physician clinic. Patient types included emergency/urgent care (ER/UC), follow-up (FU), new patient (NP), office visit (OV), physical exam (PE), and well child care (WCC). One scheduling group was designed for this physician. The approach steps were to collect physician treatment time data for each patient type, form the possible scheduling groups, simulate daily clinic flow and patient appointment requests, calculate costs of clinic flow as well as appointment flexibility, and find the scheduling group that minimized the total cost. Results: The cost of clinic flow was minimized at the scheduling group of four, an 8.3% reduction from the group of one. The four groups were: 1. WCC, 2. OV, 3. FU and ER/UC, and 4. PE and NP. The cost of flexibility was always minimized at the group of one. The total cost was minimized at the group of two: WCC was considered separate and the others were grouped together. The total cost reduction was 1.3% from the group of one. Conclusions: This study provided an alternative method of redesigning patient scheduling groups to address the impact on both clinic flow and appointment accessibility. Balance between them ensured feasibility with respect to the recognized issues of patient service and access to care. The robustness of the proposed method to changes in clinic conditions was also discussed. PMID:27081406

  14. The Dynamic Planner: The Sequencer, Scheduler, and Runway Allocator for Air Traffic Control Automation

    NASA Technical Reports Server (NTRS)

    Wong, Gregory L.; Denery, Dallas (Technical Monitor)

    2000-01-01

    The Dynamic Planner (DP) has been designed, implemented, and integrated into the Center-TRACON Automation System (CTAS) to assist Traffic Management Coordinators (TMCs), in real time, with the task of planning and scheduling arrival traffic approximately 35 to 200 nautical miles from the destination airport. The TMC may input to the DP a series of current and future scheduling constraints that reflect the operational and environmental conditions of the airspace. Under these constraints, the DP uses flight plans, track updates, and Estimated Time of Arrival (ETA) predictions to calculate optimal runway assignments and arrival schedules that help ensure an orderly, efficient, and conflict-free flow of traffic into the terminal area. These runway assignments and schedules can be shown directly to controllers or they can be used by other CTAS tools to generate advisories to the controllers. Additionally, the TMC and controllers may override the decisions made by the DP for tactical considerations. The DP will adapt its computations to accommodate these manual inputs.
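
    The core schedule-adjustment step, keeping a planned sequence fixed and shifting times to honour ETAs and separation, can be sketched as follows. A single runway and a single constant separation are simplifying assumptions; the DP handles multiple runways and aircraft-pair-dependent separations:

```python
def schedule_arrivals(etas, min_sep):
    """Given ETAs listed in the frozen sequence order, push each scheduled time
    just enough to honour both its own ETA and the minimum separation behind
    the preceding aircraft."""
    stas = []
    for eta in etas:
        sta = eta if not stas else max(eta, stas[-1] + min_sep)
        stas.append(sta)
    return stas
```

    The induced delay for each aircraft is simply its scheduled time minus its ETA, which is the quantity a scheduler like the DP tries to keep small while preserving conflict-free spacing.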

  15. Blocking in Success: Plan Ahead for Big Dividends from a New Schedule.

    ERIC Educational Resources Information Center

    Cooper, Sylvia L.

    1996-01-01

    Examines the benefits of flexible scheduling and the initial steps used in exploring this approach. Discusses the problem of loss of instructional time and the use of an independent research period as a solution. Presents results from an external assessment, ACT score data, and CTBS scores. (DDR)

  16. A New Engine for Schools: The Flexible Scheduling Paradigm

    ERIC Educational Resources Information Center

    Snyder, Yaakov; Herer, Yale T.; Moore, Michael

    2012-01-01

    We present a new approach for the organization of schools, which we call the flexible scheduling paradigm (FSP). FSP improves student learning by dynamically redeploying teachers and other pedagogical resources to provide students with customized learning conditions over shorter time periods called "mini-terms" instead of semesters or years. By…

  17. Simplified Models for Accelerated Structural Prediction of Conjugated Semiconducting Polymers

    DOE PAGES

    Henry, Michael M.; Jones, Matthew L.; Oosterhout, Stefan D.; ...

    2017-11-08

    We perform molecular dynamics simulations of poly(benzodithiophene-thienopyrrolodione) (BDT-TPD) oligomers in order to evaluate the accuracy with which unoptimized molecular models can predict experimentally characterized morphologies. The predicted morphologies are characterized using simulated grazing-incidence X-ray scattering (GIXS) and compared to the experimental scattering patterns. We find that approximating the aromatic rings in BDT-TPD with rigid bodies, rather than combinations of bond, angle, and dihedral constraints, results in 14% lower computational cost and provides nearly equivalent structural predictions compared to the flexible model case. The predicted glass transition temperature of BDT-TPD (410 +/- 32 K) is found to be in agreement with experiments. Predicted morphologies demonstrate short-range structural order due to stacking of the chain backbones (π-π stacking around 3.9 Å), and long-range spatial correlations due to the self-organization of backbone stacks into 'ribbons' (lamellar ordering around 20.9 Å), representing the best-to-date computational predictions of the structure of complex conjugated oligomers. We find that expensive simulated annealing schedules are not needed to predict experimental structures here, with instantaneous quenches providing nearly equivalent predictions at a fraction of the computational cost of annealing. We therefore suggest utilizing rigid bodies and fast cooling schedules for high-throughput screening studies of semiflexible polymers and oligomers to exploit their significant computational benefits where appropriate.

  19. A Fast-Time Simulation Tool for Analysis of Airport Arrival Traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Meyn, Larry A.; Neuman, Frank

    2004-01-01

    The basic objective of arrival sequencing in air traffic control automation is to match traffic demand and airport capacity while minimizing delays. The performance of an automated arrival scheduling system, such as the Traffic Management Advisor developed by NASA for the FAA, can be studied by a fast-time simulation that does not involve running expensive and time-consuming real-time simulations. The fast-time simulation models runway configurations, the characteristics of arrival traffic, deviations from predicted arrival times, as well as the arrival sequencing and scheduling algorithm. This report reviews the development of the fast-time simulation method used originally by NASA in the design of the sequencing and scheduling algorithm for the Traffic Management Advisor. The utility of this method of simulation is demonstrated by examining the effect on delays of altering arrival schedules at a hub airport.

  20. Wind Energy Management System Integration Project Incorporating Wind Generation and Load Forecast Uncertainties into Power Grid Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.

    2010-09-01

    The power system balancing process, which includes the scheduling, real-time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation) and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as we get closer to real time, and what additional costs would be incurred by those needs. In order to improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, the regulation processes. It is also important to address the uncertainty problem comprehensively, by including all sources of uncertainty (load, intermittent generation, generators’ forced outages, etc.) into consideration. All aspects of uncertainty, such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and the generation ramping requirement, must be taken into account. 
The latter unique features make this work a significant step forward toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify the system balancing reserve requirement based on desired system performance levels, identify system “breaking points”, where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining to these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
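
    The percentile evaluation at the core of the uncertainty-range idea can be sketched as follows. This is an empirical percentile over retrospective imbalance samples only; the report's methodology builds full histograms jointly over capacity, ramping capability, and ramp duration:

```python
def capacity_requirement(imbalance_samples, confidence):
    """Estimate, from retrospective net-imbalance samples (e.g., in MW, combining
    load error, wind/solar error, and outages), the balancing capacity that
    covers the requirement with the given confidence level."""
    xs = sorted(imbalance_samples)
    k = min(len(xs) - 1, int(confidence * len(xs)))  # index of the empirical percentile
    return xs[k]
```

    Raising the confidence level widens the reserve requirement, which is the trade-off between reliability and balancing cost discussed above.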

  1. Design and Scheduling of Microgrids using Benders Decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Ayyanar, Raja

    2016-11-21

    The distribution feeder laterals in a distribution feeder with relatively high PV generation as compared to the load can be operated as microgrids to achieve reliability, power quality and economic benefits. However, renewable resources are intermittent and stochastic in nature. A novel approach for sizing and scheduling an energy storage system and microturbine for reliable operation of microgrids is proposed. The size and schedule of an energy storage system and microturbine are determined using Benders' decomposition, considering PV generation as a stochastic resource.

  2. Artificial intelligence techniques for scheduling Space Shuttle missions

    NASA Technical Reports Server (NTRS)

    Henke, Andrea L.; Stottler, Richard H.

    1994-01-01

    Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.

  3. Urinary Biomarkers and Obstructive Sleep Apnea in Patients with Down Syndrome

    PubMed Central

    Elsharkawi, Ibrahim; Gozal, David; Macklin, Eric A.; Voelz, Lauren; Weintraub, Gil; Skotko, Brian G.

    2017-01-01

    Study Objectives The study aim was to compare urinary biomarkers in individuals with Down syndrome (DS) with and without obstructive sleep apnea (OSA) to those of age- and sex-matched neurotypically developing healthy controls (HC). We further investigated whether we could predict OSA in individuals with DS using these biomarkers. Methods Urine samples were collected from 58 individuals with DS the night before or the morning after their scheduled overnight polysomnogram or both, of whom 47 could be age- and sex-matched to a sample of 43 HC. Concentrations of 12 neurotransmitters were determined by enzyme-linked immunosorbent assay. Log-transformed creatinine-corrected assay levels were normalized. Normalized z-scores were compared between individuals with DS vs. HC, between individuals with DS with vs. without OSA, and to derive composite models to predict OSA. Results Most night-sampled urinary biomarkers were elevated among individuals with DS relative to matched HC. No urinary biomarker levels differed between individuals with DS with vs. without OSA. A combination of four urinary biomarkers predicted AHI > 1 with a positive predictive value of 90% and a negative predictive value of 68%. Conclusions Having DS, even in the absence of concurrent OSA, is associated with a different urinary biomarker profile when compared to HC. Therefore, while urinary biomarkers may be predictive of OSA in the general pediatric population, a different approach is needed in interpreting urinary biomarker assays in individuals with DS. Certain biomarkers also seem promising to be predictive of OSA in individuals with DS. PMID:28522103

  4. Using Forecasting to Predict Long-Term Resource Utilization for Web Services

    ERIC Educational Resources Information Center

    Yoas, Daniel W.

    2013-01-01

    Researchers have spent years understanding resource utilization to improve scheduling, load balancing, and system management through short-term prediction of resource utilization. Early research focused primarily on single operating systems; later, interest shifted to distributed systems and, finally, into web services. In each case researchers…

  5. Development of standardized component-based equipment specifications and transition plan into a predictive maintenance strategy: final report.

    DOT National Transportation Integrated Search

    2015-12-01

    This project investigated INDOT equipment records and equipment industry standards to produce standard equipment specifications and a predictive maintenance schedule for the more than 1100 single and tandem axle trucks in use at INDOT. The research...

  6. Airspace Technology Demonstration 2 (ATD-2) Phase 1 Concept of Use (ConUse)

    NASA Technical Reports Server (NTRS)

    Jung, Yoon; Engelland, Shawn; Capps, Richard; Coppenbarger, Rich; Hooey, Becky; Sharma, Shivanjli; Stevens, Lindsay; Verma, Savita; Lohr, Gary; Chevalley, Eric; hide

    2018-01-01

    This document presents an operational Concept of Use (ConUse) for the Phase 1 Baseline Integrated Arrival, Departure, and Surface (IADS) prototype system of NASA's Airspace Technology Demonstration 2 (ATD-2) sub-project, which began demonstration in 2017 at Charlotte Douglas International Airport (CLT). NASA is developing the IADS system under the ATD-2 sub-project in coordination with the Federal Aviation Administration (FAA) and aviation industry partners. The primary goal of the ATD-2 sub-project is to improve the predictability and the operational efficiency of the air traffic system in metroplex environments through the enhancement, development, and integration of the nation's most advanced and sophisticated arrival, departure, and surface prediction, scheduling, and management systems. The ATD-2 effort is a five-year research activity through 2020. The initial phase of the ATD-2 sub-project, which is the focus of this document, will demonstrate the Phase 1 Baseline IADS capability at CLT in 2017.
The Phase 1 Baseline IADS capabilities of the ATD-2 sub-project consist of: (a) Strategic and tactical surface scheduling to improve efficiency and predictability of airport surface operations, (b) Tactical departure scheduling to enhance merging of departures into overhead traffic streams via accurate predictions of takeoff times and automated coordination between the Airport Traffic Control Tower (ATCT, or Tower) and the Air Route Traffic Control Center (ARTCC, or Center), (c) Improvements in departure surface demand predictions in Time Based Flow Management (TBFM), (d) A prototype Electronic Flight Data (EFD) system provided by the FAA via the Terminal Flight Data Manager (TFDM) early implementation effort, and (e) Improved situational awareness and demand predictions through integration with the Traffic Flow Management System (TFMS), TBFM, and TFDM (3Ts) for electronic data integration and exchange, and an on-screen dashboard displaying pertinent analytics in real time. The surface scheduling and metering element of the capability is consistent with the Surface CDM Concept of Operations published in 2014 by the FAA Surface Operations Directorate. Upon successful demonstration of the Phase 1 Baseline IADS capability, follow-on demonstrations of the matured IADS traffic management capabilities will be conducted in the 2018-2020 timeframe. At the end of each phase of the demonstrations, NASA will transfer the ATD-2 sub-project technology to the FAA and industry partners.

  7. The effects of feeding unpredictability and classical conditioning on pre-release training of white-lipped peccary (Mammalia, Tayassuidae).

    PubMed

    Nogueira, Selene S C; Abreu, Shauana A; Peregrino, Helderes; Nogueira-Filho, Sérgio L G

    2014-01-01

    Some authors have suggested that environmental unpredictability, accompanied by some sort of signal for behavioral conditioning, can boost activity or foster exploratory behavior, which may increase post-release success in re-introduction programs. Thus, using white-lipped peccary (Tayassu pecari), a vulnerable Neotropical species, as a model, we evaluated an unpredictable feeding schedule. Associating this with the effect of classical conditioning on behavioral activities, we assessed the inclusion of this approach in pre-release training protocols. The experimental design comprised predictable feeding phases (control phases: C1, C2 and C3) and unpredictable feeding phases (U1- signaled and U2- non-signaled). The animals explored more during the signaled and non-signaled unpredictable phases and during the second control phase (C2) than during the other two predictable phases (C1 and C3). The peccaries also spent less time feeding during the signaled unpredictable phase (U1) and the following control phase (C2) than during the other phases. Moreover, they spent more time in aggressive encounters during U1 than the other experimental phases. However, the animals did not show differences in the time they spent on affiliative interactions or in the body weight change during the different phases. The signaled unpredictability, besides improving foraging behavior, showing a prolonged effect on the next control phase (C2), also increased the competition for food. The signaled feeding unpredictability schedule, mimicking wild conditions by eliciting the expression of naturalistic behaviors in pre-release training, may be essential to fully prepare them for survival in the wild.

  8. The use of patient factors to improve the prediction of operative duration using laparoscopic cholecystectomy.

    PubMed

    Thiels, Cornelius A; Yu, Denny; Abdelrahman, Amro M; Habermann, Elizabeth B; Hallbeck, Susan; Pasupathy, Kalyan S; Bingener, Juliane

    2017-01-01

    Reliable prediction of operative duration is essential for improving patient and care team satisfaction, optimizing resource utilization and reducing cost. Current operative scheduling systems are unreliable and contribute to costly over- and underestimation of operative time. We hypothesized that the inclusion of patient-specific factors would improve the accuracy in predicting operative duration. We reviewed all elective laparoscopic cholecystectomies performed at a single institution between 01/2007 and 06/2013. Concurrent procedures were excluded. Univariate analysis evaluated the effect of age, gender, BMI, ASA, laboratory values, smoking, and comorbidities on operative duration. Multivariable linear regression models were constructed using the significant factors (p < 0.05). The patient factors model was compared to the traditional surgical scheduling system estimates, which use historical surgeon-specific and procedure-specific operative duration. External validation was done using the ACS-NSQIP database (n = 11,842). A total of 1801 laparoscopic cholecystectomy patients met inclusion criteria. Female sex was associated with reduced operative duration (-7.5 min, p < 0.001 vs. male sex), while increasing BMI (+5.1 min BMI 25-29.9, +6.9 min BMI 30-34.9, +10.4 min BMI 35-39.9, +17.0 min BMI 40+, all p < 0.05 vs. normal BMI), increasing ASA (+7.4 min ASA III, +38.3 min ASA IV, all p < 0.01 vs. ASA I), and elevated liver function tests (+7.9 min, p < 0.01 vs. normal) were predictive of increased operative duration on univariate analysis. A model was then constructed using these predictive factors. The traditional surgical scheduling system was poorly predictive of actual operative duration (R² = 0.001) compared to the patient factors model (R² = 0.08). The model remained predictive on external validation (R² = 0.14). The addition of surgeon as a variable in the institutional model further improved the predictive ability of the model (R² = 0.18).
The use of routinely available pre-operative patient factors improves the prediction of operative duration during cholecystectomy.

  9. From non-preemptive to preemptive scheduling using synchronization synthesis.

    PubMed

    Černý, Pavol; Clarke, Edmund M; Henzinger, Thomas A; Radhakrishna, Arjun; Ryzhyk, Leonid; Samanta, Roopsha; Tarrach, Thorsten

    2017-01-01

    We present a computer-aided programming approach to concurrency. The approach allows programmers to program assuming a friendly, non-preemptive scheduler, and our synthesis procedure inserts synchronization to ensure that the final program works even with a preemptive scheduler. The correctness specification is implicit, inferred from the non-preemptive behavior. Let us consider sequences of calls that the program makes to an external interface. The specification requires that any such sequence produced under a preemptive scheduler should be included in the set of sequences produced under a non-preemptive scheduler. We guarantee that our synthesis does not introduce deadlocks and that the synchronization inserted is optimal w.r.t. a given objective function. The solution is based on a finitary abstraction, an algorithm for bounded language inclusion modulo an independence relation, and generation of a set of global constraints over synchronization placements. Each model of the global constraints set corresponds to a correctness-ensuring synchronization placement. The placement that is optimal w.r.t. the given objective function is chosen as the synchronization solution. We apply the approach to device-driver programming, where the driver threads call the software interface of the device and the API provided by the operating system. Our experiments demonstrate that our synthesis method is precise and efficient. The implicit specification helped us find one concurrency bug previously missed when model-checking using an explicit, user-provided specification. We implemented objective functions for coarse-grained and fine-grained locking and observed that different synchronization placements are produced for our experiments, favoring a minimal number of synchronization operations or maximum concurrency, respectively.

  10. PC-PVT 2.0: An updated platform for psychomotor vigilance task testing, analysis, prediction, and visualization.

    PubMed

    Reifman, Jaques; Kumar, Kamal; Khitrov, Maxim Y; Liu, Jianbo; Ramakrishnan, Sridhar

    2018-07-01

    The psychomotor vigilance task (PVT) has been widely used to assess the effects of sleep deprivation on human neurobehavioral performance. To facilitate research in this field, we previously developed the PC-PVT, a freely available software system analogous to the "gold-standard" PVT-192 that, in addition to allowing for simple visual reaction time (RT) tests, also allows for near real-time PVT analysis, prediction, and visualization in a personal computer (PC). Here we present the PC-PVT 2.0 for Windows 10 operating system, which has the capability to couple PVT tests of a study protocol with the study's sleep/wake and caffeine schedules, and make real-time individualized predictions of PVT performance for such schedules. We characterized the accuracy and precision of the software in measuring RT, using 44 distinct combinations of PC hardware system configurations. We found that 15 system configurations measured RTs with an average delay of less than 10 ms, an error comparable to that of the PVT-192. To achieve such small delays, the system configuration should always use a gaming mouse as the means to respond to visual stimuli. We recommend using a discrete graphical processing unit for desktop PCs and an external monitor for laptop PCs. This update integrates a study's sleep/wake and caffeine schedules with the testing software, facilitating testing and outcome visualization, and provides near-real-time individualized PVT predictions for any sleep-loss condition considering caffeine effects. The software, with its enhanced PVT analysis, visualization, and prediction capabilities, can be freely downloaded from https://pcpvt.bhsai.org. Published by Elsevier B.V.

  11. Assessing the ability of operational snow models to predict snowmelt runoff extremes (Invited)

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Restrepo, P. J.; Clark, M. P.

    2013-12-01

    In the western US, the snow accumulation and melt cycle of winter and spring plays a critical role in the region's water management strategies. Consequently, the ability to predict snowmelt runoff at time scales from days to seasons is a key input for decisions in reservoir management, whether for avoiding flood hazards or supporting environmental flows through the scheduling of releases in spring, or for allocating releases for multi-state water distribution in dry seasons of the year (using reservoir systems to provide an invaluable buffer for many sectors against drought). Runoff forecasts thus have important benefits at both the wet and dry extremes of the climatological spectrum. The importance of predicting the snow cycle motivates an assessment of the strengths and weaknesses of the US's central operational snow model, SNOW17, in contrast to process-modeling alternatives, as they relate to simulating observed snowmelt variability and extremes. To this end, we use a flexible modeling approach that enables an investigation of different choices in model structure, including model physics, parameterization and degree of spatiotemporal discretization. We draw from examples of recent extreme events in western US watersheds and an overall assessment of retrospective model performance to identify fruitful avenues for advancing the modeling basis for the operational prediction of snow-related runoff extremes.

  12. The onset of the solar active cycle 22

    NASA Technical Reports Server (NTRS)

    Ahluwalia, H. S.

    1989-01-01

    There is a great deal of interest in being able to predict the main characteristics of a solar activity cycle (SAC). One would like to know, for instance, how large the amplitude (R sub m) of a cycle is likely to be, i.e., the annual mean of the sunspot numbers at the maximum of the SAC. Also, how long a cycle is likely to last, i.e., its period. It would also be interesting to be able to predict details such as how steep the ascending phase of a cycle is likely to be. Questions like these are of practical importance to NASA in planning the launch schedule for expensive low-altitude spacecraft such as the Hubble Space Telescope and the Space Station. One also has to choose a proper orbit, so that once launched, the threat of atmospheric drag on the spacecraft is properly taken into account. Cosmic ray data seem to indicate that solar activity cycle 22 will surpass SAC 21 in activity. The value of R sub m for SAC 22 may approach that of SAC 19. It would be interesting to see whether this prediction is borne out. Researchers are greatly encouraged to proceed with the development of a comprehensive prediction model which includes information provided by cosmic ray data.

  13. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem in which each operation may be processed on one machine chosen from a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model for solving the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search toward promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach comes from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
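    As a rough illustration of the FJSP assignment sub-problem only (not the paper's holonic multiagent NGA + tabu search), the sketch below runs a first-improvement local search over machine assignments, minimizing the maximum machine load as a crude makespan proxy; the operations and processing times are invented:

```python
# FJSP assignment sub-problem sketch: each operation may run on any of
# several alternative machines with different processing times. Pick
# assignments to minimize the maximum machine load. Simple greedy start
# plus first-improvement local search; a deliberately minimal stand-in
# for the metaheuristics discussed in the abstract.
def machine_loads(assign, times):
    loads = {}
    for op, m in enumerate(assign):
        loads[m] = loads.get(m, 0) + times[op][m]
    return loads

def local_search(times):
    # times[op] = {machine: processing_time} over the alternative machines
    assign = [min(alts, key=alts.get) for alts in times]  # greedy start
    improved = True
    while improved:
        improved = False
        for op, alts in enumerate(times):
            best = max(machine_loads(assign, times).values())
            for m in alts:
                trial = assign[:op] + [m] + assign[op + 1:]
                if max(machine_loads(trial, times).values()) < best:
                    assign, improved = trial, True
                    break
    return assign

ops = [{0: 3, 1: 5}, {0: 4, 1: 4}, {1: 2, 2: 6}, {0: 2, 2: 3}]
best = local_search(ops)
print(best, max(machine_loads(best, ops).values()))  # -> [1, 0, 2, 0] 6
```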

  14. Task path planning, scheduling and learning for free-ranging robot systems

    NASA Technical Reports Server (NTRS)

    Wakefield, G. Steve

    1987-01-01

    The development of robotics applications for space operations is often restricted by the limited movement available to guided robots. Free-ranging robots can offer greater flexibility than physically guided robots in these applications. Presented here is an object-oriented approach to path planning and task scheduling for free-ranging robots that allows the dynamic determination of paths based on the current environment. The system also provides task learning for repetitive jobs. This approach provides a basis for the design of free-ranging robot systems which are adaptable to various environments and tasks.

  15. Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration

    PubMed Central

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides active participation in profit-seeking commercial business, also play an essential role during an emergency relief process by dispatching urgently required materials to disaster-affected areas. A widely discussed issue among logistics practitioners and researchers is therefore how logistics companies can achieve maximum commercial profit while ensuring that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profits; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously so as to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models. PMID:24391724
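    The core idea of the priority-directed scheme, serving graded tasks in priority order under a limited fleet, can be sketched with a heap. The task names, grades, and one-trip-per-vehicle assumption are all hypothetical simplifications of the paper's models:

```python
# Priority-directed dispatch sketch: emergency tasks get higher grades
# than commercial ones; with a limited fleet, serve tasks in descending
# priority to maximize total priority served. Hypothetical data.
import heapq

def schedule(tasks, vehicles):
    # tasks: (priority, name); higher priority is served first
    heap = [(-p, name) for p, name in tasks]  # negate for a max-heap
    heapq.heapify(heap)
    served = []
    while heap and vehicles:
        _, name = heapq.heappop(heap)
        served.append(name)
        vehicles -= 1
    return served

tasks = [(9, "relief-water"), (3, "retail"), (8, "relief-medicine"), (5, "express")]
print(schedule(tasks, 3))  # -> ['relief-water', 'relief-medicine', 'express']
```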

  16. Vehicle scheduling schemes for commercial and emergency logistics integration.

    PubMed

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides active participation in profit-seeking commercial business, also play an essential role during an emergency relief process by dispatching urgently required materials to disaster-affected areas. A widely discussed issue among logistics practitioners and researchers is therefore how logistics companies can achieve maximum commercial profit while ensuring that emergency tasks are performed effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profits; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously so as to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models.

  17. Operations mission planner

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric; Cooper, Lynne

    1990-01-01

    The findings of the OMP research task, which investigated the applicability of artificial intelligence (AI) technology in support of automated scheduling, are documented. The goals of the effort are summarized and the technical accomplishments are highlighted. The OMP task succeeded in identifying how AI technology could be applied and demonstrated an AI-based automated scheduling approach through the OMP prototypes.

  18. An Organizational and Qualitative Approach to Improving University Course Scheduling

    ERIC Educational Resources Information Center

    Hill, Duncan L.

    2010-01-01

    Focusing on the current timetabling process at the University of Toronto Mississauga (UTM), I apply David Wesson's theoretical framework in order to understand (1) how increasing enrollment interacts with a decentralized timetabling process to limit the flexibility of course schedules and (2) the resultant impact on educational quality. I then…

  19. Qualitative Timetabling: An Organizational and Qualitative Approach to Improving University Course Scheduling

    ERIC Educational Resources Information Center

    Hill, Duncan L.

    2008-01-01

    Focusing on the current timetabling process at the University of Toronto Mississauga, I apply David Wesson's theoretical framework in order to understand how increasing enrolment interacts with a decentralized timetabling process to limit the flexibility of course schedules, and the resultant impact on educational quality. I then apply Robert…

  20. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    PubMed

    Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue

    2015-01-01

    As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method based on mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogeneous and heterogeneous fleet problems respectively and solve the models with the MIP solver CPLEX. The bus type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both, our metaheuristic method significantly outperforms the respective state-of-the-art methods.
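    The first-stage objective, covering all fixed trips with the fewest buses, resembles classic interval partitioning if deadhead travel between schools is ignored (a simplifying assumption the real problem cannot make). A greedy sketch with a min-heap of bus finish times and hypothetical trips:

```python
# Fewest-buses sketch via interval partitioning: sort trips by start
# time and reuse a bus whenever one has already finished its trip,
# tracking current finish times in a min-heap. Ignores repositioning
# time between trips, unlike the actual school bus scheduling problem.
import heapq

def min_buses(trips):
    # trips: (start, end) time pairs; returns bus count if reuse is free
    finish = []  # min-heap of in-use buses' finish times
    for start, end in sorted(trips):
        if finish and finish[0] <= start:
            heapq.heappop(finish)   # an idle bus takes this trip
        heapq.heappush(finish, end)
    return len(finish)

trips = [(7.0, 7.5), (7.2, 7.9), (7.6, 8.2), (8.0, 8.5)]
print(min_buses(trips))  # -> 2
```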

  1. A free market in telescope time?

    NASA Astrophysics Data System (ADS)

    Etherton, Jason; Steele, Iain A.; Mottram, Christopher J.

    2004-09-01

    As distributed systems become more diverse in application, there is a growing need for more intelligent resource scheduling. eSTAR is a geographically distributed network of Grid-enabled telescopes, using grid middleware to provide telescope users with an authentication and authorisation method, allowing secure, remote access to such resources. The eSTAR paradigm is based upon this secure, single sign-on, giving astronomers or their agent proxies direct access to these telescopes. This concept, however, involves the complex issue of how to schedule observations stored within physically distributed media on geographically distributed resources. The matter is complicated further by the varying degrees of constraints placed upon observations, such as timeliness, atmospheric and meteorological conditions, and sky brightness, to name a few. This paper discusses a free market approach to this scheduling problem, where astronomers are given credit, instead of time, from their respective TAGs to spend on telescopes as they see fit. This approach will ultimately provide a community-driven schedule and genuine indicators of the worth of specific telescope time, promote more efficient use of that time, and demonstrate a 'survival of the fittest' type of selection.

  2. Scheduling-capable autonomic manager for policy-based IT change management system

    NASA Astrophysics Data System (ADS)

    AbdelSalam, Hady S.; Maly, Kurt; Mukkamala, Ravi; Zubair, Mohammad; Kaminsky, David

    2010-11-01

    Managing large IT environments is expensive and labour intensive. Maintaining and upgrading with minimal disruption and administrative support has always been a challenging task for system administrators. One challenge faced by IT administrators is arriving at schedules for applying one or more change requests to a system component. Most of the time, the impact analysis of the proposed changes is done by humans and is often laborious and error-prone. Although this methodology might be suitable for changes planned well in advance, it is inappropriate for changes that need to be made sooner, and such manual handling does not scale with the size of the IT infrastructure. In this article, the focus is on the problem of scheduling change requests in the presence of organisational policies governing the use of resources. The authors propose two approaches for change management scheduling and present the implementation details of two prototypes that prove the feasibility of the proposed approaches. Their implementation is integrated with an autonomic manager which they described in their earlier work.

  3. Catheter-Related Bloodstream Infections in Patients on Emergent Hemodialysis.

    PubMed

    Rojas-Moreno, Christian A; Spiegel, Daniel; Yalamanchili, Venkata; Kuo, Elizabeth; Quinones, Henry; Sreeramoju, Pranavi V; Luby, James P

    2016-03-01

    This study had 2 objectives: (1) to describe the epidemiology of catheter-related bloodstream infections (CRBSI) in patients with end-stage renal disease (ESRD) who have no access to scheduled dialysis and (2) to evaluate whether a positive culture of the heparin-lock solution is associated with subsequent development of bacteremia. A retrospective observational cohort design was used for objective 1 and a prospective cohort design for objective 2. The study was conducted in a 770-bed public academic tertiary hospital in Dallas, Texas. The participants were patients with ESRD undergoing scheduled or emergent hemodialysis. We reviewed the records of 147 patients who received hemodialysis between January 2011 and May 2011 and evaluated the rate of CRBSI in the previous 5 years. For the prospective study, we cultured the catheter heparin-lock solution in 62 consecutive patients between June 2012 and August 2012 and evaluated the incidence of CRBSI at 6 months. Of the 147 patients on emergent hemodialysis, 125 had a tunneled catheter, with a CRBSI rate of 2.61 per 1,000 catheter days. The predominant organisms were Gram-negative rods (GNR). In the prospective study, we found that the dialysis catheter was colonized more frequently in patients on emergent hemodialysis than in those on scheduled hemodialysis. Colonization with GNR or Staphylococcus aureus was associated with subsequent CRBSI at 6-month follow-up. Patients undergoing emergent hemodialysis via tunneled catheter are predisposed to Gram-negative CRBSI. Culturing the heparin-lock solution may predict subsequent episodes of CRBSI if it shows colonization with GNR or Staphylococcus aureus. Prevention approaches in this population need to be studied further.

  4. A human factors approach to range scheduling for satellite control

    NASA Technical Reports Server (NTRS)

    Wright, Cameron H. G.; Aitken, Donald J.

    1991-01-01

    Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.

  5. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.

  6. Why Does Working Memory Capacity Predict Variation in Reading Comprehension? On the Influence of Mind Wandering and Executive Attention

    PubMed Central

    McVay, Jennifer C.; Kane, Michael J.

    2012-01-01

    Some people are better readers than others, and this variation in comprehension ability is predicted by measures of working memory capacity (WMC). The primary goal of this study was to investigate the mediating role of mind wandering experiences in the association between WMC and normal individual differences in reading comprehension, as predicted by the executive-attention theory of WMC (e.g., Engle & Kane, 2004). We used a latent-variable, structural-equation-model approach, testing skilled adult readers on three WMC span tasks, seven varied reading comprehension tasks, and three attention-control tasks. Mind wandering was assessed using experimenter-scheduled thought probes during four different tasks (two reading, two attention-control tasks). The results support the executive-attention theory of WMC. Mind wandering across the four tasks loaded onto a single latent factor, reflecting a stable individual difference. Most importantly, mind wandering was a significant mediator in the relationship between WMC and reading comprehension, suggesting that the WMC-comprehension correlation is driven, in part, by attention control over intruding thoughts. We discuss implications for theories of WMC, attention control, and reading comprehension. PMID:21875246

  7. Turbulent heat transfer prediction method for application to scramjet engines

    NASA Technical Reports Server (NTRS)

    Pinckney, S. Z.

    1974-01-01

    An integral method for predicting boundary layer development in turbulent flow regions on two-dimensional or axisymmetric bodies was developed. The method has the capability of approximating nonequilibrium velocity profiles as well as the local surface friction in the presence of a pressure gradient. An approach was developed for the problem of predicting the heat transfer in a turbulent boundary layer in the presence of a high pressure gradient. The solution was derived with particular emphasis on its applicability to supersonic combustion; thus, the effects of real gas flows were included. The resulting integrodifferential boundary layer method permits the estimation of cooling requirements for scramjet engines. Theoretical heat transfer results are compared with experimental combustor and noncombustor heat transfer data. The heat transfer method was used in the development of engine design concepts which will produce an engine with reduced cooling requirements. The Langley scramjet engine module was designed by utilizing these design concepts, and this engine design is discussed along with its corresponding cooling requirements. The heat transfer method was also used to develop a combustor cooling correlation for a combustor whose local properties are computed one-dimensionally by assuming a linear area variation and a given heat release schedule.

  8. Multi-timescale power and energy assessment of lithium-ion battery and supercapacitor hybrid system using extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Zhang, Xu; Liu, Chang; Pan, Rui; Chen, Zonghai

    2018-06-01

    The power capability and the maximum charge and discharge energy are key indicators for energy management systems, which can help the energy storage devices work in a suitable area and prevent them from over-charging and over-discharging. In this work, a model-based power and energy assessment approach is proposed for the lithium-ion battery and supercapacitor hybrid system. The model framework of the lithium-ion battery and supercapacitor hybrid system is developed based on the equivalent circuit model, and the model parameters are identified by a regression method. Explicit analyses of the power capability and maximum charge and discharge energy prediction with multiple constraints are elaborated. Subsequently, the extended Kalman filter is employed for on-board power capability and maximum charge and discharge energy prediction to overcome estimation error caused by system disturbance and sensor noise. The charge and discharge power capability and the maximum charge and discharge energy are quantitatively assessed under both the dynamic stress test and the urban dynamometer driving schedule. The maximum charge and discharge energy predictions of the lithium-ion battery and supercapacitor hybrid system with different time scales are explored and discussed.
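
    The abstract describes the estimator only at a high level. As an illustration of the general technique, a minimal scalar extended Kalman filter for battery state of charge, from which a voltage-limited peak power can be derived, might look like the sketch below (the equivalent-circuit parameters, noise levels, and open-circuit-voltage curve are all hypothetical, not values from the paper):

```python
import numpy as np

# Illustrative first-order equivalent-circuit parameters (hypothetical values,
# not taken from the paper).
CAPACITY_AS = 2.0 * 3600.0   # cell capacity in ampere-seconds (2 Ah)
R0 = 0.05                    # ohmic resistance [ohm]
Q_NOISE = 1e-7               # process noise variance
R_NOISE = 1e-3               # measurement noise variance

def ocv(soc):
    """Toy open-circuit-voltage curve (linear, for illustration only)."""
    return 3.2 + 0.9 * soc

def docv_dsoc(soc):
    return 0.9

def ekf_step(soc, p, current, v_meas, dt):
    """One extended-Kalman-filter step for state of charge.

    Sign convention: positive current = discharge.
    """
    # Predict: coulomb counting
    soc_pred = soc - current * dt / CAPACITY_AS
    p_pred = p + Q_NOISE
    # Update: linearize the terminal-voltage measurement around the prediction
    h = docv_dsoc(soc_pred)                       # measurement Jacobian
    v_pred = ocv(soc_pred) - R0 * current
    k = p_pred * h / (h * p_pred * h + R_NOISE)   # Kalman gain
    soc_new = soc_pred + k * (v_meas - v_pred)
    p_new = (1.0 - k * h) * p_pred
    return soc_new, p_new

def peak_discharge_power(soc, v_min=3.0):
    """Voltage-limited peak discharge power at the current SOC estimate."""
    i_max = (ocv(soc) - v_min) / R0
    return v_min * i_max

# Simulate a constant 1 A discharge and track SOC with the EKF.
true_soc, est_soc, p = 0.9, 0.5, 1.0   # deliberately wrong initial estimate
dt, current = 1.0, 1.0
rng = np.random.default_rng(0)
for _ in range(600):
    true_soc -= current * dt / CAPACITY_AS
    v_meas = ocv(true_soc) - R0 * current + rng.normal(0, 0.01)
    est_soc, p = ekf_step(est_soc, p, current, v_meas, dt)

print(f"true SOC {true_soc:.3f}, EKF estimate {est_soc:.3f}")
print(f"peak discharge power ~ {peak_discharge_power(est_soc):.1f} W")
```

    The same recursive structure extends to the multi-constraint, multi-timescale predictions of the paper by enlarging the state vector and evaluating power and energy limits over longer horizons.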

  9. Collaborative Scheduling Using JMS in a Mixed Java and .NET Environment

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Ray; Baldwin, John; Borden, Chet

    2006-01-01

    A collaborative framework/environment was prototyped to prove the feasibility of scheduling space flight missions on NASA's Deep Space Network (DSN) in a distributed fashion. In this environment, effective collaboration relies on efficient communications among all flight mission and DSN scheduling users. Therefore, messaging becomes critical to timely event notification and data synchronization. In the prototype, a rapid messaging system using Java Message Service (JMS) in a mixed Java and .NET environment is established. This scheme allows both Java and .NET applications to communicate with each other for data synchronization and schedule negotiation. The JMS approach we used is based on a centralized messaging scheme. With proper use of a high-speed messaging system, all users in this collaborative framework can communicate with each other to generate a schedule collaboratively to meet DSN and project tracking needs.

  10. Patients' and clinicians' views on the optimum schedules for self-monitoring of blood pressure: a qualitative focus group and interview study.

    PubMed

    Grant, Sabrina; Hodgkinson, James A; Milner, Siobhan L; Martin, Una; Tompson, Alice; Hobbs, Fd Richard; Mant, Jonathan; McManus, Richard J; Greenfield, Sheila M

    2016-11-01

    Self-monitoring of blood pressure is common but guidance on how it should be carried out varies and it is currently unclear how such guidance is viewed. To explore patients' and healthcare professionals' (HCPs) views and experiences of the use of different self-monitoring regimens to determine what is acceptable and feasible, and to inform future recommendations. Thirteen focus groups and four HCP interviews were held, with a total of 66 participants (41 patients and 25 HCPs) from primary and secondary care with and without experience of self-monitoring. Standard and shortened self-monitoring protocols were both considered. Focus groups and interviews were recorded, transcribed verbatim, and analysed using the constant comparative method. Patients generally supported structured schedules but with sufficient flexibility to allow adaptation to individual routine. They preferred a shorter (3-day) schedule to longer (7-day) regimens. Although HCPs could describe benefits for patients of using a schedule, they were reluctant to recommend a specific schedule. Concerns surrounded the use of different schedules for diagnosis and subsequent monitoring. Appropriate education was seen as vital by all participants to enable a self-monitoring schedule to be followed at home. There is not a 'one size fits all approach' to developing the optimum protocol from the perspective of users and those implementing it. An approach whereby patients are asked to complete the minimum number of readings required for accurate blood pressure estimation in a flexible manner seems most likely to succeed. Informative advice and guidance should incorporate such flexibility for patients and professionals alike. © British Journal of General Practice 2016.

  11. Renaissance: A revolutionary approach for providing low-cost ground data systems

    NASA Technical Reports Server (NTRS)

    Butler, Madeline J.; Perkins, Dorothy C.; Zeigenfuss, Lawrence B.

    1996-01-01

    NASA is shifting its attention from large missions to a greater number of smaller missions with reduced development schedules and budgets. In this context, the Renaissance Mission Operations and Data Systems Directorate systems engineering process is presented. The aim of the Renaissance approach is to improve system performance, reduce costs and schedules, and meet specific customer needs. The approach includes: the early involvement of users to define mission requirements and system architectures; the streamlining of management processes; the development of a flexible cost estimation capability; and the ability to insert technology. Renaissance-based systems demonstrate significant reuse of commercial off-the-shelf building blocks in an integrated system architecture.

  12. A genetic algorithm-based job scheduling model for big data analytics.

    PubMed

    Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei

    Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. The existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes excessive energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
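
    As an illustrative sketch of the general technique (not the paper's model; the job runtimes, cluster size, and GA settings below are invented), a permutation-encoded genetic algorithm can order jobs so that a greedy dispatcher minimizes makespan on a small cluster:

```python
import random

# Hypothetical job runtimes (seconds) as a stand-in for the estimation
# module's predictions; a real system would learn these from profiling.
JOBS = [5, 12, 7, 3, 9, 11, 4, 8]
N_NODES = 3

def makespan(order):
    """Greedy dispatch: each job in the given order goes to the least-loaded node."""
    loads = [0] * N_NODES
    for j in order:
        loads[loads.index(min(loads))] += JOBS[j]
    return max(loads)

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest from parent b."""
    i, k = sorted(random.sample(range(len(a)), 2))
    middle = a[i:k]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order):
    i, k = random.sample(range(len(order)), 2)
    order[i], order[k] = order[k], order[i]

def evolve(pop_size=30, generations=60):
    pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                 # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            child = crossover(*random.sample(survivors, 2))
            if random.random() < 0.3:
                mutate(child)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

random.seed(1)
best = evolve()
print("best order:", best, "makespan:", makespan(best))
```

    With three nodes and these runtimes the lower bound on makespan is 20 (total work 59); the evolved permutation approaches that bound.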

  13. Methodology for Software Reliability Prediction. Volume 2.

    DTIC Science & Technology

    1987-11-01

    The overall acquisition program shall include the resources, schedule, management, structure, and controls necessary to ensure that specified AD... Independent Verification/Validation - Programming Team Structure - Educational Level of Team Members - Experience Level of Team Members * Methods Used... Prediction or Estimation Parameter Supported: Software Characteristics 3. Objectives: Structured programming studies and Government Ur.'.. procurement

  14. Prediction and Uncertainty in Human Pavlovian to Instrumental Transfer

    ERIC Educational Resources Information Center

    Trick, Leanne; Hogarth, Lee; Duka, Theodora

    2011-01-01

    Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively.…

  15. Guidelines for Estimating Cone and Seed Yields of Southern Pines

    Treesearch

    James P. Barnett

    1999-01-01

    Our ability to predict cone and seed yields of southern pines (Pinus spp.) prior to collection is important when scheduling and allocating resources. Many managers have enough historical data to predict their orchards' yield; but such data are generally unavailable for some species and for collections outside of orchards. Guidelines are...

  16. Workplace Policies and Mental Health among Working-Class, New Parents

    PubMed Central

    Perry-Jenkins, Maureen; Smith, JuliAnna Z.; Wadsworth, Lauren Page; Halpern, Hillary Paul

    2017-01-01

    Little research has explored linkages between workplace policies and mental health in working-class, employed parents, creating a gap in our knowledge of work-family issues across social class levels. The current U.S. study addresses this gap by employing hierarchical linear modeling techniques to examine how workplace policies and parental leave benefits predicted parents' depressive symptoms and anxiety in a sample of 125 low-income, dual-earner couples interviewed across the transition to parenthood. Descriptive analyses revealed that, on average, parents had few workplace policies, such as schedule flexibility or child care supports, available to them. Results revealed, however, that, when available, schedule flexibility was related to fewer depressive symptoms and less anxiety for new mothers. Greater child care supports predicted fewer depressive symptoms for fathers. In terms of crossover effects, longer maternal leave predicted declines in fathers' anxiety across the first year. Results are discussed with attention to how certain workplace policies may serve to alleviate new parents' lack of time and resources (minimize scarcity of resources) and, in turn, predict better mental health during the sensitive period of new parenthood. PMID:29242705

  17. Intelligent Resource Management for Local Area Networks: Approach and Evolution

    NASA Technical Reports Server (NTRS)

    Meike, Roger

    1988-01-01

    The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real time process scheduling for realistic loads, and utilizing the IEEE 802.4 token passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.

  18. Providing an empirical basis for optimizing the verification and testing phases of software development

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.

    1992-01-01

    Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault density components so that the testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents an alternative approach for constructing such models that is intended to fulfill specific software engineering needs (i.e. dealing with partial/incomplete information and creating models that are easy to interpret). Our approach to classification is as follows: (1) to measure the software system to be considered; and (2) to build multivariate stochastic models for prediction. We present experimental results obtained by classifying FORTRAN components developed at the NASA/GSFC into two fault density classes: low and high. Also we evaluate the accuracy of the model and the insights it provides into the software process.

  19. Potential Fifty Percent Reduction in Saturation Diving Decompression Time Using a Combination of Intermittent Recompression and Exercise

    NASA Technical Reports Server (NTRS)

    Gernhardt, Michael I.; Abercromby, Andrew; Conklin, Johnny

    2007-01-01

    Conventional saturation decompression protocols use linear decompression rates that become progressively slower at shallower depths, consistent with free gas phase control vs. dissolved gas elimination kinetics. If decompression is limited by control of free gas phase, linear decompression is an inefficient strategy. The NASA prebreathe reduction program demonstrated that exercise during O2 prebreathe resulted in a 50% reduction (2 h vs. 4 h) in the saturation decompression time from 14.7 to 4.3 psi and a significant reduction in decompression sickness (DCS: 0 vs. 23.7%). Combining exercise with intermittent recompression, which controls gas phase growth and eliminates supersaturation before exercising, may enable more efficient saturation decompression schedules. A tissue bubble dynamics model (TBDM) was used in conjunction with a NASA exercise prebreathe model (NEPM) that relates tissue inert gas exchange rate constants to exercise (ml O2/kg-min), to develop a schedule for decompression from helium saturation at 400 fsw. The models provide significant prediction (p < 0.001) and goodness of fit with 430 cases of DCS in 6437 laboratory dives for TBDM (p = 0.77) and with 22 cases of DCS in 159 altitude exposures for NEPM (p = 0.70). The models have also been used operationally in over 25,000 dives (TBDM) and 40 spacewalks (NEPM). The standard U.S. Navy (USN) linear saturation decompression schedule from saturation at 400 fsw required 114.5 h with a maximum Bubble Growth Index (BGI(sub max)) of 17.5. Decompression using intermittent recompression combined with two 10 min exercise periods (75% VO2 (sub peak)) per day required 54.25 h (BGI(sub max): 14.7). Combined intermittent recompression and exercise resulted in a theoretical 53% (2.5 day) reduction in decompression time and theoretically lower DCS risk compared to the standard USN decompression schedule. These results warrant future decompression trials to evaluate the efficacy of this approach.

  20. Manage Your Cash

    ERIC Educational Resources Information Center

    Matthews, Kenneth M.

    1976-01-01

    Discusses formulas for planning school district investment and borrowing strategies based on a district's predicted cash flow and presents a sample investment/borrowing schedule developed from hypothetical cash-flow data. (JG)

  1. Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response

    NASA Astrophysics Data System (ADS)

    Niu, X. N.; Tang, H.; Wu, L. X.

    2018-04-01

    Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor a disaster area. In this paper, to generate an imaging plan dynamically as disaster relief proceeds, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are simultaneously maximized. Then, we use a multi-objective optimization model to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a mixed optimizing algorithm named HA_NSGA-II to allocate the decomposing results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and used to evaluate the performance of the proposed method against state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.

  2. Energy efficient mechanisms for high-performance Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Alsaify, Baha'adnan

    2009-12-01

    Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices became possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered in the area to monitor. Researchers focus on several aspects of WSNs. Such aspects include the quality of service the WSNs provide (data delivery delay, accuracy of data, etc.), the scalability of the network to contain thousands of sensor nodes (the terms node and sensor node are used interchangeably), the robustness of the network (allowing the network to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices scattered in the monitored area. This work uses the well-known approach of wakeup scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay the reported data will experience, while retaining the advantage offered by wakeup scheduling -- the reduction in energy consumption that prolongs the network's lifetime. The wakeup schedule is based on the location of the node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We test the performance of this approach against three other approaches -- the Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction in the consumed energy.
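
    The staggered-wakeup idea can be made concrete with a toy scheme (an assumption for illustration, not the thesis's actual algorithm): a node at hop distance d from the sink wakes one slot before its next-hop neighbor, so a packet advances one hop per slot toward the Base Station while nodes sleep the rest of the frame.

```python
# Illustrative staggered wakeup scheduling. FRAME_SLOTS and the slot rule
# are invented for this sketch.
FRAME_SLOTS = 8

def wake_slot(hop_distance):
    """Node at hop distance d wakes in slot (FRAME_SLOTS - d) mod FRAME_SLOTS,
    so successive hops toward the sink wake in successive slots."""
    return (FRAME_SLOTS - hop_distance) % FRAME_SLOTS

def worst_case_delay(hop_distance):
    """A packet generated just after its node's slot waits at most one full
    frame, then advances one hop per slot: at most FRAME_SLOTS + d slots."""
    return FRAME_SLOTS + hop_distance

nodes = {"A": 1, "B": 2, "C": 3, "D": 3}   # node -> hops from the sink
for name, d in nodes.items():
    print(name, "wakes in slot", wake_slot(d),
          "| worst-case delay", worst_case_delay(d), "slots")
```

    The energy saving comes from each node being awake for only one slot per frame; the delay bound stays linear in hop distance instead of growing with uncoordinated sleep cycles.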

  3. Toward an Autonomous Telescope Network: the TBT Scheduler

    NASA Astrophysics Data System (ADS)

    Racero, E.; Ibarra, A.; Ocaña, F.; de Lis, S. B.; Ponz, J. D.; Castillo, M.; Sánchez-Portal, M.

    2015-09-01

    Within the ESA SSA program, it is foreseen to deploy several robotic telescopes to provide surveillance and tracking services for hazardous objects. The TBT project will procure a validation platform for an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor SSA services. In this context, night planning and scheduling is handled by two software modules: the TBT Scheduler, which allows both manual and autonomous planning of the night, and the RTS2 internal scheduler, which controls the real-time response of the system. The TBT Scheduler allocates tasks for both telescopes without human intervention. Every night it takes all the inputs needed and prepares the schedule following predefined rules. The main purpose of the scheduler is the distribution of time between follow-up of recently discovered targets and surveys. The TBT Scheduler considers the overall performance of the system and combines follow-up with a priori survey strategies for both kinds of objects. The strategy is defined according to the expected combined performance of both systems for the upcoming night (weather, sky brightness, object accessibility, and priority). Therefore, the TBT Scheduler defines the global approach for the network and relies on the RTS2 internal scheduler for the final detailed distribution of tasks at each sensor.

  4. Nuclear Weapons: NNSA Has a New Approach to Managing the B61-12 Life Extension, but a Constrained Schedule and Other Risks Remain

    DTIC Science & Technology

    2016-02-01

    components. In 2010, they began an LEP to consolidate four versions of a legacy nuclear weapon, the B61 bomb, into a bomb called the B61-12 (see... Force Integrated Master Schedule BIMS Boeing Integrated Master Schedule B61 bomb B61 legacy bomb CD critical decision Cost Guide GAO Cost... are versions of the B61 bomb, an aircraft-delivered weapon that is a key component of the United States' commitments to the North Atlantic Treaty

  5. On the distinction between open and closed economies.

    PubMed Central

    Timberlake, W; Peden, B F

    1987-01-01

    Open and closed economies have been assumed to produce opposite relations between responding and the programmed density of reward (the amount of reward divided by its cost). Experimental procedures that are treated as open economies typically dissociate responding and total reward by providing supplemental income outside the experimental session; procedures construed as closed economies do not. In an open economy responding is assumed to be directly related to reward density, whereas in a closed economy responding is assumed to be inversely related to reward density. In contrast to this predicted correlation between response-reward relations and type of economy, behavior regulation theory predicts both direct and inverse relations in both open and closed economies. Specifically, responding should be a bitonic function of reward density regardless of the type of economy and is dependent only on the ratio of the schedule terms rather than on their absolute size. These predictions were tested by four experiments in which pigeons' key pecking produced food on fixed-ratio and variable-interval schedules over a range of reward magnitudes and under several open- and closed-economy procedures. The results better supported the behavior regulation view by showing a general bitonic function between key pecking and food density in all conditions. In most cases, the absolute size of the schedule requirement and the magnitude of reward had no effect; equal ratios of these terms produced approximately equal responding. PMID:3625103

  6. Pre-exposure to environmental cues predictive of food availability elicits hypothalamic-pituitary-adrenal axis activation and increases operant responding for food in female rats.

    PubMed

    Cifani, Carlo; Zanoncelli, Alessandro; Tessari, Michela; Righetti, Claudio; Di Francesco, Carla; Ciccocioppo, Roberto; Massi, Maurizio; Melotto, Sergio

    2009-09-01

    The present study was undertaken to develop an animal model exploiting food cue-induced increased motivation to obtain food under operant self-administration conditions. To demonstrate the predictive validity of the model, rimonabant, fluoxetine, sibutramine and topiramate, administered 1 hour before the experiment, were tested. For 5 days, female Wistar rats were trained to self-administer standard 45 mg food pellets in one daily session (30 minutes) under an FR1 (fixed ratio 1) schedule of reinforcement. Rats were then trained to an FR3 schedule and finally divided into two groups. The first group (control) was subjected to a standard 30-minute FR3 food self-administration session. The second group was exposed to five presentations of levers and light for 10 seconds each (every 3 minutes in 15 minutes total). At the completion of this pre-session phase, a normal 30-minute session (as in the control group) started. Results showed that pre-exposure to environmental stimuli associated with food deliveries increased responding for food when the session started. Corticosterone and adrenocorticotropic hormone plasma levels, measured after the 15-minute pre-exposure, were also significantly increased. No changes were observed for the other measured hormones (growth hormone, prolactin, thyroid-stimulating hormone, luteinizing hormone, insulin, amylin, gastric inhibitory polypeptide, ghrelin, leptin, peptide YY and pancreatic polypeptide). Rimonabant, sibutramine and fluoxetine significantly reduced food intake both in animals pre-exposed and in those not pre-exposed to food-associated cues. Topiramate selectively reduced feeding only in pre-exposed rats. The present study describes the development of a new animal model to investigate cue-induced increased motivation to obtain food. This model shows face and predictive validity, thus supporting its usefulness in the investigation of new potential treatments of binge-related eating disorders. 
In addition, the present findings confirm that topiramate may represent an important pharmacotherapeutic approach to binge-related eating.

  7. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    PubMed Central

    Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun

    2016-01-01

    Energy efficiency is considered as a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentations of the idle time, which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on T-L plane abstraction. PMID:27399722

  8. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints, and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm: stochastic hill climbing, simulated annealing, squeaky wheel optimization, and iterated sampling on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
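
    The permutation representation and deterministic scheduler described above are easy to sketch. In the toy version below (the request time windows and the single shared instrument are invented; the real problems involve fleets and far richer constraints), simulated annealing searches over permutations while a greedy scheduler assigns start times and discards infeasible requests:

```python
import math
import random

# Toy observation requests: (earliest start, latest start, duration),
# in arbitrary time units; values are hypothetical.
REQUESTS = [(0, 4, 3), (1, 6, 2), (2, 8, 3), (0, 9, 2), (5, 11, 3), (3, 12, 2)]

def schedule(perm):
    """Deterministic greedy scheduler: walk the permutation, give each
    request the earliest feasible start on a single shared instrument,
    and discard requests that no longer fit their window."""
    busy_until, scheduled = 0, []
    for i in perm:
        earliest, latest, dur = REQUESTS[i]
        start = max(earliest, busy_until)
        if start <= latest:
            scheduled.append(i)
            busy_until = start + dur
        # else: request is discarded as infeasible in this ordering
    return scheduled

def anneal(steps=2000, t0=2.0):
    """Simulated annealing over permutations with random-swap mutation."""
    perm = list(range(len(REQUESTS)))
    best = perm[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6       # linear cooling
        cand = perm[:]
        i, k = random.sample(range(len(cand)), 2)
        cand[i], cand[k] = cand[k], cand[i]
        delta = len(schedule(cand)) - len(schedule(perm))
        if delta >= 0 or random.random() < math.exp(delta / t):
            perm = cand
            if len(schedule(perm)) > len(schedule(best)):
                best = perm[:]
    return best

random.seed(0)
best = anneal()
print("scheduled requests:", schedule(best))
```

    With these windows at most five of the six requests can fit on one instrument (total duration 15 exceeds the latest possible finish of 14), and the annealer finds such an ordering.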

  9. A Baseline Load Schedule for the Manual Calibration of a Force Balance

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Gisler, R.

    2013-01-01

    A baseline load schedule for the manual calibration of a force balance was developed that takes current capabilities at the NASA Ames Balance Calibration Laboratory into account. The load schedule consists of 18 load series with a total of 194 data points. It was designed to satisfy six requirements: (i) positive and negative loadings should be applied for each load component; (ii) at least three loadings should be applied between 0 % and 100 % load capacity; (iii) normal and side force loadings should be applied at the forward gage location, the aft gage location, and the balance moment center; (iv) the balance should be used in UP and DOWN orientation to get axial force loadings; (v) the constant normal and side force approaches should be used to get the rolling moment loadings; (vi) rolling moment loadings should be obtained for 0, 90, 180, and 270 degrees balance orientation. Three different approaches are also reviewed that may be used to independently estimate the natural zeros of the balance. These three approaches provide gage output differences that may be used to estimate the weight of both the metric and non-metric parts of the balance. Manual calibration data of NASA's MK29A balance and machine calibration data of NASA's MC60D balance are used to illustrate and evaluate different aspects of the proposed baseline load schedule design.

  10. Task scheduling in dataflow computer architectures

    NASA Technical Reports Server (NTRS)

    Katsinis, Constantine

    1994-01-01

    Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. 
A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
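
    As a sketch of the kind of scheduling such tools perform (the graph and task costs below are invented for illustration, not NASA's tools), a minimal list scheduler assigns each ready task of a dataflow graph to the earliest-free processor, making the effect of the processor count on schedule length easy to explore:

```python
from collections import deque

# Toy dataflow graph: node -> (execution time, list of successors).
# Values are hypothetical, loosely in the signal-processing spirit.
GRAPH = {
    "read":  (2, ["fft"]),
    "fft":   (4, ["mag", "phase"]),
    "mag":   (3, ["out"]),
    "phase": (3, ["out"]),
    "out":   (1, []),
}

def list_schedule(n_procs):
    """Simple list scheduling: repeatedly assign a ready task to the
    earliest-free processor, respecting data dependencies."""
    indeg = {n: 0 for n in GRAPH}
    for _, (_, succs) in GRAPH.items():
        for s in succs:
            indeg[s] += 1
    finish = {}                        # node -> completion time
    proc_free = [0.0] * n_procs
    ready = deque(n for n, d in indeg.items() if d == 0)
    while ready:
        node = ready.popleft()
        cost, succs = GRAPH[node]
        # A task may start once all producers have finished.
        deps_done = max((finish[p] for p in GRAPH if node in GRAPH[p][1]),
                        default=0.0)
        p = proc_free.index(min(proc_free))
        start = max(proc_free[p], deps_done)
        finish[node] = start + cost
        proc_free[p] = finish[node]
        for s in succs:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())        # schedule makespan

for procs in (1, 2):
    print(procs, "processor(s): makespan", list_schedule(procs))
```

    Here two processors shorten the makespan from 13 (total work) to 10 (the critical path read-fft-mag-out), illustrating how resource constraints and graph structure jointly bound performance.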

  11. Pharmacokinetic Studies in Neonates: The Utility of an Opportunistic Sampling Design.

    PubMed

    Leroux, Stéphanie; Turner, Mark A; Guellec, Chantal Barin-Le; Hill, Helen; van den Anker, Johannes N; Kearns, Gregory L; Jacqz-Aigrain, Evelyne; Zhao, Wei

    2015-12-01

    The use of an opportunistic (also called scavenged) sampling strategy in a prospective pharmacokinetic study combined with population pharmacokinetic modelling has been proposed as an alternative strategy to conventional methods for accomplishing pharmacokinetic studies in neonates. However, the reliability of this approach in this particular paediatric population has not been evaluated. The objective of the present study was to evaluate the performance of an opportunistic sampling strategy for a population pharmacokinetic estimation, as well as dose prediction, and compare this strategy with a predetermined pharmacokinetic sampling approach. Three population pharmacokinetic models were derived for ciprofloxacin from opportunistic blood samples (SC model), predetermined (i.e. scheduled) samples (TR model) and all samples (full model used to previously characterize ciprofloxacin pharmacokinetics), using NONMEM software. The predictive performance of developed models was evaluated in an independent group of patients. Pharmacokinetic data from 60 newborns were obtained with a total of 430 samples available for analysis; 265 collected at predetermined times and 165 that were scavenged from those obtained as part of clinical care. All datasets were fit using a two-compartment model with first-order elimination. The SC model could identify the most significant covariates and provided reasonable estimates of population pharmacokinetic parameters (clearance and steady-state volume of distribution) compared with the TR and full models. Their predictive performances were further confirmed in an external validation by Bayesian estimation, and showed similar results. Monte Carlo simulation based on area under the concentration-time curve from zero to 24 h (AUC24)/minimum inhibitory concentration (MIC) using either the SC or the TR model gave similar dose prediction for ciprofloxacin. 
Blood samples scavenged in the course of caring for neonates can be used to estimate ciprofloxacin pharmacokinetic parameters and therapeutic dose requirements.
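The dose-prediction step above rests on the identity AUC24 = daily dose / clearance, which holds for any linear pharmacokinetic model, so a two-compartment fit enters only through its clearance estimate. A minimal Monte Carlo probability-of-target-attainment sketch follows; the lognormal clearance distribution, the AUC24/MIC target of 125 and all numeric inputs are illustrative assumptions, not the study's values:

```python
import math
import random

def pta(daily_dose_mg, cl_mean, cl_cv, mic, target=125.0, n=10000, seed=1):
    """Monte Carlo probability of attaining AUC24/MIC >= target.

    For any linear PK model AUC24 = daily dose / clearance, so only a
    population clearance distribution is needed here. cl_mean (L/h) and
    cl_cv parameterize an assumed lognormal clearance distribution.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cl_cv ** 2))   # lognormal shape
    mu = math.log(cl_mean) - sigma ** 2 / 2.0       # preserves the mean
    hits = 0
    for _ in range(n):
        cl = math.exp(rng.gauss(mu, sigma))          # simulated clearance
        if (daily_dose_mg / cl) / mic >= target:
            hits += 1
    return hits / n
```

Sweeping the candidate dose until the attainment probability crosses a chosen threshold then yields a dose recommendation in the style of the simulation described above.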

  12. Modelling fatigue and the use of fatigue models in work settings.

    PubMed

    Dawson, Drew; Ian Noy, Y; Härmä, Mikko; Akerstedt, Torbjorn; Belenky, Gregory

    2011-03-01

    In recent years, theoretical models of the sleep and circadian system developed in laboratory settings have been adapted to predict fatigue and, by inference, performance. This is typically done using the timing of prior sleep and waking or working hours as the primary input and the time course of the predicted variables as the primary output. The aim of these models is to provide employers, unions and regulators with quantitative information on the likely average level of fatigue, or risk, associated with a given pattern of work and sleep, with the goal of better managing the risk of fatigue-related errors and accidents/incidents. The first part of this review summarises the variables known to influence workplace fatigue and draws attention to the considerable variability attributable to individual and task variables not included in current models. The second part reviews the current fatigue models described in the scientific and technical literature and classifies them according to whether they predict fatigue directly by using the timing of prior sleep and wake (one-step models) or indirectly by using work schedules to infer an average sleep-wake pattern that is then used to predict fatigue (two-step models). The third part of the review looks at the current use of fatigue models in field settings by organizations and regulators. Given their limitations, it is suggested that the current generation of models may be appropriate for use as one element in a fatigue risk management system. The final section of the review looks at the future of these models and recommends a standardised approach for their use as an element of the 'defenses-in-depth' approach to fatigue risk management. Copyright © 2010 Elsevier Ltd. All rights reserved.
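A "one-step" model in the review's classification takes the timing of prior sleep and wake directly as input. The following toy sketch, with illustrative (not validated) time constants, combines a homeostatic pressure process with a circadian sinusoid; it is not any of the published models the review surveys:

```python
import math

def alertness(hours, asleep, tau_w=18.2, tau_s=4.2, amp=0.12, dt=0.1):
    """Toy one-step fatigue sketch: homeostatic pressure S builds
    during wake and recovers during sleep; a circadian sinusoid C is
    added. Time constants and amplitude are illustrative assumptions.
    asleep(t) -> True while the worker is asleep at hour t."""
    s, out = 0.5, []
    for i in range(int(round(hours / dt))):
        t = i * dt
        if asleep(t):
            s += (0.0 - s) * dt / tau_s   # exponential recovery toward 0
        else:
            s += (1.0 - s) * dt / tau_w   # pressure builds toward 1
        c = amp * math.cos(2.0 * math.pi * (t - 18.0) / 24.0)  # peak ~18:00
        out.append((t, 1.0 - s + c))      # higher value = more alert
    return out

# One day with sleep from 00:00 to 08:00, awake thereafter:
trace = alertness(24.0, lambda t: t < 8.0)
```

A "two-step" model would first infer the sleep-wake pattern (the `asleep` function) from a work schedule, then apply the same kind of prediction.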

  13. Water Quality Projects Summary for the Mid-Columbia and Cumberland River Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Kevin M.; Witt, Adam M.; Hadjerioua, Boualem

    Scheduling and operational control of hydropower systems is accompanied by a keen awareness of the management of water use, environmental effects, and policy, especially within the context of strict water rights policy and generation maximization. This is a multi-objective problem for many hydropower systems, including the Cumberland and Mid-Columbia river systems. Though each of these two systems has distinct operational philosophies, hydrologic characteristics, and system dynamics, they both share a responsibility to effectively manage hydropower and the environment, which requires state-of-the-art improvements in the approaches and applications for water quality modeling. The Department of Energy and Oak Ridge National Laboratory (ORNL) have developed tools for total dissolved gas (TDG) prediction on the Mid-Columbia River and a decision-support system used for hydropower generation and environmental optimization on the Cumberland River. In conjunction with IIHR - Hydroscience & Engineering at The University of Iowa and the University of Colorado's Center for Advanced Decision Support for Water and Environmental Systems (CADSWES), ORNL has managed the development of a TDG predictive methodology at seven dams along the Mid-Columbia River and has enabled the use of this methodology for optimization of operations at these projects with the commercially available software package RiverWare. ORNL has also managed the collaboration with Vanderbilt University and Lipscomb University to develop a state-of-the-art method for reducing high-fidelity water quality modeling results into surrogate models which can be used effectively within the context of optimization efforts to maximize generation for a reservoir system based on environmental and policy constraints. 
The novel contribution of these efforts is the ability to predict water quality conditions with simplified methodologies at the same level of accuracy as more complex and resource-intensive computing methods. These efforts were designed to integrate well into existing hydropower and reservoir system scheduling models, with runtimes comparable to those of existing software tools. In addition, the transferability of these tools to other systems is enhanced by the use of simple and easily attainable input values, straightforward calibration of predictive equation coefficients, and standardized comparison of traditionally familiar outputs.

  14. NASA/NSF Antarctic Science Working Group

    NASA Technical Reports Server (NTRS)

    Stoklosa, Janis H.

    1990-01-01

    A collection of viewgraphs on NASA's Life Sciences Biomedical Programs is presented. They show the structure of the Life Sciences Division; the tentative space exploration schedule from the present to 2018; the biomedical programs with their objectives, research elements, and methodological approaches; validation models; proposed Antarctic research as an analog for space exploration; and the Science Working Group's schedule of events.

  15. Impacts of Scheduling Recess before Lunch in Elementary Schools: A Case Study Approach of Plate Waste and Perceived Behaviors

    ERIC Educational Resources Information Center

    Strohbehn, Catherine H.; Strohbehn, Garth W.; Lanningham-Foster, Lorraine; Litchfield, Ruth A.; Scheidel, Carrie; Delger, Patti

    2016-01-01

    Purpose/Objectives: Recess Before Lunch (RBL) for elementary students is considered a best practice related to increased nutrient intakes at lunch, decreased afternoon behavioral issues, and increased afternoon learning efficiency; however, school characteristics, such as amount of time for lunch, offer vs. serve, and scheduling factors can…

  16. 75 FR 32773 - Auction of 218-219 MHz Service and Phase II 220 MHz Service Licenses Scheduled for December 7...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-09

    ... licenses included in Auction 89 using the Commission's standard simultaneous multiple-round auction format... sequential bidding rounds. The initial bidding schedule will be announced in a public notice to be released.... For Auction 89, the Bureau proposes to employ a simultaneous stopping rule approach. A simultaneous...

  17. Treatment of hyperthyroidism with radioiodine targeted activity: A comparison between two dosimetric methods.

    PubMed

    Amato, Ernesto; Campennì, Alfredo; Leotta, Salvatore; Ruggeri, Rosaria M; Baldari, Sergio

    2016-06-01

    Radioiodine therapy is an effective and safe treatment of hyperthyroidism due to Graves' disease, toxic adenoma, or toxic multinodular goiter. We compared the outcomes of a traditional calculation method, based on an analytical fit of the uptake curve and subsequent dose calculation with the MIRD approach, against an alternative computation approach based on a formulation implemented in a public-access website, searching for the best timing of radioiodine uptake measurements in pre-therapeutic dosimetry. We report on sixty-nine hyperthyroid patients who were treated after pre-therapeutic dosimetry calculated by fitting a six-point uptake curve (3-168 h). In order to evaluate the results of the radioiodine treatment, patients were followed up to sixty-four months after treatment (mean 47.4±16.9). Patient dosimetry was then retrospectively recalculated with the two above-mentioned methods. Several time schedules for uptake measurements were considered, with different timings and total numbers of points. Early time schedules, sampling uptake up to 48 h, do not allow an accurate treatment plan to be set up, while schedules including the measurement at one week give significantly better results. The analytical fit procedure applied to the three-point time schedule 3(6)-24-168 h gave results significantly more accurate than the website approach exploiting either the same schedule or the single measurement at 168 h. Consequently, the best strategy among those considered is to sample the uptake at 3(6)-24-168 h and carry out an analytical fit of the curve; extra measurements at 48 and 72 h lead to only marginal improvements in the accuracy of therapeutic activity determination. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
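The analytical-fit style of dosimetry described above can be illustrated with a mono-exponential washout fitted through two late uptake points, from which a residence time and an administered activity follow in the MIRD spirit. The S-factor and uptake values below are hypothetical placeholders, not the study's data or its actual fitting function:

```python
import math

def effective_lambda(u1, t1, u2, t2):
    """Effective decay constant (per hour) from two uptake fractions
    on an assumed mono-exponential washout, e.g. at 24 h and 168 h."""
    return math.log(u1 / u2) / (t2 - t1)

def residence_time(u1, t1, lam):
    """Integral of U(t) = U0 * exp(-lam * t) from 0 to infinity (h)."""
    u0 = u1 * math.exp(lam * t1)   # back-extrapolated uptake at t = 0
    return u0 / lam

def activity_for_dose(target_gy, s_factor, u1, t1, u2, t2):
    """Administered activity (MBq) such that A * tau * S reaches the
    target absorbed dose; s_factor in Gy/(MBq*h) is a hypothetical
    input standing in for the MIRD S value of the gland."""
    lam = effective_lambda(u1, t1, u2, t2)
    return target_gy / (s_factor * residence_time(u1, t1, lam))
```

With only the 24 h and 168 h points used here, the sketch mirrors why the late measurement matters: the fitted washout constant, and hence the residence time, is driven by the one-week sample.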

  18. Automated observation scheduling for the VLT

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.

    1988-01-01

    It is becoming increasingly evident that, in order to optimize the observing efficiency of large telescopes, some changes will be required in the way observations are planned and executed. Not all observing programs require the presence of the astronomer at the telescope: for those programs which permit service observing, it is possible to better match planned observations to conditions at the telescope. This concept of flexible scheduling has been proposed for the VLT: based on current and predicted environmental and instrumental conditions, the scheduler selects observations that make the most efficient possible use of valuable telescope time. A similar kind of observation scheduling is already necessary for some space observatories, such as the Hubble Space Telescope (HST). Space Telescope Science Institute is presently developing scheduling tools for HST, based on the use of artificial intelligence software development techniques. These tools could be readily adapted for ground-based telescope scheduling since they address many of the same issues. The concepts on which the HST tools are based are described, along with their implementation and what would be required to adapt them for use with the VLT and other ground-based observatories.

  19. Task and Participant Scheduling of Trading Platforms in Vehicular Participatory Sensing Networks

    PubMed Central

    Shi, Heyuan; Song, Xiaoyu; Gu, Ming; Sun, Jiaguang

    2016-01-01

    The vehicular participatory sensing network (VPSN) is now becoming increasingly prevalent and has shown great potential in various applications. A general VPSN consists of many tasks from task publishers, trading platforms and a crowd of participants. Some literature treats publishers and the trading platform as a whole, which is impractical since they are two independent economic entities with respective purposes. For a trading platform in markets, the purpose is to maximize profit by selecting tasks and recruiting participants who satisfy the requirements of the accepted tasks, rather than to improve the quality of each task. This scheduling problem for a trading platform consists of two parts: which tasks should be selected, and which participants should be recruited? In this paper, we investigate the scheduling problem in vehicular participatory sensing with the predictable mobility of each vehicle. A genetic-based trading scheduling algorithm (GTSA) is proposed to solve the scheduling problem. Experiments with a realistic dataset of taxi trajectories demonstrate that the GTSA algorithm is efficient for trading platforms to gain considerable profit in a VPSN. PMID:27916807
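The genetic-algorithm idea behind GTSA can be sketched on a toy version of the platform's task-selection half: choose a subset of tasks maximizing total profit under a recruiting budget. This is not the paper's GTSA, just a minimal bitstring GA with elitism, one-point crossover and mutation under those assumed simplifications:

```python
import random

def ga_select(profits, costs, budget, pop=40, gens=60, seed=0):
    """Toy genetic selection of tasks (one bit per task) maximizing
    total profit subject to a recruiting budget. Infeasible
    individuals are penalized with fitness -1."""
    rng = random.Random(seed)
    n = len(profits)

    def fit(ind):
        p = sum(pr for pr, b in zip(profits, ind) if b)
        c = sum(co for co, b in zip(costs, ind) if b)
        return p if c <= budget else -1

    popu = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fit, reverse=True)
        elite = popu[: pop // 2]          # elitism: keep the top half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:        # occasional bit-flip mutation
                j = rng.randrange(n)
                child[j] ^= 1
            children.append(child)
        popu = elite + children
    best = max(popu, key=fit)
    return best, fit(best)
```

The real problem couples this with participant recruitment driven by predicted vehicle trajectories, which this sketch leaves out.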

  20. Task and Participant Scheduling of Trading Platforms in Vehicular Participatory Sensing Networks.

    PubMed

    Shi, Heyuan; Song, Xiaoyu; Gu, Ming; Sun, Jiaguang

    2016-11-28

    The vehicular participatory sensing network (VPSN) is now becoming increasingly prevalent and has shown great potential in various applications. A general VPSN consists of many tasks from task publishers, trading platforms and a crowd of participants. Some literature treats publishers and the trading platform as a whole, which is impractical since they are two independent economic entities with respective purposes. For a trading platform in markets, the purpose is to maximize profit by selecting tasks and recruiting participants who satisfy the requirements of the accepted tasks, rather than to improve the quality of each task. This scheduling problem for a trading platform consists of two parts: which tasks should be selected, and which participants should be recruited? In this paper, we investigate the scheduling problem in vehicular participatory sensing with the predictable mobility of each vehicle. A genetic-based trading scheduling algorithm (GTSA) is proposed to solve the scheduling problem. Experiments with a realistic dataset of taxi trajectories demonstrate that the GTSA algorithm is efficient for trading platforms to gain considerable profit in a VPSN.

  1. Value centric approaches to the design, operations and maintenance of wind turbines

    NASA Astrophysics Data System (ADS)

    Khadabadi, Madhur Aravind

    Wind turbine maintenance is emerging as an unexpectedly high component of turbine operating cost, and there is an increasing interest in managing this cost. This thesis presents an alternative view of maintenance as a value-driver, and develops an optimization algorithm to evaluate the value delivered by different maintenance techniques. I view maintenance as an operation that moves the turbine to an improved state in which it can generate more power and, thus, earn more revenue. To implement this approach, I model the stochastic deterioration of the turbine in two dimensions, the deterioration rate and the extent of deterioration, and then use maintenance to improve the state of the turbine. The value of the turbine is the difference between the revenue from power generation and the costs incurred in operation and maintenance. With a focus on blade deterioration, I evaluate the value delivered by implementing two different maintenance schemes: predictive maintenance and scheduled maintenance. An example of a predictive maintenance technique is the use of Condition Monitoring Systems to precisely detect deterioration. I model Condition Monitoring Systems (CMS) of different degrees of fidelity, where a higher-fidelity CMS allows the blade state to be determined with higher precision. The same model is then applied to the scheduled maintenance technique. The improved state information obtained from these techniques is then used to derive an optimal maintenance strategy. The difference between the value of the turbine with and without the inspection type can be interpreted as the value of the inspection. The results indicate that a higher-fidelity (and more expensive) inspection method does not necessarily yield the highest value, and that there is an optimal level of fidelity that results in maximum value. 
The results also aim to inform the operator of the impact of regional parameters, such as wind speed and its variance and maintenance costs, on the optimal maintenance strategy. The contributions of this work are twofold. First, I present a practical approach to wind turbine valuation that takes operating and market conditions into account. This work should therefore be useful to wind farm operators, investors and decision makers. Second, I show how the value of a maintenance scheme can be explicitly assessed for different conditions.

  2. Transmission overhaul estimates for partial and full replacement at repair

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1991-01-01

    Timely transmission overhauls raise in-flight service reliability above the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predictions of a transmission's maintenance needs at the design stage should enable the development of more cost-effective and reliable transmissions in the future. The frequency of overhaul is estimated, along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two-parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules which repair the transmission with either a complete system replacement or replacement of only the failed components. An example illustrates the methods.
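For a component with two-parameter Weibull life, the overhaul interval at a target reliability follows directly from R(t) = exp(-(t/η)^β), and a full-replacement (series-system) interval can be solved numerically from the product of component reliabilities. A sketch with illustrative parameters, not the report's actual methods or data:

```python
import math

def overhaul_interval(eta, beta, reliability=0.9):
    """Life at which a two-parameter Weibull component's reliability
    R(t) = exp(-(t/eta)^beta) falls to the target value."""
    return eta * (-math.log(reliability)) ** (1.0 / beta)

def system_interval(components, reliability=0.9):
    """Series (weakest-link) system of Weibull components: overhaul
    when the product of component reliabilities falls to the target,
    solved by bisection. components: list of (eta, beta) pairs."""
    def r_sys(t):
        return math.exp(-sum((t / eta) ** beta for eta, beta in components))
    lo, hi = 0.0, max(eta for eta, _ in components) * 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if r_sys(mid) > reliability:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Dividing mission hours per year by the resulting interval then estimates the overhaul frequency, and hence the spares needed to support the schedule.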

  3. Methods to model and predict the ViewRay treatment deliveries to aid patient scheduling and treatment planning

    PubMed Central

    Liu, Shi; Wu, Yu; Wooten, H. Omar; Green, Olga; Archer, Brent; Li, Harold

    2016-01-01

    A software tool is developed to predict, given a new treatment plan, the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance image-guided radiation therapy (MR-IGRT) delivery system. This tool is necessary for managing patient treatment scheduling in our clinic. The predicted treatment delivery time and the assessment of plan complexity could also be useful to aid treatment planning. A patient's total treatment delivery time, not including the time required for localization, is modeled as the sum of four components: 1) the treatment initialization time; 2) the total beam-on time; 3) the gantry rotation time; and 4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated using both the planned beam-on time and the decay-corrected dose rate. To predict the remaining components, we retrospectively analyzed the patient treatment delivery record files. The initialization time is demonstrated to be random, since it depends on the final gantry angle of the previous treatment. Based on modeling the relationships between the gantry rotation angles and the corresponding rotation times, linear regression is applied to predict the gantry rotation time. The MLC motion time is calculated using the leaf delay modeling method and the leaf motion speed. A quantitative analysis was performed to understand the correlation between the total treatment time and the plan complexity. The proposed algorithm is able to predict the ViewRay treatment delivery time with an average prediction error of 0.22 min (1.82%) and a maximal prediction error of 0.89 min (7.88%). The analysis showed a correlation between the plan modulation (PM) factor and both the total treatment delivery time and the treatment delivery duty cycle. A possibility has been identified to significantly reduce MLC motion time by optimizing the positions of closed MLC pairs. 
The accuracy of the proposed prediction algorithm is sufficient to support patient treatment appointment scheduling. The developed software tool is currently in daily use in our clinic, and could also serve as an indicator of treatment plan complexity. PACS number(s): 87.55.N PMID:27074472
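The four-component decomposition above can be sketched directly: an ordinary least-squares fit stands in for the gantry rotation-time regression, and a simple travel/speed model stands in for the leaf-delay method. All coefficients and the function shape below are placeholders, not the paper's fitted values:

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def predict_delivery(init_s, beam_on_planned_s, dose_rate_factor,
                     gantry_deg, gantry_fit, mlc_travel_mm, mlc_speed):
    """Total time = initialization + decay-corrected beam-on
    + gantry rotation + MLC motion, all in seconds.

    gantry_fit is an (intercept, slope) pair from linfit over
    historical (angle, time) records; mlc_travel_mm / mlc_speed is a
    crude stand-in for the leaf-delay model."""
    a, b = gantry_fit
    beam = beam_on_planned_s / dose_rate_factor   # source-decay correction
    gantry = sum(a + b * d for d in gantry_deg)   # per-rotation regression
    mlc = mlc_travel_mm / mlc_speed
    return init_s + beam + gantry + mlc
```

The initialization term would in practice be an average (the paper finds it random, depending on the previous patient's final gantry angle), which is why it enters here as a single expected value.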

  4. Wind-tunnel based definition of the AFE aerothermodynamic environment. [Aeroassist Flight Experiment

    NASA Technical Reports Server (NTRS)

    Miller, Charles G.; Wells, W. L.

    1992-01-01

    The Aeroassist Flight Experiment (AFE), scheduled to be performed in 1994, will serve as a precursor for aeroassisted space transfer vehicles (ASTV's) and is representative of entry concepts being considered for missions to Mars. Rationale for the AFE is reviewed briefly, as are the various experiments carried aboard the vehicle. The approach used to determine hypersonic aerodynamic and aerothermodynamic characteristics over a wide range of simulation parameters in ground-based facilities is presented. Facilities, instrumentation and test procedures employed in the establishment of the data base are discussed. Measurements illustrating the effects of hypersonic simulation parameters, particularly normal-shock density ratio (an important parameter for hypersonic blunt bodies), and attitude on aerodynamic and aerothermodynamic characteristics are presented, and predictions from computational fluid dynamic (CFD) computer codes are compared with measurements.

  5. Outsourcing and scheduling for a two-machine flow shop with release times

    NASA Astrophysics Data System (ADS)

    Ahmadizar, Fardin; Amiri, Zeinab

    2018-03-01

    This article addresses a two-machine flow shop scheduling problem where jobs are released intermittently and outsourcing is allowed. The first operations of outsourced jobs are processed by the first subcontractor, they are transported in batches to the second subcontractor for processing their second operations, and finally they are transported back to the manufacturer. The objective is to select a subset of jobs to be outsourced, to schedule both the in-house and the outsourced jobs, and to determine a transportation plan for the outsourced jobs so as to minimize the sum of the makespan and the outsourcing and transportation costs. Two mathematical models of the problem and several necessary optimality conditions are presented. A solution approach is then proposed by incorporating the dominance properties with an ant colony algorithm. Finally, computational experiments are conducted to evaluate the performance of the models and solution approach.
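As a point of reference for the two-machine flow shop at the core of this problem, Johnson's rule gives the makespan-optimal sequence for the classic case with no release times, outsourcing or transportation; the article's models extend well beyond this baseline, which is sketched here only to fix the scheduling sub-structure:

```python
def johnsons_rule(jobs):
    """Johnson's rule for the two-machine flow shop (F2 || Cmax):
    jobs with p1 <= p2 go first in increasing p1; the rest go last in
    decreasing p2. jobs: list of (p1, p2) processing times."""
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)
    return front + back

def makespan(jobs, order):
    """Completion time of the last job on machine 2."""
    t1 = t2 = 0
    for j in order:
        t1 += jobs[j][0]                  # machine 1 finishes job j
        t2 = max(t2, t1) + jobs[j][1]     # machine 2 starts when both free
    return t2
```

Adding release times already makes the problem NP-hard in general, which motivates the article's combination of optimality conditions, dominance properties and an ant colony algorithm.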

  6. Changing Work, Changing Health: Can Real Work-Time Flexibility Promote Health Behaviors and Well-Being?

    PubMed Central

    Moen, Phyllis; Kelly, Erin L.; Tranby, Eric; Huang, Qinlei

    2012-01-01

    This article investigates a change in the structuring of work time, using a natural experiment to test whether participation in a corporate initiative (Results Only Work Environment; ROWE) predicts corresponding changes in health-related outcomes. Drawing on job strain and stress process models, we theorize greater schedule control and reduced work-family conflict as key mechanisms linking this initiative with health outcomes. Longitudinal survey data from 659 employees at a corporate headquarters shows that ROWE predicts changes in health-related behaviors, including almost an extra hour of sleep on work nights. Increasing employees’ schedule control and reducing their work-family conflict are key mechanisms linking the ROWE innovation with changes in employees’ health behaviors; they also predict changes in well-being measures, providing indirect links between ROWE and well-being. This study demonstrates that organizational changes in the structuring of time can promote employee wellness, particularly in terms of prevention behaviors. PMID:22144731

  7. Peer-to-peer Cooperative Scheduling Architecture for National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Matyska, Ludek; Ruda, Miroslav; Toth, Simon

    For some ten years, the Czech National Grid Infrastructure MetaCentrum has used a single central PBSPro installation to schedule jobs across the country. This centralized approach keeps full track of all the clusters, providing support for jobs spanning several sites, an implementation of the fair-share policy, and better overall control of the grid environment. Despite steady progress in increased stability and resilience to intermittent very short network failures, the growing number of sites and processors makes this architecture, with its single point of failure and scalability limits, obsolete. As a result, a new scheduling architecture is proposed, which relies on higher autonomy of clusters. It is based on a peer-to-peer network of semi-independent schedulers for each site or even cluster. Each scheduler accepts jobs for the whole infrastructure, cooperating with other schedulers on the implementation of global policies like central job accounting, fair-share, or submission of jobs across several sites. The scheduling system is integrated with the Magrathea system to support scheduling of virtual clusters, including the setup of their internal network, again eventually spanning several sites. On the other hand, each scheduler is local to one of several clusters and is able to directly control and submit jobs to them even if the connection to other scheduling peers is lost. In parallel with the change of the overall architecture, the scheduling system itself is being replaced. Instead of PBSPro, chosen originally for its declared support of large-scale distributed environments, the new scheduling architecture is based on the open-source Torque system. The implementation and support for the most desired properties in PBSPro and Torque are discussed, and the necessary modifications to Torque to support the MetaCentrum scheduling architecture are presented.

  8. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  9. Job Scheduling in a Heterogeneous Grid Environment

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak

    2004-01-01

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
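The migration policies above rank candidate sites by system availability and performance, inter-site network bandwidth, and each job's data volume. A greedy sketch of that ranking follows; the field names (`queue_wait`, `speed`, and so on) are hypothetical, not the paper's actual policy inputs:

```python
def best_site(job, sites, bandwidth):
    """Pick the site minimizing estimated completion time:
    queue wait + data transfer + run time scaled by relative speed.
    Illustrative only; mirrors the kinds of metrics migration
    policies weigh, not the paper's algorithms.

    bandwidth: dict mapping (origin, destination) -> MB/s."""
    def eta(site):
        wait = site["queue_wait"]                       # seconds in queue
        xfer = ((job["input_mb"] + job["output_mb"])
                / bandwidth[(job["origin"], site["name"])])
        run = job["base_runtime"] / site["speed"]       # relative speed
        return wait + xfer + run
    return min(sites, key=eta)
```

A real migration policy would refresh these estimates as queues and links change, re-migrating jobs whose chosen site deteriorates.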

  10. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.

  11. A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times

    NASA Astrophysics Data System (ADS)

    Li, Xin; Fung, Richard Y. K.

    2018-02-01

    This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers must lie within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.

  12. Scheduling logic for Miles-In-Trail traffic management

    NASA Technical Reports Server (NTRS)

    Synnestvedt, Robert G.; Swenson, Harry; Erzberger, Heinz

    1995-01-01

    This paper presents an algorithm which can be used for scheduling arrival air traffic in an Air Route Traffic Control Center (ARTCC or Center) entering a Terminal Radar Approach Control (TRACON) facility. The algorithm aids a Traffic Management Coordinator (TMC) in deciding how to restrict traffic while the traffic expected to arrive in the TRACON exceeds the TRACON capacity. The restrictions employed fall under the category of Miles-in-Trail, one of two principal traffic separation techniques used in scheduling arrival traffic. The algorithm calculates aircraft separations for each stream of aircraft destined for the TRACON. The calculations depend upon TRACON characteristics, TMC preferences, and other parameters adapted to the specific needs of scheduling traffic in a Center. Some preliminary results of traffic simulations scheduled by this algorithm are presented, and conclusions are drawn as to the effectiveness of using this algorithm in different traffic scenarios.
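A Miles-in-Trail restriction translates into a time spacing and a maximum stream delivery rate via the stream's groundspeed, which is the basic arithmetic any such scheduler builds on (nautical miles and knots assumed; this is background arithmetic, not the paper's algorithm):

```python
def mit_to_seconds(miles_in_trail, groundspeed_kts):
    """Time spacing equivalent to a miles-in-trail restriction for a
    stream flying at the given groundspeed (NM, knots -> seconds)."""
    return miles_in_trail / groundspeed_kts * 3600.0

def stream_rate(miles_in_trail, groundspeed_kts):
    """Maximum aircraft per hour a stream can deliver under the MIT."""
    return groundspeed_kts / miles_in_trail
```

For example, 20 NM in trail at 400 kts groundspeed yields a 180-second spacing, capping the stream at 20 aircraft per hour; the TMC's job is then to pick per-stream MIT values so the combined rates stay within TRACON capacity.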

  13. Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.

  14. Stabilization process of human population: a descriptive approach.

    PubMed

    Kayani, A K; Krotki, K J

    1981-01-01

    An attempt is made to inquire into the process of stabilization of a human population. The same age distribution, distorted by past variations in fertility, is subjected to several fixed schedules of fertility. The schedules differ from each other monotonically over a narrow range. The primary concern is with the process, almost year by year, through which the populations become stable. There is particular interest in the differential impact, on the same original age distribution, of the narrowly different fixed fertility schedules. The exercise is presented in 3 stages: general background of the process of stabilization; methodology and data used; and analysis and discussion of the stabilization process. Among the several approaches to the analysis of stable populations, 2 are popular: the integral equation and the projection matrix. In this presentation the interest is in evaluating the effects of fertility on the stabilization process of a population; therefore, only 1 initial age distribution and only 1 life table, but a variety of narrowly different schedules of fertility, have been used. Specifically, the U.S. 1963 female population is treated as the initial population. The process of stabilization is viewed in the light of the changes in the slopes between 2 successive age groups of an age distribution. A high fertility schedule with the given initial age distribution and mortality level overcomes the oscillations more quickly than a low fertility schedule. Simulation confirms the intuitively expected positive relationship between the mean of the slope and the level of fertility. The variance of the slope distribution is an indicator of the aging of the distribution.
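
    The projection-matrix route mentioned in the abstract can be sketched with a small Leslie-style model: repeatedly applying fixed fertility and survival schedules to an arbitrary age distribution damps the initial oscillations and converges to the stable age distribution. The three age groups and vital rates below are illustrative, not the study's U.S. 1963 data.

```python
def project(age_dist, fertility, survival):
    """One projection step: newborns from age-specific fertility,
    survivors shifted up one age group."""
    births = sum(f * n for f, n in zip(fertility, age_dist))
    survivors = [s * n for s, n in zip(survival, age_dist[:-1])]
    return [births] + survivors

def stable_distribution(age_dist, fertility, survival, steps=200):
    """Iterate the projection and normalize away growth each step; the
    normalized distribution converges to the stable age distribution."""
    for _ in range(steps):
        age_dist = project(age_dist, fertility, survival)
        total = sum(age_dist)
        age_dist = [n / total for n in age_dist]
    return age_dist
```

    Two very different starting distributions converge to the same stable vector under the same fixed schedules, which is the stabilization process the study traces year by year.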

  15. Supplement to The User's Guide for The Stand Prognosis Model-version 5.0

    Treesearch

    William R. Wykoff

    1986-01-01

    Differences between Prognosis Model versions 4.0 and 5.0 are described. Additions to version 5.0 include an event monitor that schedules activities contingent on stand characteristics, a regeneration establishment model that predicts the structure of the regeneration stand following treatment, and a COVER model that predicts shrub development and total canopy cover....

  16. Real-time control systems: feedback, scheduling and robustness

    NASA Astrophysics Data System (ADS)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.
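
    A toy self-triggered loop illustrates the general idea of treating the sampling interval as a control variable: sample fast while the state is large, relax the rate as it settles. The scalar plant, gains, and interval rule below are invented for illustration and are not the authors' design.

```python
def simulate(x0=1.0, a=1.0, k=3.0, h_min=0.01, h_max=0.5, t_end=5.0):
    """Scalar unstable plant dx/dt = a*x + u under zero-order-hold
    feedback u = -k*x(sample), with the interval h chosen online."""
    t, x, samples = 0.0, x0, 0
    while t < t_end:
        u = -k * x  # control computed at the sampling instant, then held
        # Sampling interval as a control variable: shrink it when |x| is large.
        h = min(h_max, max(h_min, 0.1 / (abs(x) + 1e-6)))
        # Integrate the held-input dynamics with small Euler steps.
        steps = max(1, int(h / 0.001))
        dt = h / steps
        for _ in range(steps):
            x += (a * x + u) * dt
        t += h
        samples += 1
    return x, samples
```

    With these toy numbers the closed loop contracts at every sampling interval up to h_max, so the state settles while the loop uses far fewer samples than a fixed fast period would.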

  17. Drug scheduling of cancer chemotherapy based on natural actor-critic approach.

    PubMed

    Ahn, Inkyung; Park, Jooyoung

    2011-11-01

    Recently, reinforcement learning methods have drawn significant interest in the area of artificial intelligence, and have been successfully applied to various decision-making problems. In this paper, we study the applicability of the NAC (natural actor-critic) approach, a state-of-the-art reinforcement learning method, to the drug scheduling of cancer chemotherapy for an ODE (ordinary differential equation)-based tumor growth model. ODE-based cancer dynamics modeling is an active research area, and many different mathematical models have been proposed. Among these, we use the model proposed by de Pillis and Radunskaya (2003), which considers the growth of tumor cells and their interaction with normal cells and immune cells. The NAC approach is applied to this ODE model with the goal of minimizing the tumor cell population and the drug amount while maintaining adequate population levels of normal cells and immune cells. In the framework of the NAC approach, the drug dose is regarded as the control input, and the reward signal is defined as a function of the control input and the cell populations of tumor cells, normal cells, and immune cells. According to the control policy found by the NAC approach, effective drug scheduling in cancer chemotherapy for the considered scenarios turns out to be close to the strategy of continuing drug injection from the beginning until an appropriate time. Also, simulation results showed that the NAC approach can yield better performance than conventional pulsed chemotherapy.
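
    The reward signal described above might take roughly the following shape: penalize tumor burden and drug use, and add a penalty whenever the normal or immune populations drop below safe thresholds. All function names, weights, and thresholds here are hypothetical, not the paper's values.

```python
def reward(tumor, normal, immune, drug,
           w_tumor=1.0, w_drug=0.1, n_min=0.75, i_min=0.1):
    """Hypothetical reward for a chemotherapy scheduling agent:
    higher is better; populations are in normalized units."""
    r = -w_tumor * tumor - w_drug * drug
    if normal < n_min:
        r -= 10.0 * (n_min - normal)  # steep penalty below the safe normal-cell level
    if immune < i_min:
        r -= 10.0 * (i_min - immune)  # likewise for the immune population
    return r
```

    Under a shape like this, a policy learns to trade continued dosing (which costs w_drug per unit) against tumor reduction, while the threshold penalties keep it from driving healthy populations too low.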

  18. Practice expenses in the MFS (Medicare fee schedule): the service-class approach.

    PubMed

    Latimer, E A; Kane, N M

    1995-01-01

    The practice expense component of the Medicare fee schedule (MFS), which is currently based on historical charges and rewards physician procedures at the expense of cognitive services, is due to be changed by January 1, 1998. The Physician Payment Review Commission (PPRC) and others have proposed microcosting direct costs and allocating all indirect costs on a common basis, such as physician time or work plus direct costs. Without altering the treatment of direct costs, the service-class approach disaggregates indirect costs into six practice function costs. The practice function costs are then allocated to classes of services using cost-accounting and statistical methods. This approach would make the practice expense component more resource-based than other proposed alternatives.
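
    Allocating an indirect-cost pool on a common basis, as the proposals above describe, reduces to a proportional split over the chosen base (physician time, or work plus direct costs). The service classes and figures below are hypothetical.

```python
def allocate_indirect(pool: float, base_by_class: dict) -> dict:
    """Spread one indirect-cost pool over service classes in proportion
    to each class's share of the allocation base."""
    total = sum(base_by_class.values())
    return {svc: pool * b / total for svc, b in base_by_class.items()}
```

    For instance, a $100 pool allocated over a base in which office visits carry three times the weight of procedures splits $75 / $25. The service-class approach repeats this per practice-function cost pool rather than once for all indirect costs.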

  19. A Smart Irrigation Approach Aided by Monitoring Surface Soil Moisture using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Wienhold, K. J.; Li, D.; Fang, N. Z.

    2017-12-01

    Soil moisture is a critical component in the optimization of irrigation scheduling in water resources management. Unmanned Aerial Vehicles (UAV) equipped with multispectral sensors represent an emerging technology capable of detecting and estimating soil moisture for irrigation and crop management. This study demonstrates a method of using a UAV as an optical and thermal remote sensing platform combined with genetic programming to derive high-resolution, surface soil moisture (SSM) estimates. The objective is to evaluate the feasibility of spatially-variable irrigation management for a golf course (about 50 acres) in North Central Texas. Multispectral data are collected over the course of one month in the visible, near infrared and longwave infrared spectrums using a UAV capable of rapid and safe deployment for daily estimates. The accuracy of the model predictions is quantified using a time domain reflectometry (TDR) soil moisture sensor and a holdout validation test set. The model produces reasonable estimates for SSM with an average coefficient of correlation (r) = 0.87 and coefficient of determination (R2) = 0.76. The study suggests that the derived SSM estimates be used to better inform irrigation scheduling decisions for lightly vegetated areas such as the turf or native roughs found on golf courses.
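
    The two accuracy figures quoted above can be computed on the holdout set with the standard definitions; these are the generic formulas, not the authors' code.

```python
from math import sqrt

def pearson_r(y_true, y_pred):
    """Pearson coefficient of correlation r."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = sqrt(sum((b - mp) ** 2 for b in y_pred))
    return cov / (st * sp)

def r_squared(y_true, y_pred):
    """Coefficient of determination R2 = 1 - SS_res / SS_tot."""
    mt = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mt) ** 2 for a in y_true)
    return 1 - ss_res / ss_tot
```

    Note the two are not interchangeable: r is invariant to scale and offset in the predictions, while R2 penalizes any systematic bias, which is why both are commonly reported for sensor-calibration models like this one.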

  20. Mission Operations Planning and Scheduling System (MOPSS)

    NASA Technical Reports Server (NTRS)

    Wood, Terri; Hempel, Paul

    2011-01-01

    MOPSS is a generic framework that can be configured on the fly to support a wide range of planning and scheduling applications. It is currently used to support seven missions at Goddard Space Flight Center (GSFC) in roles that include science planning, mission planning, and real-time control. Prior to MOPSS, each spacecraft project built its own planning and scheduling capability to plan satellite activities and communications and to create the commands to be uplinked to the spacecraft. This approach required creating a data repository for storing planning and scheduling information, building user interfaces to display data, generating needed scheduling algorithms, and implementing customized external interfaces. Complex scheduling problems that involved reacting to multiple variable situations were analyzed manually. Operators then used the results to add commands to the schedule. Each architecture was unique to specific satellite requirements. MOPSS is an expert system that automates mission operations and frees the flight operations team to concentrate on critical activities. It is easily reconfigured by the flight operations team as the mission evolves. The heart of the system is a custom object-oriented data layer mapped onto an Oracle relational database. The combination of these two technologies allows a user or system engineer to capture any type of scheduling or planning data in the system's generic data storage via a GUI.
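
    The "generic data storage" idea can be caricatured with an entity-attribute-value layout: arbitrary planning attributes are stored as key/value rows keyed to an activity, so new missions add data types without schema changes. The sketch below uses SQLite and an invented schema; it is an illustration of the pattern, not MOPSS's actual object layer or Oracle schema.

```python
import sqlite3

def make_store():
    """Create an in-memory store with a generic activity/attribute schema."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE activity (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("CREATE TABLE attr (activity_id INTEGER, key TEXT, value TEXT)")
    return db

def add_activity(db, name, **attrs):
    """Insert an activity plus any attributes a mission cares to attach."""
    cur = db.execute("INSERT INTO activity (name) VALUES (?)", (name,))
    aid = cur.lastrowid
    db.executemany("INSERT INTO attr VALUES (?, ?, ?)",
                   [(aid, k, str(v)) for k, v in attrs.items()])
    return aid
```

    The trade-off of this pattern is well known: flexibility for any mission's data at the cost of weaker typing and more complex queries, which is why a mapped object layer usually sits on top of it.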
