Sample records for static priority scheduling

  1. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    PubMed

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can be applied to the pinwheel task model. This method saves more energy than adopting the previous static priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10-80% over existing algorithms.
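
    The slack-reclamation idea outlined in this abstract can be pictured in a few lines of code. Below is a minimal sketch, assuming a simple job model with remaining work, absolute deadlines and a normalized frequency range; it shows generic slack reclamation under preemptive EDF with DVS, not the authors' O(n) algorithm.

      # Minimal sketch (not the paper's algorithm): pick the EDF job and scale the
      # CPU frequency so its remaining work just fits the time left until its
      # deadline. Job parameters and the frequency range are hypothetical.
      def edf_dvs_step(ready_jobs, now, f_max=1.0, f_min=0.3):
          """Return (job to run, normalized CPU frequency) under EDF with DVS."""
          if not ready_jobs:
              return None, f_min
          job = min(ready_jobs, key=lambda j: j["deadline"])   # EDF: earliest deadline first
          time_left = job["deadline"] - now
          if time_left <= 0:
              return job, f_max                                # no slack left: full speed
          f = job["remaining"] / time_left                     # slack reclamation
          return job, max(f_min, min(f_max, f))

      jobs = [{"id": "t1", "remaining": 2.0, "deadline": 10.0},
              {"id": "t2", "remaining": 1.0, "deadline": 4.0}]
      print(edf_dvs_step(jobs, now=0.0))   # picks t2; 1.0/4.0 = 0.25 clamps up to f_min 0.3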

  2. Applying Dynamic Priority Scheduling Scheme to Static Systems of Pinwheel Task Model in Power-Aware Scheduling

    PubMed Central

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority scheduling in power-aware scheduling can be applied to the pinwheel task model. This method saves more energy than adopting the previous static priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10–80% over existing algorithms. PMID:25121126

  3. Expert Design Advisor

    DTIC Science & Technology

    1990-10-01

    to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the... probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority... scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both

  4. Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors

    NASA Astrophysics Data System (ADS)

    Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.

    1994-10-01

    This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods such as deadline-monotonic scheduling.

  5. Priority in Process Algebras

    NASA Technical Reports Server (NTRS)

    Cleaveland, Rance; Luettgen, Gerald; Natarajan, V.

    1999-01-01

    This paper surveys the semantic ramifications of extending traditional process algebras with notions of priority that allow for some transitions to be given precedence over others. These enriched formalisms allow one to model system features such as interrupts, prioritized choice, or real-time behavior. Approaches to priority in process algebras can be classified according to whether the induced notion of preemption on transitions is global or local and whether priorities are static or dynamic. Early work in the area concentrated on global pre-emption and static priorities and led to formalisms for modeling interrupts and aspects of real-time, such as maximal progress, in centralized computing environments. More recent research has investigated localized notions of pre-emption in which the distribution of systems is taken into account, as well as dynamic priority approaches, i.e., those where priority values may change as systems evolve. The latter allows one to model behavioral phenomena such as scheduling algorithms and also enables the efficient encoding of real-time semantics. Technically, this paper studies the different models of priorities by presenting extensions of Milner's Calculus of Communicating Systems (CCS) with static and dynamic priority as well as with notions of global and local pre-emption. In each case the operational semantics of CCS is modified appropriately, behavioral theories based on strong and weak bisimulation are given, and related approaches for different process-algebraic settings are discussed.

  6. Utilization Bound of Non-preemptive Fixed Priority Schedulers

    NASA Astrophysics Data System (ADS)

    Park, Moonju; Chae, Jinseok

    It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
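
    For reference, the Liu and Layland bound mentioned above is easy to state in code. The sketch below implements only that classical sufficient test for preemptive rate-monotonic scheduling, as the baseline the letter compares against; the letter's tighter non-preemptive bound is not reproduced here, and the task set is a made-up example.

      # Baseline sufficient test only; task set of (C, T) pairs is illustrative.
      def ll_bound(n: int) -> float:
          """Liu and Layland utilization bound n*(2^(1/n) - 1) for n periodic tasks."""
          return n * (2 ** (1.0 / n) - 1.0)

      def schedulable_by_ll(tasks) -> bool:
          """tasks: list of (execution_time, period). Sufficient, not necessary."""
          u = sum(c / t for c, t in tasks)
          return u <= ll_bound(len(tasks))

      print(schedulable_by_ll([(1, 4), (1, 5), (2, 10)]))   # U = 0.65 <= 0.7798 -> True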

  7. Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks

    DTIC Science & Technology

    2016-08-11

    Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks. Junsung Kim, Department of Electrical and Computer Engineering, Carnegie... Application of a Multi-Segment Self-Suspending Real-Time Task Model... Fixed Priority Scheduling... Figure 2: A multi-segment self-suspending real-time task model

  8. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.

  9. How do employees prioritise when they schedule their own shifts?

    PubMed

    Nabe-Nielsen, Kirsten; Lund, Henrik; Ajslev, Jeppe Z; Hansen, Åse Marie; Albertsen, Karen; Hvid, Helge; Garde, Anne Helene

    2013-01-01

    We investigated how employees prioritised when they scheduled their own shifts and whether priorities depended on age, gender, educational level, cohabitation and health status. We used cross-sectional questionnaire data from the follow-up survey of an intervention study investigating the effect of self-scheduling (n = 317). Intervention group participants were asked about their priorities when scheduling their own shifts, followed by 17 items covering family/private life, economy, job content, health and sleep. At least half of the participants reported that they gave high priority to their family life, having consecutive time off, leisure-time activities, rest between shifts, sleep, regularity of their everyday life, health, and a balanced work schedule. Thus, employees consider both their own and the workplace's needs when they have the opportunity to schedule their own shifts. Age, gender, cohabitation and health status were all significantly associated with at least one of these priorities. Intervention studies report limited health effects of self-scheduling. Therefore, we investigated to what extent employees prioritise their health and recuperation when scheduling their own shifts. We found that employees consider not only their health and family but also the workplace's needs when they schedule their own shifts.

  10. 15 CFR 700.14 - Preferential scheduling.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE NATIONAL SECURITY INDUSTRIAL BASE REGULATIONS DEFENSE PRIORITIES AND ALLOCATIONS SYSTEM Industrial Priorities § 700.14 Preferential scheduling. (a) A...

  11. Hard real-time beam scheduler enables adaptive images in multi-probe systems

    NASA Astrophysics Data System (ADS)

    Tobias, Richard J.

    2014-03-01

    Real-time embedded-system concepts were adapted to allow an imaging system to responsively control the firing of multiple probes. Large-volume, operator-independent (LVOI) imaging would increase the diagnostic utility of ultrasound. An obstacle to this innovation is the inability of current systems to drive multiple transducers dynamically. Commercial systems schedule scanning with static lists of beams to be fired and processed; here we allow an imager to adapt to changing beam-schedule demands as an intelligent response to incoming image data. An example of scheduling changes is demonstrated with a flexible duplex-mode two-transducer application mimicking LVOI imaging. Embedded-system concepts allow an imager to responsively control the firing of multiple probes. Operating systems use powerful dynamic scheduling algorithms, such as fixed-priority preemptive scheduling. Even real-time operating systems lack the timing constraints required for ultrasound. Particularly for Doppler modes, events must be scheduled with sub-nanosecond precision, and acquired data is useless without this requirement. A successful scheduler needs unique characteristics. To get close to what would be needed in LVOI imaging, we show two transducers scanning different parts of a subject's leg. When one transducer notices flow in a region where their scans overlap, the system reschedules the other transducer to start flow mode and alter its beams to get a view of the observed vessel and produce a flow measurement. The second transducer does this in a focused region only. This demonstrates key attributes of a successful LVOI system, such as robustness against obstructions and adaptive self-correction.

  12. Sources of unbounded priority inversions in real-time systems and a comparative study of possible solutions

    NASA Technical Reports Server (NTRS)

    Davari, Sadegh; Sha, Lui

    1992-01-01

    In the design of real-time systems, tasks are often assigned priorities. Preemptive priority-driven schedulers are used to schedule tasks to meet the timing requirements. Priority inversion is the term used to describe the situation when a higher priority task's execution is delayed by lower priority tasks. Priority inversion can occur when there is contention for resources among tasks of different priorities. The duration of priority inversion could be long enough to cause tasks to miss their deadlines. Priority inversion cannot be completely eliminated. However, it is important to identify sources of priority inversion and minimize its duration. In this paper, a comprehensive review of the problem of, and solutions to, unbounded priority inversion is presented.
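
    One of the standard solutions surveyed in this line of work is priority inheritance, which bounds the inversion by preventing medium-priority tasks from preempting a lock holder that is blocking a high-priority task. The following is a minimal sketch of that idea with hypothetical task names and priority values; it is an illustration of the concept, not code drawn from the paper.

      # Hypothetical tasks and priorities; a bigger number means a higher priority.
      class Task:
          def __init__(self, name, priority):
              self.name, self.base, self.effective = name, priority, priority

      class InheritanceLock:
          def __init__(self):
              self.owner = None

          def acquire(self, task):
              if self.owner is None:
                  self.owner = task
                  return True
              # Blocked: the owner inherits the blocker's priority if it is higher,
              # so medium-priority tasks can no longer preempt it.
              self.owner.effective = max(self.owner.effective, task.effective)
              return False

          def release(self, task):
              task.effective = task.base       # drop back to the base priority
              self.owner = None

      low, high = Task("low", 1), Task("high", 10)
      lock = InheritanceLock()
      lock.acquire(low)
      lock.acquire(high)                       # high blocks; low inherits priority 10
      print(low.effective)                     # 10
      lock.release(low)
      print(low.effective)                     # 1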

  13. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.

    PubMed

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of the dynamic essential path (DDEP). This algorithm applies a predecessor-task-layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a different priority value to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling-order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective.
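
    The tie-breaking step can be pictured with a small longest-path computation over a task DAG. The sketch below is a simplified stand-in for the dynamic essential path: among ready nodes with equal priority, it picks the one whose downstream computation-plus-communication path is longest. The graph, costs and node names are hypothetical, and the real algorithm recomputes this path dynamically from actual costs during scheduling.

      from functools import lru_cache

      succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}              # hypothetical task DAG
      comp = {"A": 3, "B": 2, "C": 5, "D": 1}                                # computation costs
      comm = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 4, ("C", "D"): 2}    # communication costs

      @lru_cache(maxsize=None)
      def essential_path(node):
          """Longest computation-plus-communication path starting at `node`."""
          if not succ[node]:
              return comp[node]
          return comp[node] + max(comm[(node, s)] + essential_path(s) for s in succ[node])

      ready = ["B", "C"]                        # two ready nodes with the same priority value
      print(max(ready, key=essential_path))     # "C": 5+2+1 = 8 beats "B": 2+4+1 = 7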

  14. A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path

    PubMed Central

    Xie, Zhiqiang; Shao, Xia; Xin, Yu

    2016-01-01

    To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of the dynamic essential path (DDEP). This algorithm applies a predecessor-task-layer priority strategy to handle the constraint relations among task nodes. The strategy assigns a different priority value to every task node based on the scheduling order of the task nodes as affected by the constraint relations among them, and the task node list is generated according to these priority values. To address the scheduling-order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of the task nodes on the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high-quality performance objective. PMID:27490901

  15. Evaluation and Selection of Best Priority Sequencing Rule in Job Shop Scheduling using Hybrid MCDM Technique

    NASA Astrophysics Data System (ADS)

    Kiran Kumar, Kalla; Nagaraju, Dega; Gayathri, S.; Narayanan, S.

    2017-05-01

    Priority sequencing rules provide guidance for the order in which jobs are to be processed at a workstation. The application of different priority rules in job shop scheduling gives a different order of scheduling. More experimentation needs to be conducted before a final choice of the best priority sequencing rule is made. Hence, a comprehensive method of selecting the right choice is essential from a managerial decision-making perspective. This paper considers seven different priority sequencing rules in job shop scheduling. For evaluation and selection of the best priority sequencing rule, a set of eight criteria is considered. The aim of this work is to demonstrate the methodology of evaluating and selecting the best priority sequencing rule by using a hybrid multi-criteria decision-making (MCDM) technique, i.e., the analytic hierarchy process (AHP) with the technique for order preference by similarity to ideal solution (TOPSIS). The criteria weights are calculated by using AHP, whereas the relative closeness values of all priority sequencing rules are computed based on TOPSIS with the help of data acquired from the shop floor of a manufacturing firm. Finally, from the findings of this work, the priority sequencing rules are ranked from most important to least important. The comprehensive methodology presented in this paper is essential for the management of a workstation to choose the best priority sequencing rule among the available alternatives for processing jobs with maximum benefit.
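
    The TOPSIS ranking step described above reduces to a few array operations. The sketch below shows the generic relative-closeness calculation with a made-up decision matrix, criteria weights (which would come from AHP) and benefit/cost flags; it is not the paper's shop-floor data.

      import numpy as np

      X = np.array([[7.0, 3.0, 5.0],            # rows: priority rules, columns: criteria
                    [6.0, 4.0, 6.0],
                    [8.0, 2.0, 4.0]])
      w = np.array([0.5, 0.3, 0.2])             # criteria weights (would come from AHP)
      benefit = np.array([True, False, True])   # False marks a cost-type criterion

      V = w * X / np.linalg.norm(X, axis=0)                 # weighted normalised matrix
      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
      d_pos = np.linalg.norm(V - ideal, axis=1)
      d_neg = np.linalg.norm(V - anti, axis=1)
      closeness = d_neg / (d_pos + d_neg)                   # higher = better rule
      print(np.argsort(-closeness))                         # rule indices, best first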

  16. Proportional fair scheduling algorithm based on traffic in satellite communication system

    NASA Astrophysics Data System (ADS)

    Pan, Cheng-Sheng; Sui, Shi-Long; Liu, Chun-ling; Shi, Yu-Xin

    2018-02-01

    In the satellite communication network system, to address low system capacity and user fairness when multiple users access the satellite communication network in the downlink, and taking the characteristics of user data services into account, a scheduling algorithm balancing throughput and user fairness is proposed: the Proportional Fairness Algorithm Based on Traffic (B-PF). The algorithm improves on the proportional fairness algorithm used in wireless communication systems, taking into account the user channel condition and buffered traffic information. The user's outgoing traffic is used as an adjustment factor for the scheduling priority, and the concept of traffic satisfaction is introduced. First, the algorithm calculates each user's priority and dispatches the user with the highest priority. Second, when a scheduled user's traffic is already satisfied, the system dispatches the next-priority user. The simulation results show that, compared with the PF algorithm, B-PF improves system throughput, traffic satisfaction and fairness.
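
    A rough picture of a traffic-adjusted proportional-fair metric is given below. The field names and the exact form of the traffic factor are assumptions for illustration: the classic PF term (instantaneous rate over average throughput) is scaled by how much of the user's demand is actually queued, in the spirit of B-PF rather than as its published formula.

      def bpf_priority(user):
          # Classic PF term: instantaneous rate over long-term average throughput.
          base = user["inst_rate"] / max(user["avg_throughput"], 1e-9)
          # Assumed traffic factor: fraction of the user's demand actually queued.
          traffic_factor = min(1.0, user["queued_bits"] / user["demand_bits"])
          return base * traffic_factor

      users = [
          {"id": 1, "inst_rate": 2.0, "avg_throughput": 1.0, "queued_bits": 8e3, "demand_bits": 1e4},
          {"id": 2, "inst_rate": 1.5, "avg_throughput": 0.5, "queued_bits": 2e3, "demand_bits": 1e4},
      ]
      print(max(users, key=bpf_priority)["id"])   # user 1: 2.0*0.8 = 1.6 beats 3.0*0.2 = 0.6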

  17. Operating room scheduling using hybrid clustering priority rule and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Santoso, Linda Wahyuni; Sinawan, Aisyah Ashrinawati; Wijaya, Andi Rahadiyan; Sudiarso, Andi; Masruroh, Nur Aini; Herliansyah, Muhammad Kusumawan

    2017-11-01

    The operating room is a bottleneck resource in most hospitals, so the operating room scheduling system influences the whole performance of the hospital. This research develops a mathematical model of operating room scheduling for elective patients which considers patient priority with a limited number of surgeons, operating rooms, and nurse teams. Clustering analysis was conducted on the data of surgery durations using hierarchical and non-hierarchical methods. The priority rule of each resulting cluster was determined using the Shortest Processing Time method. A Genetic Algorithm was used to generate a daily operating room schedule which resulted in the lowest values of patient waiting time and nurse overtime. The computational results show that the proposed model reduced patient waiting time by approximately 32.22% and nurse overtime by approximately 32.74% when compared to the actual schedule.

  18. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
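
    The last step mentioned above, deriving a conflict-free schedule from a conflict-aware one, can be sketched as a greedy pass in priority order. The track records, resource names and overlap test below are simplified placeholders, not the DSN scheduler's actual data model.

      def overlaps(a, b):
          return (a["antenna"] == b["antenna"]
                  and a["start"] < b["end"] and b["start"] < a["end"])

      def to_conflict_free(tracks):
          """tracks: dicts with antenna, start, end, priority (lower number = more important)."""
          kept = []
          for t in sorted(tracks, key=lambda t: t["priority"]):
              if all(not overlaps(t, k) for k in kept):
                  kept.append(t)
          return kept

      tracks = [{"id": "m1", "antenna": "A1", "start": 0, "end": 4, "priority": 1},
                {"id": "m2", "antenna": "A1", "start": 3, "end": 6, "priority": 2},
                {"id": "m3", "antenna": "A2", "start": 0, "end": 2, "priority": 3}]
      print([t["id"] for t in to_conflict_free(tracks)])   # ['m1', 'm3']; m2 conflicts and is dropped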

  19. A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems

    DTIC Science & Technology

    1988-03-01

    The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are...for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.

  20. Static Schedulers for Embedded Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    Because of the need for having efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real-Time Systems so that we determine whether the system, as designed, meets the required

  1. A bi-objective integer programming model for partly-restricted flight departure scheduling

    PubMed Central

    Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    The usual studies on the air traffic departure scheduling problem (DSP) mainly deal with an independent airport in which the departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements can inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the problem. It is demonstrated by the case study of Tianjin Binhai International Airport in China that the proposed method can clearly improve operational efficiency while still achieving superior equity and regularity among restricted flows. PMID:29715299

  2. A bi-objective integer programming model for partly-restricted flight departure scheduling.

    PubMed

    Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling

    2018-01-01

    The usual studies on the air traffic departure scheduling problem (DSP) mainly deal with an independent airport in which the departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements can inflict a set of changes on the modeling of the departure scheduling problem with respect to the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the problem. It is demonstrated by the case study of Tianjin Binhai International Airport in China that the proposed method can clearly improve operational efficiency while still achieving superior equity and regularity among restricted flows.

  3. Static-dynamic hybrid communication scheduling and control co-design for networked control systems.

    PubMed

    Wen, Shixi; Guo, Ge

    2017-11-01

    In this paper, a static-dynamic hybrid communication scheduling and control co-design is proposed for networked control systems (NCSs) to address the capacity limitation of the wireless communication network. Analytical most regular binary sequences (MRBSs) are used as the communication scheduling function for the NCSs. When communication conflicts arise in the binary sequences (MRBSs), a dynamic scheduling strategy is used to reallocate the medium access status of each plant online. Under this static-dynamic hybrid scheduling policy, plants in the NCSs are described as non-uniformly sampled control systems, whose controllers have a group of controller gains and switch among them according to the sampling interval yielded by the binary sequence. A useful communication scheduling and control co-design framework is proposed for the NCSs to simultaneously decide the controller gains and the parameters used to generate the MRBS communication sequences. A numerical example and a realistic example are given, respectively, to demonstrate the effectiveness of the proposed co-design method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid.

  5. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) requires maximizing the number of observable proposals and the overall scientific priority, and minimizing the overall slew cost generated by telescope shifting, while taking into account constraints including the visibility of astronomical objects, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem. The optimal schedule can be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a method based on detecting maximally-matchable edges to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and reduce the slew cost significantly in a very short time.
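
    The first stage, matching proposals to observing slots as a Minimum Cost Maximum Flow problem, can be prototyped directly with a graph library. The sketch below uses networkx's max_flow_min_cost on a toy instance in which edge costs are negated scientific priorities; the proposals, slots and visibility windows are invented, and the real model's constraints (RFI, slew cost) are omitted.

      import networkx as nx

      G = nx.DiGraph()
      proposals = {"P1": 3, "P2": 1}                 # invented scientific priorities
      slots = ["S1", "S2"]
      visible = {"P1": ["S1", "S2"], "P2": ["S1"]}   # invented visibility windows

      for p, prio in proposals.items():
          G.add_edge("src", p, capacity=1, weight=0)
          for s in visible[p]:
              G.add_edge(p, s, capacity=1, weight=-prio)   # minimise cost = maximise total priority
      for s in slots:
          G.add_edge(s, "sink", capacity=1, weight=0)

      flow = nx.max_flow_min_cost(G, "src", "sink")
      assignment = [(p, s) for p in proposals for s, f in flow[p].items() if f]
      print(assignment)   # [('P1', 'S2'), ('P2', 'S1')]: both proposals fit, P2 forces P1 onto S2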

  6. An advanced approach to traditional round robin CPU scheduling algorithm to prioritize processes with residual burst time nearest to the specified time quantum

    NASA Astrophysics Data System (ADS)

    Swaraj Pati, Mythili N.; Korde, Pranav; Dey, Pallav

    2017-11-01

    The purpose of this paper is to introduce an optimised variant of the round robin scheduling algorithm. Every algorithm works in its own way and has its own merits and demerits. The proposed algorithm overcomes the shortfalls of the existing scheduling algorithms in terms of waiting time, turnaround time, throughput and number of context switches. The algorithm is pre-emptive and works based on the priority of the associated processes. The priority is decided on the basis of the remaining burst time of a particular process, that is, the lower the burst time, the higher the priority, and the higher the burst time, the lower the priority. To complete the execution, a time quantum is initially specified. If the burst time of a particular process is less than 2X but more than 1X of the specified time quantum, the process is given high priority and is allowed to execute until it completes entirely and finishes. Such processes do not have to wait for their next burst cycle.
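
    The rule is concrete enough to state in a few lines. The sketch below serves processes shortest-remaining-burst-first in round-robin quanta and lets a process whose remaining burst falls between 1X and 2X of the time quantum run to completion; the burst values and time quantum are illustrative, and details such as arrival times are ignored.

      def run(bursts, tq):
          """bursts: {pid: remaining_burst}. Returns a list of (pid, slice) executions."""
          trace, remaining = [], dict(bursts)
          while remaining:
              pid = min(remaining, key=remaining.get)   # lower remaining burst = higher priority
              r = remaining[pid]
              if r < 2 * tq:
                  slice_ = r        # fits one quantum, or lies in the 1X-2X band: run to completion
              else:
                  slice_ = tq       # ordinary round-robin quantum
              trace.append((pid, slice_))
              remaining[pid] -= slice_
              if remaining[pid] <= 0:
                  del remaining[pid]
          return trace

      print(run({"A": 9, "B": 5, "C": 3}, tq=4))   # [('C', 3), ('B', 5), ('A', 4), ('A', 5)]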

  7. Dedicated heterogeneous node scheduling including backfill scheduling

    DOEpatents

    Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA

    2006-07-25

    A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the free nodes over time. For each prioritized job, the FNS of sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job. Once determined for a particular job, the job is scheduled to run in that ETR. If the ETR determined for a lower priority job (LPJ) has a start time earlier than a higher priority job (HPJ), then the LPJ is scheduled in that ETR if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources otherwise remaining idle.
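
    The backfill condition can be paraphrased in code as follows. This is a one-resource simplification of the free node schedule, with made-up job sizes and windows: a lower-priority job is started early only if it fits an idle window and finishes before any previously reserved higher-priority start time.

      def backfill_start(lpj, free_windows, reserved_starts):
          """lpj: (duration, nodes). free_windows: list of (start, end, free_nodes).
          reserved_starts: planned start times of already-scheduled higher-priority jobs."""
          dur, nodes = lpj
          earliest_hpj = min(reserved_starts, default=float("inf"))
          for start, end, free in free_windows:
              fits_space = nodes <= free
              fits_time = start + dur <= end
              does_not_delay = start + dur <= earliest_hpj
              if fits_space and fits_time and does_not_delay:
                  return start
          return None          # cannot backfill without disturbing a higher-priority job

      # A 2-hour, 4-node job backfills into an idle window that closes before the next HPJ starts.
      print(backfill_start((2, 4), free_windows=[(0, 3, 8)], reserved_starts=[3]))   # 0
      print(backfill_start((4, 4), free_windows=[(0, 3, 8)], reserved_starts=[3]))   # None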

  8. Modeling heterogeneous processor scheduling for real time systems

    NASA Technical Reports Server (NTRS)

    Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.

    1994-01-01

    A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.

  9. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.

  10. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.

  11. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

    Scheduling in a job shop is important for efficient utilization of machines in the manufacturing industry. There are a number of algorithms available for scheduling of jobs, which depend on the machine tools, indirect consumables, and the jobs to be processed. In this paper a case study is presented for scheduling of jobs when parts are processed on the available machines. Through time and motion study, setup time and operation time are measured as total processing time for a variety of products having different manufacturing processes. Based on due dates, different levels of priority are assigned to the jobs and the jobs are scheduled on the basis of priority. In view of the measured processing times, the times for processing some new jobs are estimated, and for efficient utilization of the available machines an algorithm is proposed and validated.

  12. Impact Analysis of Flow Shaping in Ethernet-AVB/TSN and AFDX from Network Calculus and Simulation Perspective

    PubMed Central

    He, Feng; Zhao, Lin; Li, Ershuai

    2017-01-01

    Ethernet-AVB/TSN (Audio Video Bridging/Time-Sensitive Networking) and AFDX (Avionics Full DupleX switched Ethernet) are switched Ethernet technologies, which are both candidates for real-time communication in the context of transportation systems. AFDX implements a fixed priority scheduling strategy with two priority levels. Ethernet-AVB/TSN supports a similar fixed priority scheduling with an additional Credit-Based Shaper (CBS) mechanism. Besides, TSN can support a time-triggered scheduling strategy. One direct effect of the CBS mechanism is to increase the delay of its own flows while decreasing the delay of other priority ones. The former effect can be seen as the shaping restriction and the latter effect as the shaping benefit from CBS. The goal of this paper is to investigate the impact of CBS on different priority flows, especially on the intermediate priority ones, as well as the effect of CBS bandwidth allocation. It is based on a performance comparison of AVB/TSN and AFDX by simulation in an automotive case study. Furthermore, the shaping benefit is modeled based on integral operations from a network calculus perspective. Combining the analysis of shaping restriction and shaping benefit, some configuration suggestions on the setting of CBS bandwidth are given. Results show that the effect of CBS depends on flow loads and CBS configurations. A larger load of high priority flows in AVB tends to give better performance for the intermediate priority flows when compared with AFDX. The shaping benefit can be explained and calculated according to the change in the permitted maximum burst. PMID:28531158
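
    The Credit-Based Shaper behaviour analysed here can be pictured as a simple credit trace. In the sketch below, a class may start a frame only when its credit is non-negative, credit drains at sendSlope during transmission and recovers at idleSlope while waiting; the slopes, link speed and frame sizes are arbitrary illustrative numbers, and frames are assumed to be queued back to back.

      def cbs_trace(frame_bits, idle_slope, send_slope, link_bps):
          """Return (start, end) transmission times for back-to-back queued frames under CBS."""
          t, credit, out = 0.0, 0.0, []
          for bits in frame_bits:
              if credit < 0:                       # must wait until credit recovers to zero
                  t += -credit / idle_slope
                  credit = 0.0
              tx = bits / link_bps                 # transmission time of this frame
              out.append((t, t + tx))
              credit += send_slope * tx            # send_slope is negative: credit drains
              t += tx
          return out

      # Two 12 kbit frames, idleSlope 35 Mbit/s, sendSlope -65 Mbit/s, 100 Mbit/s link.
      print(cbs_trace([12000, 12000], idle_slope=35e6, send_slope=-65e6, link_bps=100e6))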

  13. Modernizing sports facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin, R.

    Modernization and renovation of sports facilities challenge the design team to balance a number of requirements: spectator and owner expectations, existing building and site conditions, architectural layouts, code and legislation issues, time constraints and budget issues. System alternatives are evaluated and selected based on the relative priorities of these requirements. These priorities are unique to each project. At Alexander Memorial Coliseum, project schedules, construction funds and facility usage became the priorities. The ACC basketball schedule and arrival of the Centennial Olympics dictated the construction schedule. Initiation and success of the project depended on the commitment of the design team to meet coliseum funding levels established three years ago. Analysis of facility usage and system alternative capabilities drove the design team to select a system that met the project requirements and will maximize the benefits to the owner and spectators for many years to come.

  14. A subjective scheduler for subjective dedicated networks

    NASA Astrophysics Data System (ADS)

    Suherman; Fakhrizal, Said Reza; Al-Akaidi, Marwan

    2017-09-01

    Multiple access is one of the important techniques within the medium access layer of the TCP/IP protocol stack. Each network technology implements the selected access method. Priority can be implemented in those methods to differentiate services. Some internet networks are dedicated to a specific purpose. Education browsing or tutorial video accesses are preferred in a library hotspot, while entertainment and sport contents could be subject to limitation. Current solutions may use IP address filters or access lists. This paper proposes that subjective properties of users or applications be used for priority determination in multiple access techniques. The NS-2 simulator is employed to evaluate the method. A video surveillance network using WiMAX is chosen as the object. Subjective priority is implemented on the WiMAX scheduler based on traffic properties. Three different traffic sources from monitoring video (palace, park, and market) are evaluated. The proposed subjective scheduler prioritizes the palace monitoring video, which results in better quality (xx dB) than the other monitoring spots.

  15. Strategic workload management and decision biases in aviation

    NASA Technical Reports Server (NTRS)

    Raby, Mireille; Wickens, Christopher D.

    1994-01-01

    Thirty pilots flew three simulated landing approaches under conditions of low, medium, and high workload. Workload conditions were created by varying time pressure and external communications requirements. Our interest was in how the pilots strategically managed or adapted to the increasing workload. We independently assessed the pilot's ranking of the priority of different discrete tasks during the approach and landing. Pilots were found to sacrifice some aspects of primary flight control as workload increased. For discrete tasks, increasing workload increased the amount of time in performing the high priority tasks, decreased the time in performing those of lowest priority, and did not affect duration of performance episodes or optimality of scheduling of tasks of any priority level. Individual differences analysis revealed that high-performing subjects scheduled discrete tasks earlier in the flight and shifted more often between different activities.

  16. 15 CFR 200.110 - Priorities and time of completion.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Priorities and time of completion. 200..., SERVICES, PROCEDURES, AND FEES § 200.110 Priorities and time of completion. Schedule work assignments for calibrations and other tests will generally be made in the order in which confirmed requests are received...

  17. Knowledge-based systems for power management

    NASA Technical Reports Server (NTRS)

    Lollar, L. F.

    1992-01-01

    NASA-Marshall's Electrical Power Branch has undertaken the development of expert systems in support of further advancements in electrical power system automation. Attention is given to the features of (1) the Fault Recovery and Management Expert System, (2) a resource scheduler, or Master of Automated Expert Scheduling Through Resource Orchestration, and (3) an adaptive load-priority manager, or Load Priority List Management System. The characteristics of an advisory battery manager for the Hubble Space Telescope, designated the 'nickel-hydrogen expert system', are also noted.

  18. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered as lasting a lifetime, thus saving the user from concerns related to energy management. The problem, however, is the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the priority scheme that has been assigned to it.

  19. Active Solution Space and Search on Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo

    In this paper we propose a new search method of the Genetic Algorithm for the Job-shop scheduling problem (JSP). With the coding method that represents job numbers in order to decide the priority for placing a job on the Gantt chart (called the ordinal representation with a priority), an active schedule in JSP is created by using left shifts. We first define an active solution: a solution that can create an active schedule without using left shifts, and the set of such solutions is defined as the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while the solution is evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems to compare with other methods. The experimental results show good performance.

  20. Strategies GeoCape Intelligent Observation Studies @ GSFC

    NASA Technical Reports Server (NTRS)

    Cappelaere, Pat; Frye, Stu; Moe, Karen; Mandl, Dan; LeMoigne, Jacqueline; Flatley, Tom; Geist, Alessandro

    2015-01-01

    This presentation provides a summary of the tradeoff studies conducted for GeoCape by the GSFC team on how to optimize GeoCape observation efficiency. Tradeoffs include total ground scheduling with simple priorities, ground scheduling with cloud forecast, ground scheduling with sub-area forecast, onboard scheduling with onboard cloud detection, and smart onboard scheduling and onboard image processing. The tradeoffs considered optimizing cost, downlink bandwidth and the total number of images acquired.

  1. Design and implementation of priority and time-window based traffic scheduling and routing-spectrum allocation mechanism in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan

    2016-02-01

    With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are important for increasing the efficiency of the network. In this paper, an adaptive traffic scheduling policy based on priority and time windows is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm proposed by us. The fairness index is introduced to improve the capability of spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and provides them with real-time and efficient responses. The node spectrum configuration scheme improves frequency resource utilization and brings the efficiency of the network into full play.

  2. Scheduling algorithm for data relay satellite optical communication based on artificial intelligent optimization

    NASA Astrophysics Data System (ADS)

    Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen

    2013-08-01

    Optical satellite communication, with the advantages of broad bandwidth, large capacity and low power consumption, has broken the bottleneck of traditional microwave satellite communication. The formation of a space-based information system with high-performance optical inter-satellite communication technology and the realization of global seamless coverage and mobile terminal access are the necessary trend in the development of optical satellite communication. Considering the resources, missions and constraints of a Data Relay Satellite Optical Communication System, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial intelligence optimization is put forward. For multiple relay satellites, user satellites and optical antennas, and multiple missions with several priority weights, the resources are scheduled reasonably by the operations "Ascertain Current Mission Scheduling Time" and "Refresh Latter Mission Time-Window". The priority weight is used as a parameter of the fitness function and the scheduling project is optimized by the Genetic Algorithm. In a simulation scenario including 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the simulation results reveal that the algorithm obtains satisfactory results in both efficiency and performance, and that the resource scheduling model and the optimization algorithm are suitable for the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.

  3. Synchronization of Leisure Conflicts in the Family Schedule.

    ERIC Educational Resources Information Center

    Kelly, John R.

    The model of the family as a set of contingent careers is brought into focus by Wilbert Moore's suggestions that in schedule synchronization the family takes priority in claiming time, the wife-mother asserts such claims and arranges the schedule, and common leisure activities symbolize the solidarity of the unit. The perception of parents of the…

  4. PLAStiCC: Predictive Look-Ahead Scheduling for Continuous dataflows on Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor K.

    2014-05-27

    Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high velocity data in near real time. Unlike batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to the variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable to meet the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based look-ahead approach. It not only addresses variations in the input data rates but also the underlying cloud infrastructure. In addition, we also propose several simpler static scheduling heuristics that operate in the absence of accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from public and private IaaS clouds. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.

  5. Performance comparison of token ring protocols for hard-real-time communication

    NASA Technical Reports Server (NTRS)

    Kamat, Sanjay; Zhao, Wei

    1992-01-01

    The ability to guarantee the deadlines of synchronous messages while maintaining a good aggregate throughput is an important consideration in the design of distributed real-time systems. In this paper, we study two token ring protocols, the priority driven protocol and the timed token protocol, for their suitability for hard real-time systems. Both these protocols use a token to control access to the transmission medium. In a priority driven protocol, messages are assigned priorities and the protocol ensures that messages are transmitted in the order of their priorities. Timed token protocols do not provide for priority arbitration but ensure that the maximum access delay for a station is bounded. For both protocols, we first derive the schedulability conditions under which the transmission deadlines of a given set of synchronous messages can be guaranteed. Subsequently, we use these schedulability conditions to quantitatively compare the average case behavior of the protocols. This comparison demonstrates that each of the protocols has its domain of superior performance and neither dominates the other for the entire range of operating conditions.

  6. Priority scheme planning for the robust SSM/PMAD testbed

    NASA Technical Reports Server (NTRS)

    Elges, Michael R.; Ashworth, Barry R.

    1991-01-01

    Whenever mixing priorities of manually controlled resources with those of autonomously controlled resources, the space station module power management and distribution (SSM/PMAD) environment requires cooperating expert system interaction between the planning function and the priority manager. The elements and interactions of the SSM/PMAD planning and priority management functions are presented. Their cooperation toward a common goal is described. In the SSM/PMAD testbed these actions are guided by having a system planning function, KANT, which has insight into the executing system and its automated database. First, the user must be given access to all information which may have an effect on the desired outcome. Second, the fault manager element, FRAMES, must be informed as to the change so that correct diagnoses and operations take place if and when faults occur. Third, some element must engage as mediator for selection of resources and actions to be added or removed at the user's request. This is performed by the priority manager, LPLMS. Lastly, the scheduling mechanism, MAESTRO, must provide future schedules adhering to the user-modified resource base.

  7. PWFQ: a priority-based weighted fair queueing algorithm for the downstream transmission of EPON

    NASA Astrophysics Data System (ADS)

    Xu, Sunjuan; Ye, Jiajun; Zou, Junni

    2005-11-01

    In the downstream direction of EPON, all Ethernet frames share one downlink channel from the OLT to destination ONUs. To guarantee differentiated services, a scheduling algorithm is needed to solve the link-sharing issue. In this paper, we first review the classical WFQ algorithm and point out the shortcomings in the fair queueing principle of the WFQ algorithm for EPON. Then we propose a novel scheduling algorithm called the Priority-based WFQ (PWFQ) algorithm, which distributes bandwidth based on priority. The PWFQ algorithm can guarantee the quality of real-time services under both light and heavy load. Simulation results also show that the PWFQ algorithm not only improves the delay performance of real-time services, but also meets worst-case delay bound requirements.
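
    The baseline WFQ principle that the paper starts from can be illustrated by ordering packets by virtual finish time, with the queue weight standing in for priority. The sketch below assumes all packets are backlogged at the same instant, so the usual system virtual time can be omitted; it is not the proposed PWFQ algorithm itself.

      import heapq

      def wfq_order(packets, weights):
          """packets: list of (queue_id, length in bytes). Returns the transmission order."""
          finish = {q: 0.0 for q in weights}       # running virtual finish time per queue
          heap = []
          for i, (q, length) in enumerate(packets):
              finish[q] += length / weights[q]     # virtual finish time of this packet
              heapq.heappush(heap, (finish[q], i, q, length))
          return [(q, length) for _, _, q, length in sorted(heap)]

      # The large-weight (high-priority) queue is served ahead of the best-effort queue.
      print(wfq_order([("be", 1500), ("rt", 1500), ("be", 1500)],
                      weights={"rt": 4.0, "be": 1.0}))
      # [('rt', 1500), ('be', 1500), ('be', 1500)]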

  8. Range and mission scheduling automation using combined AI and operations research techniques

    NASA Technical Reports Server (NTRS)

    Arbabi, Mansur; Pfeifer, Michael

    1987-01-01

    Ground-based systems for Satellite Command, Control, and Communications (C3) operations require a method for planning, scheduling and assigning the range resources such as: antenna systems scattered around the world, communications systems, and personnel. The method must accommodate user priorities, last minute changes, maintenance requirements, and exceptions from nominal requirements. Described are computer programs which solve 24 hour scheduling problems, using heuristic algorithms and a real time interactive scheduling process.

  9. Smart sensing to drive real-time loads scheduling algorithm in a domotic architecture

    NASA Astrophysics Data System (ADS)

    Santamaria, Amilcare Francesco; Raimondo, Pierfrancesco; De Rango, Floriano; Vaccaro, Andrea

    2014-05-01

    Nowadays, reducing power consumption, with its associated costs and environmental sustainability concerns, is an important goal. Automatic load control based on power consumption and use cycle is an effective approach to cost restraint. The purpose of such systems is to modulate the electricity demand, avoiding unorganized operation of the loads, by using intelligent techniques to manage them based on real-time scheduling algorithms. The goal is to coordinate a set of electrical loads to optimize energy costs and consumption according to the stipulated contract terms. The proposed algorithm uses two new notions: priority driven loads and smart scheduling loads. Priority driven loads can be turned off (put on standby) according to a priority policy established by the user if consumption exceeds a defined threshold; smart scheduling loads, on the contrary, are scheduled so as not to interrupt their Life Cycle (LC), safeguarding the devices' functions or allowing the user to operate them freely without the risk of exceeding the power threshold. Using these two notions and taking user requirements into account, the algorithm manages load activation and deactivation, allowing loads to complete their operation cycles without exceeding the consumption threshold, in an off-peak time range according to the electricity tariff. This logic is inspired by industrial lean manufacturing, whose focus is to minimize any kind of power waste by optimizing the available resources.
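
    A minimal sketch of the priority driven part of such a policy is given below: when total consumption exceeds the contractual threshold, interruptible loads are put on standby in increasing priority order, while smart scheduling loads are left untouched. Field names, the priority convention (smaller value = less important), and the example loads are illustrative assumptions.

      def enforce_threshold(loads, threshold_w):
          """Put the least important interruptible loads on standby until total
          consumption falls under the contractual threshold; smart scheduling
          loads are never interrupted in this sketch."""
          total = sum(l["power_w"] for l in loads if l["on"])
          # smaller priority value = less important, so it is shed first
          for load in sorted(loads, key=lambda l: l["priority"]):
              if total <= threshold_w:
                  break
              if load["on"] and load["kind"] == "priority_driven":
                  load["on"] = False          # standby
                  total -= load["power_w"]
          return total

      loads = [                              # hypothetical example loads
          {"name": "heater", "kind": "priority_driven", "priority": 1, "power_w": 1500, "on": True},
          {"name": "washer", "kind": "smart_scheduling", "priority": 3, "power_w": 900, "on": True},
          {"name": "lights", "kind": "priority_driven", "priority": 5, "power_w": 200, "on": True},
      ]
      print(enforce_threshold(loads, threshold_w=1200))   # -> 1100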

  10. Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration

    PubMed Central

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during emergency relief by dispatching urgently required materials to disaster-affected areas. Consequently, the question of how logistics companies can achieve maximum commercial profit while emergency tasks are performed effectively and satisfactorily has been widely addressed by practitioners and has attracted increasing attention from researchers. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously so as to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models. PMID:24391724

  11. Vehicle scheduling schemes for commercial and emergency logistics integration.

    PubMed

    Li, Xiaohui; Tan, Qingmei

    2013-01-01

    In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role during emergency relief by dispatching urgently required materials to disaster-affected areas. Consequently, the question of how logistics companies can achieve maximum commercial profit while emergency tasks are performed effectively and satisfactorily has been widely addressed by practitioners and has attracted increasing attention from researchers. In this paper, two vehicle scheduling models are proposed to solve the problem. One is a prediction-related scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business according to priority grades and then schedules both types of business jointly and simultaneously so as to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics in China. The results confirm the feasibility and effectiveness of the proposed models.

  12. CaLRS: A Critical-Aware Shared LLC Request Scheduling Algorithm on GPGPU

    PubMed Central

    Ma, Jianliang; Meng, Jinglei; Chen, Tianzhou; Wu, Minghui

    2015-01-01

    Ultra-high thread-level parallelism in modern GPUs usually generates numerous memory requests simultaneously, so there are always many memory requests waiting at each bank of the shared LLC (L2 in this paper) and at global memory. For global memory, various schedulers have already been developed to adjust the request sequence, but little work has focused on the service order at the shared LLC. We measured that in a large number of GPU applications requests regularly queue at the LLC banks for service, which provides an opportunity to optimize the LLC service order. By adjusting the order in which GPU memory requests are serviced, we can improve the schedulability of the SMs. We therefore propose a critical-aware shared LLC request scheduling algorithm (CaLRS). The priority assigned to each memory request is central to CaLRS: the criticality of each warp is represented by the number of memory requests that originate from the same warp but have not yet been serviced when they arrive at the shared LLC bank. Experiments show that the proposed scheme can effectively boost SM schedulability by promoting the scheduling priority of memory requests with high criticality, and thereby indirectly improves GPU performance. PMID:25729772
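
    The sketch below shows one way a criticality-ordered service queue at an LLC bank could work. The abstract defines a warp's criticality by its number of un-serviced requests at the bank; this sketch assumes that a warp with fewer outstanding requests is closer to becoming runnable and is therefore served first, which is an assumption about the paper's intent rather than its stated rule.

      from collections import Counter

      def order_llc_requests(pending):
          """pending: list of (warp_id, request_id) tuples queued at one LLC bank.
          Assumed criticality rule: a warp with fewer un-serviced requests at the
          bank is treated as more critical and served first; ties keep arrival
          order. The exact CaLRS definition may differ."""
          outstanding = Counter(warp for warp, _ in pending)
          indexed = list(enumerate(pending))
          indexed.sort(key=lambda item: (outstanding[item[1][0]], item[0]))
          return [entry for _, entry in indexed]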

  13. User modeling techniques for enhanced usability of OPSMODEL operations simulation software

    NASA Technical Reports Server (NTRS)

    Davis, William T.

    1991-01-01

    The PC based OPSMODEL operations software for modeling and simulation of space station crew activities supports engineering and cost analyses and operations planning. Using top-down modeling, the level of detail required in the data base can be limited to being commensurate with the results required of any particular analysis. To perform a simulation, a resource environment consisting of locations, crew definition, equipment, and consumables is first defined. Activities to be simulated are then defined as operations and scheduled as desired. These operations are defined within a 1000 level priority structure. The simulation on OPSMODEL, then, consists of the following: user defined, user scheduled operations executing within an environment of user defined resource and priority constraints. Techniques for prioritizing operations to realistically model a representative daily scenario of on-orbit space station crew activities are discussed. The large number of priority levels allows priorities to be assigned commensurate with the detail necessary for a given simulation. Several techniques for realistic modeling of day-to-day work carryover are also addressed.

  14. An Analysis of Task-Scheduling for a Generic Avionics Mission Computer

    DTIC Science & Technology

    2006-04-01

    Excerpt from the report's table of contents: 3.1.3 Response Time Analysis; 3.2 Non-Preemptive Fixed Priority Scheduling; 3.2.1 Simple Non-Preemptive Response Time Test; 3.2.2 Non-Preemptive Response Time Test; 3.3 Asynchronous Fixed ...

  15. 76 FR 46856 - Mail Classification Schedule Change

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-03

    ... POSTAL REGULATORY COMMISSION [Docket No. MC2011-26; Order No. 777] Mail Classification Schedule... recently-filed Postal Service request regarding classification changes to Priority Mail packaging. This...). SUPPLEMENTARY INFORMATION: On July 26, 2011, the Postal Service filed a notice of two classification changes...

  16. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions of the CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for an improved disk access and a higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority ordered buffering of the retrieved Web pages and CM data streams that are fed into an auto regressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.

  17. Production scheduling and rescheduling with genetic algorithms.

    PubMed

    Bierwirth, C; Mattfeld, D C

    1999-01-01

    A general model for job shop scheduling is described which applies to static, dynamic and non-deterministic production environments. Next, a Genetic Algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. Thereby, a highly efficient decoding procedure is proposed which strongly improves the quality of schedules. Finally, this technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.

  18. An approach to rescheduling activities based on determination of priority and disruptivity

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.; Johnston, Mark D.

    1990-01-01

    A constraint-based scheduling system called SPIKE is being used to create long term schedules for the Hubble Space Telescope. Feedback from the spacecraft or from other ground support systems may invalidate some scheduling decisions, and the activities concerned must be reconsidered. A rescheduling-priority function is defined which, for a given activity, performs a heuristic analysis and produces a relative numerical value used to rank all such entities in the order in which they should be rescheduled. A disruptivity function is also defined which places a relative numeric value on how much a pre-existing schedule would have to be changed in order to reschedule an activity. Using these functions, two algorithms (a stochastic neural network approach and an exhaustive search approach) are proposed to find the best place to reschedule an activity. Prototypes were implemented and preliminary testing reveals that the exhaustive technique produces only marginally better results at much greater computational cost.
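
    The exhaustive-search variant can be sketched as below: invalidated activities are reconsidered in decreasing rescheduling-priority order, and for each one every candidate slot is scored with the disruptivity function, keeping the least disruptive placement. The priority and disruptivity functions are passed in as placeholders; their actual heuristics are defined in the paper.

      def best_reschedule_slot(activity, candidate_slots, schedule, disruptivity):
          """Exhaustive search: score every candidate slot with the (placeholder)
          disruptivity function and keep the least disruptive placement."""
          best_slot, best_cost = None, float("inf")
          for slot in candidate_slots:
              cost = disruptivity(activity, slot, schedule)
              if cost < best_cost:
                  best_slot, best_cost = slot, cost
          return best_slot, best_cost

      def reschedule_all(invalidated, slots_for, schedule, priority_fn, disruptivity):
          """Reconsider invalidated activities in decreasing rescheduling priority."""
          for act in sorted(invalidated, key=priority_fn, reverse=True):
              slot, _ = best_reschedule_slot(act, slots_for(act), schedule, disruptivity)
              if slot is not None:
                  schedule[act] = slot
          return schedule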

  19. New preemptive scheduling for OBS networks considering cascaded wavelength conversion

    NASA Astrophysics Data System (ADS)

    Gao, Xingbo; Bassiouni, Mostafa A.; Li, Guifang

    2009-05-01

    In this paper we introduce a new preemptive scheduling technique for next generation optical burst-switched networks considering the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all optically from source to destination, each wavelength conversion performed along the lightpath may cause certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver would not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to deal with the fairness problem and aim at burst loss reduction simultaneously in optical burst switching. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already conducted wavelength conversions among other factors for this burst. When contention occurs, a new arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results have shown that the proposed scheme further improves fairness and achieves burst loss reduction as well.

  20. An Optimal Static Scheduling Algorithm for Hard Real-Time Systems Specified in a Prototyping Language

    DTIC Science & Technology

    1989-12-01

    to construct because the mechanism is a dispatching procedure. Since all nonpreemptive schedules are contained in the set of all preemptive schedules...the optimal value of T in the preemptive case is at least a lower bound on the optimal T for the nonpreemptive schedules. This principle is the...adapt to changes in the environment. In hard real-time systems, tasks are also distinguished as preemptable and nonpreemptable. A task is preemptable

  1. Flight evaluation of an engine static pressure noseprobe in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Foote, C. H.; Jaekel, R. F.

    1981-01-01

    The flight testing of an inlet static pressure probe and instrumented inlet case produced results consistent with sea-level and altitude stand testing. The F-15 flight test verified the basic relationship of total to static pressure ratio versus corrected airflow and automatic distortion downmatch with the engine pressure ratio control mode. Additionally, the backup control inlet case statics demonstrated sufficient accuracy for backup control fuel flow scheduling, and the station 6 manifolded production probe was in agreement with the flight test station 6 total pressure probes.

  2. Microgravity

    NASA Image and Video Library

    1998-09-30

    The Electrostatic Levitator (ESL) Facility established at Marshall Space Flight Center (MSFC) supports NASA's Microgravity Materials Science Research Program. NASA materials science investigations include ground-based, flight definition and flight projects. Flight definition projects, with demanding science concept review schedules, receive highest priority for scheduling experiment time in the Electrostatic Levitator (ESL) Facility.

  3. 14 CFR 1215.109 - Scheduling user service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Scheduling user service. 1215.109 Section 1215.109 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA RELAY... highest priority: (i) Launch, reentry, landing of the STS Shuttle, or other NASA launches. (ii) NASA...

  4. 14 CFR 1215.109 - Scheduling user service.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Scheduling user service. 1215.109 Section 1215.109 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA RELAY... highest priority: (i) Launch, reentry, landing of the STS Shuttle, or other NASA launches. (ii) NASA...

  5. 14 CFR 1215.109 - Scheduling user service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Scheduling user service. 1215.109 Section 1215.109 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION TRACKING AND DATA RELAY... highest priority: (i) Launch, reentry, landing of the STS Shuttle, or other NASA launches. (ii) NASA...

  6. Multi-vehicle mobility allowance shuttle transit (MAST) system : an analytical model to select the fleet size and a scheduling heuristic.

    DOT National Transportation Integrated Search

    2012-06-01

    The mobility allowance shuttle transit (MAST) system is a hybrid transit system in which vehicles are allowed to deviate from a fixed route to serve flexible demand. A mixed integer programming (MIP) formulation for the static scheduling problem ...

  7. 75 FR 9879 - Office of Innovation and Improvement; Overview Information Magnet Schools Assistance Program...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... narrative is where you, the applicant, address the selection criteria and two of the competitive preference priorities that reviewers use to evaluate your application. The two competitive preference priorities that... person listed in this notice at least two weeks before the scheduled meeting date. Although we will...

  8. System and Method for Network Bandwidth, Buffers and Timing Management Using Hybrid Scheduling of Traffic with Different Priorities and Guarantees

    NASA Technical Reports Server (NTRS)

    Bonk, Ted (Inventor); Hall, Brendan (Inventor); Smithgall, William Todd (Inventor); Varadarajan, Srivatsan (Inventor); DeLay, Benjamin F. (Inventor)

    2017-01-01

    Systems and methods for network bandwidth, buffers and timing management using hybrid scheduling of traffic with different priorities and guarantees are provided. In certain embodiments, a method of managing network scheduling and configuration comprises, for each transmitting end station, reserving one exclusive buffer for each virtual link to be transmitted from the transmitting end station; for each receiving end station, reserving exclusive buffers for each virtual link to be received at the receiving end station; and for each switch, reserving an exclusive buffer for each virtual link to be received at an input port of the switch. The method further comprises determining if each respective transmitting end station, receiving end station, and switch has sufficient capability to support the reserved buffers; and reporting buffer infeasibility if each respective transmitting end station, receiving end station, and switch does not have sufficient capability to support the reserved buffers.
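
    A minimal sketch of the reservation and feasibility check described above: count one reserved buffer per virtual link at its transmitting end station, at each receiving end station, and at each switch it traverses (simplified to one per switch rather than per input port), then report any device whose reserved count exceeds its capacity. Field names and the capacity representation are illustrative assumptions.

      def check_buffer_feasibility(virtual_links, capacity):
          """virtual_links: dicts with 'tx' (transmitting end station), 'rx' (list
          of receiving end stations) and 'path' (switches traversed).
          capacity: buffers each device can provide. Returns devices whose
          reserved buffer count exceeds their capacity (empty list = feasible)."""
          needed = {}
          for vl in virtual_links:
              needed[vl["tx"]] = needed.get(vl["tx"], 0) + 1      # one buffer at the sender
              for rx in vl["rx"]:
                  needed[rx] = needed.get(rx, 0) + 1              # one per receiver
              for sw in vl["path"]:
                  needed[sw] = needed.get(sw, 0) + 1              # one per switch (simplified)
          return [dev for dev, n in needed.items() if n > capacity.get(dev, 0)]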

  9. Resource-constrained scheduling with hard due windows and rejection penalties

    NASA Astrophysics Data System (ADS)

    Garcia, Christopher

    2016-09-01

    This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.
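
    The priority rule heuristic mentioned above can be sketched roughly as follows: jobs are considered in decreasing profit-per-unit-time order and accepted only if some start time inside the due window keeps every renewable resource within its limit; otherwise the rejection penalty is incurred. The ordering rule, field names, and discrete-time resource profile are illustrative assumptions rather than the paper's exact heuristic.

      def priority_rule_schedule(jobs, horizon, resource_caps):
          """Greedy accept/reject sketch: jobs in decreasing profit-per-unit-time
          order; accept a job only if some start time inside its due window keeps
          every renewable resource within its limit, else pay its penalty.
          Due windows are assumed to lie inside [0, horizon)."""
          usage = {r: [0] * horizon for r in resource_caps}
          accepted, profit = [], 0.0
          for job in sorted(jobs, key=lambda j: j["profit"] / j["duration"], reverse=True):
              placed = False
              for start in range(job["release"], job["deadline"] - job["duration"] + 1):
                  window = range(start, start + job["duration"])
                  fits = all(usage[r][t] + need <= resource_caps[r]
                             for r, need in job["needs"].items() for t in window)
                  if fits:
                      for r, need in job["needs"].items():
                          for t in window:
                              usage[r][t] += need
                      accepted.append((job["name"], start))
                      profit += job["profit"]
                      placed = True
                      break
              if not placed:
                  profit -= job["penalty"]
          return accepted, profit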

  10. A Generic and Target Architecture For Command and Control Information Systems

    DTIC Science & Technology

    1991-09-01

    forces, logistics, and optimum routing of forces to destination; supports development of the force, material and personnel lists, schedules, and...recommendations T.5, T.6, and T.73 for Telefax. Teletex, Textfax, and Telefax are not currently scheduled to become a part of GOSIP. In the 1995-1997 time...defining application interfaces to the functional areas that impact resource management, for example, priority scheduling, real-time files, and

  11. Static Scheduler for Hard Real-Time Tasks on Multiprocessor Systems

    DTIC Science & Technology

    1992-09-01

    Foundation of Computer Science, 1980. [SIM83] Simons, B., "Multiprocessor Scheduling of Unit-Time Jobs with Arbitrary Release Times and Deadlines", SIAM...

  12. Effective preemptive scheduling scheme for optical burst-switched networks with cascaded wavelength conversion consideration

    NASA Astrophysics Data System (ADS)

    Gao, Xingbo

    2010-03-01

    We introduce a new preemptive scheduling technique for next-generation optical burst switching (OBS) networks considering the impact of cascaded wavelength conversions. It has been shown that when optical bursts are transmitted all optically from source to destination, each wavelength conversion performed along the lightpath may cause certain signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver would not be able to recover the original data. Accordingly, subject to this practical impediment, we improve a recently proposed fair channel scheduling algorithm to deal with the fairness problem and aim at burst loss reduction simultaneously in OBS environments. In our scheme, the dynamic priority associated with each burst is based on a constraint threshold and the number of already conducted wavelength conversions among other factors for this burst. When contention occurs, a new arriving superior burst may preempt another scheduled one according to their priorities. Extensive simulation results have shown that the proposed scheme further improves fairness and achieves burst loss reduction as well.

  13. A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy

    PubMed Central

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    This paper focuses on the dynamic scheduling problem for earth-observing satellites (EOS); an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arrival times and deadlines of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodic triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by combining the RH strategy with various heuristic algorithms. Finally, the scheduling results of different algorithms are compared, and the presented methods are demonstrated to be efficient by extensive experiments. PMID:23690742

  14. A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.

    PubMed

    Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma

    2013-01-01

    This paper focuses on the dynamic scheduling problem for earth-observing satellites (EOS); an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arrival times and deadlines of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodic triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by combining the RH strategy with various heuristic algorithms. Finally, the scheduling results of different algorithms are compared, and the presented methods are demonstrated to be efficient by extensive experiments.

  15. A comparison of multiprocessor scheduling methods for iterative data flow architectures

    NASA Technical Reports Server (NTRS)

    Storch, Matthew

    1993-01-01

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

  16. Scheduling elective surgeries: the tradeoff among bed capacity, waiting patients and operating room utilization using goal programming.

    PubMed

    Li, Xiangyong; Rafaliya, N; Baki, M Fazle; Chaouch, Ben A

    2017-03-01

    Scheduling surgeries in operating rooms under limited competing resources such as surgical and nursing staff, anesthesiologists, medical equipment, and recovery beds in surgical wards is a complicated process. A well-designed schedule should be concerned with the welfare of the entire system by allocating the available resources in an efficient and effective manner. In this paper, we develop an integer linear programming model accommodating multiple goals for optimally scheduling elective surgeries based on the availability of surgeons and operating rooms over a time horizon. In particular, the model is concerned with the minimization of the following important quantities: (1) the anticipated number of patients waiting for service; (2) the underutilization of operating room time; (3) the maximum expected number of patients in the recovery unit; and (4) the expected range (the difference between maximum and minimum expected number) of patients in the recovery unit. We develop two goal programming (GP) models: a lexicographic GP model and a weighted GP model. The lexicographic GP model schedules operating rooms when various preemptive priority levels are given to these four goals. A numerical study is conducted to illustrate the optimal master-surgery schedule obtained from the models. The numerical results demonstrate that, when the available number of surgeons and operating rooms is known without error over the planning horizon, the proposed models can produce good schedules, and that the priority levels and preference weights of the four goals affect the resulting schedules.

  17. Dynamic Scheduling for Web Monitoring Crawler

    DTIC Science & Technology

    2009-02-27

    There is research on static scheduling methods, but it is not included in this project, because this project mainly focuses on the event-driven...pages from public search engines. This research aims to propose various query generation methods using an MCRDR knowledge base and to evaluate them...

  18. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture can briefly store jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then execute partitioning tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed earlier, and it shortens execution time. Finally, we conduct a comparison experiment between LMCpri and a cloud assisting architecture, and the results reveal that LMCpri offers better performance than the cloud assisting architecture.

  19. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture can briefly store jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then execute partitioning tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed earlier, and it shortens execution time. Finally, we conduct a comparison experiment between LMCpri and a cloud assisting architecture, and the results reveal that LMCpri offers better performance than the cloud assisting architecture. PMID:27419854

  20. Scheduling revisited workstations in integrated-circuit fabrication

    NASA Technical Reports Server (NTRS)

    Kline, Paul J.

    1992-01-01

    The cost of building new semiconductor wafer fabrication factories has grown rapidly, and a state-of-the-art fab may cost 250 million dollars or more. Obtaining an acceptable return on this investment requires high productivity from the fabrication facilities. This paper describes the Photo Dispatcher system which was developed to make machine-loading recommendations on a set of key fab machines. Dispatching policies that generally perform well in job shops (e.g., Shortest Remaining Processing Time) perform poorly for workstations such as photolithography which are visited several times by the same lot of silicon wafers. The Photo Dispatcher evaluates the history of workloads throughout the fab and identifies bottleneck areas. The scheduler then assigns priorities to lots depending on where they are headed after photolithography. These priorities are designed to avoid starving bottleneck workstations and to give preference to lots that are headed to areas where they can be processed with minimal waiting. Other factors considered by the scheduler to establish priorities are the nearness of a lot to the end of its process flow and the time that the lot has already been waiting in queue. Simulations that model the equipment and products in one of Texas Instruments' wafer fabs show the Photo Dispatcher can produce a 10 percent improvement in the time required to fabricate integrated circuits.

  1. Job Priorities on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    ...allocation when run with qos=high. Requesting a Node Reservation: If you are doing work that requires real... scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when...

  2. Packet Scheduling Mechanism to Improve Quality of Short Flows and Low-Rate Flows

    NASA Astrophysics Data System (ADS)

    Yokota, Kenji; Asaka, Takuya; Takahashi, Tatsuro

    In recent years elephant flows have been increasing on the Internet with the expansion of peer-to-peer (P2P) applications. As a result, bandwidth is occupied by specific users, leading to unfair resource allocation. The packet-scheduling mechanism currently in widest use is first-in first-out (FIFO), in which the bandwidth available to short flows is limited by elephant flows. Least attained service (LAS), which decides the transfer priority of packets by the total amount of data transferred in each flow, was proposed to solve this problem. However, routers with LAS throttle flows with a large amount of transferred data even if they are low-rate. Therefore, it is necessary to improve the quality of low-rate flows with long holding times, such as voice over Internet protocol (VoIP) applications. This paper proposes rate-based priority control (RBPC), which calculates the flow rate and controls priority by using it. Our proposed method can transfer short flows and low-rate flows preferentially. Moreover, its fairness performance is shown through simulations.
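
    A minimal sketch of rate-based priority control: each flow's rate is estimated over a sliding window of recently forwarded bytes, and lower-rate flows receive higher forwarding priority so that short and low-rate flows are not starved by elephant flows. The window length and the direct use of the rate as the priority key are illustrative assumptions, not the paper's exact mechanism.

      import time

      class RateBasedPriority:
          """Estimate each flow's rate over a sliding window; a smaller estimated
          rate gives a smaller priority key, i.e. earlier service."""

          def __init__(self, window_s=1.0):
              self.window = window_s
              self.history = {}                 # flow_id -> list of (timestamp, bytes)

          def record(self, flow_id, nbytes, now=None):
              now = time.monotonic() if now is None else now
              hist = self.history.setdefault(flow_id, [])
              hist.append((now, nbytes))
              # drop samples that fell out of the window
              self.history[flow_id] = [(t, b) for t, b in hist if now - t <= self.window]

          def priority_key(self, flow_id, now=None):
              now = time.monotonic() if now is None else now
              recent = sum(b for t, b in self.history.get(flow_id, [])
                           if now - t <= self.window)
              return recent / self.window       # bytes per second; lower = served first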

  3. TinyOS-based quality of service management in wireless sensor networks

    USGS Publications Warehouse

    Peterson, N.; Anusuya-Rangappa, L.; Shirazi, B.A.; Huang, R.; Song, W.-Z.; Miceli, M.; McBride, D.; Hurson, A.; LaHusen, R.

    2009-01-01

    Previously the cost and extremely limited capabilities of sensors prohibited Quality of Service (QoS) implementations in wireless sensor networks. With advances in technology, sensors are becoming significantly less expensive and the increases in computational and storage capabilities are opening the door for new, sophisticated algorithms to be implemented. Newer sensor network applications require higher data rates with more stringent priority requirements. We introduce a dynamic scheduling algorithm to improve bandwidth for high priority data in sensor networks, called Tiny-DWFQ. Our Tiny-Dynamic Weighted Fair Queuing scheduling algorithm allows for dynamic QoS for prioritized communications by continually adjusting the treatment of communication packages according to their priorities and the current level of network congestion. For performance evaluation, we tested Tiny-DWFQ, Tiny-WFQ (traditional WFQ algorithm implemented in TinyOS), and FIFO queues on an Imote2-based wireless sensor network and report their throughput and packet loss. Our results show that Tiny-DWFQ performs better in all test cases. © 2009 IEEE.

  4. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, we cannot apply the Do-all or Do-across techniques for parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and are assigned to processors using optimal static scheduling at compile time in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.

  5. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  6. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  7. Data-oriented scheduling for PROOF

    NASA Astrophysics Data System (ADS)

    Xu, Neng; Guan, Wen; Wu, Sau Lan; Ganis, Gerardo

    2011-12-01

    The Parallel ROOT Facility - PROOF - is a distributed analysis system optimized for I/O-intensive analysis tasks of HEP data. With the LHC entering the analysis phase, PROOF has become a natural ingredient for computing farms at the Tier-3 level. These analysis facilities will typically be used by a few tens of users, and can also be federated into a sort of analysis cloud corresponding to the Virtual Organization of the experiment. Proper scheduling is required to guarantee fair resource usage, to enforce priority policies and to optimize the throughput. In this paper we discuss an advanced priority system that we are developing for PROOF. The system has been designed to automatically adapt to the unknown length of the tasks, to take into account the data location and availability (including distribution across geographically separated sites), and the {group, user} default priorities. In this system, every element - user, group, dataset, job slot and storage - gets its own priority, and those priorities are dynamically linked with each other. In order to tune the interplay between the various components, we have designed and started implementing a simulation application that can model various types and sizes of PROOF clusters. In this application a monitoring package records all of the priority changes so that we can easily understand and tune the performance. We will discuss the status of our simulation and show examples of the results we are expecting from it.

  8. HPV vaccine catch up schedule - an opportunity for chlamydia screening.

    PubMed

    Grotowski, Miriam; May, Jenny

    2008-07-01

    The human papillomavirus (HPV) vaccine (Gardasil) catch up schedule in general practice is available until June 2009 to females not in school and under the age of 27 years. A course of three injections is given over 6 months. This provides a unique opportunity for sexual health screening in an age group where chlamydia screening is a priority.

  9. Spatial analysis of fuel treatment options for chaparral on the Angeles national forest

    Treesearch

    G. Jones; J. Chew; R. Silverstein; C. Stalling; J. Sullivan; J. Troutwine; D. Weise; D. Garwood

    2008-01-01

    Spatial fuel treatment schedules were developed for the chaparral vegetation type on the Angeles National Forest using the Multi-resource Analysis and Geographic Information System (MAGIS). Schedules varied by the priority given to various wildland urban interface areas and the general forest, as well as by the number of acres treated per decade. The effectiveness of...

  10. 78 FR 41132 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-09

    ... amendments to eliminate the prohibition against general solicitation and general advertising in certain... times, changes in Commission priorities require alterations in the scheduling of meeting items. For...

  11. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

    Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper, we present a fundamentally different encoding that more accurately resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  12. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, total tardiness, etc. A fixed-sized external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric-based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
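
    One component described above, the fixed-size external archive maintained with Pareto dominance and crowding distance, can be sketched as follows for minimization objectives such as makespan, total machine load, and total tardiness. The truncation rule (drop the most crowded member) is a common convention assumed here, not necessarily the paper's exact variant.

      def dominates(a, b):
          """a, b: objective vectors to be minimized (makespan, load, tardiness, ...)."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def crowding_distance(points):
          n, m = len(points), len(points[0])
          dist = [0.0] * n
          for k in range(m):
              order = sorted(range(n), key=lambda i: points[i][k])
              lo, hi = points[order[0]][k], points[order[-1]][k]
              dist[order[0]] = dist[order[-1]] = float("inf")   # keep boundary points
              if hi == lo:
                  continue
              for pos in range(1, n - 1):
                  dist[order[pos]] += (points[order[pos + 1]][k]
                                       - points[order[pos - 1]][k]) / (hi - lo)
          return dist

      def update_archive(archive, candidate, max_size):
          """Keep only non-dominated solutions; if full, drop the most crowded one
          (i.e. the member with the smallest crowding distance)."""
          if any(dominates(a, candidate) for a in archive):
              return archive
          archive = [a for a in archive if not dominates(candidate, a)] + [candidate]
          if len(archive) > max_size:
              d = crowding_distance(archive)
              archive.pop(d.index(min(d)))
          return archive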

  13. 47 CFR 76.95 - Exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... CABLE TELEVISION SERVICE Network Non-duplication Protection, Syndicated Exclusivity and Sports Blackout... priority station for one hour following the scheduled time of completion of the broadcast of a live sports...

  14. 47 CFR 76.95 - Exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... CABLE TELEVISION SERVICE Network Non-duplication Protection, Syndicated Exclusivity and Sports Blackout... priority station for one hour following the scheduled time of completion of the broadcast of a live sports...

  15. 47 CFR 76.95 - Exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CABLE TELEVISION SERVICE Network Non-duplication Protection, Syndicated Exclusivity and Sports Blackout... priority station for one hour following the scheduled time of completion of the broadcast of a live sports...

  16. 47 CFR 76.95 - Exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... CABLE TELEVISION SERVICE Network Non-duplication Protection, Syndicated Exclusivity and Sports Blackout... priority station for one hour following the scheduled time of completion of the broadcast of a live sports...

  17. 47 CFR 76.95 - Exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CABLE TELEVISION SERVICE Network Non-duplication Protection, Syndicated Exclusivity and Sports Blackout... priority station for one hour following the scheduled time of completion of the broadcast of a live sports...

  18. 76 FR 60500 - Advisory Committee on Immunization Practices (ACIP)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-29

    ... schedule; human papillomavirus vaccine; hepatitis B vaccine; meningococcal vaccines; influenza; 13-valent... subject to change as priorities dictate. Contact Person for More Information: Stephanie B. Thomas...

  19. CQPSO scheduling algorithm for heterogeneous multi-core DAG task model

    NASA Astrophysics Data System (ADS)

    Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng

    2017-07-01

    Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list was built, and the processor with the minimum cumulative earliest finish time (EFT) was chosen as the target of the first task assignment. Task precedence relationships were satisfied and the total execution time of all tasks was minimized. The experimental results show that the proposed algorithm is simple and feasible, offers strong optimization ability and fast convergence, and can be applied to task scheduling optimization for other heterogeneous and distributed environments.
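
    The core list-scheduling step, assigning each task in priority order to the core with the minimum earliest finish time, can be sketched as below. Communication costs between cores and the CQPSO search around this step are omitted; the data layout (a per-task list of per-core execution costs) is an illustrative assumption.

      def list_schedule_eft(order, exec_time, preds):
          """order: tasks in decreasing priority (the priority scheduling list),
          assumed consistent with DAG precedence (predecessors appear first).
          exec_time[task] is a list of per-core execution costs (heterogeneous).
          preds maps a task to its DAG predecessors. Communication costs omitted."""
          n_cores = len(next(iter(exec_time.values())))
          core_free = [0.0] * n_cores
          finish, placement = {}, {}
          for t in order:
              ready = max((finish[p] for p in preds.get(t, ())), default=0.0)
              best_core, best_eft = None, float("inf")
              for c in range(n_cores):
                  eft = max(ready, core_free[c]) + exec_time[t][c]
                  if eft < best_eft:
                      best_core, best_eft = c, eft
              core_free[best_core] = best_eft
              finish[t], placement[t] = best_eft, best_core
          return placement, max(finish.values())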

  20. 76 FR 42143 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-18

    ..., 2011: Adjudicatory Matters. At times, changes in Commission priorities require alterations in the scheduling of meeting items. For further information and to ascertain what, if any, matters have been added...

  1. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.

    PubMed

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-06-26

    Sensor networks become increasingly a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized in predictable jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of the H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with ARM7 microcontroller.

  2. Dawn Usage, Scheduling, and Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Louis, S

    2009-11-02

    This document describes Dawn use, scheduling, and governance concerns. Users started running full-machine science runs in early April 2009 during the initial open shakedown period. Scheduling Dawn while in the Open Computing Facility (OCF) was controlled and coordinated via phone calls, emails, and a small number of controlled banks. With Dawn moving to the Secure Computing Facility (SCF) in fall of 2009, a more detailed scheduling and governance model is required. The three major objectives are: (1) Ensure Dawn resources are allocated on a program priority-driven basis; (2) Utilize Dawn resources on the job mixes for which they were intended; and (3) Minimize idle cycles through use of partitions, banks and proper job mix. The SCF workload for Dawn will be inherently different than Purple or BG/L, and therefore needs a different approach. Dawn's primary function is to permit adequate access for tri-lab code development in preparation for Sequoia, and in particular for weapons multi-physics codes in support of UQ. A second purpose is to provide time allocations for large-scale science runs and for UQ suite calculations to advance SSP program priorities. This proposed governance model will be the basis for initial time allocation of Dawn computing resources for the science and UQ workloads that merit priority on this class of resource, either because they cannot be reasonably attempted on any other resources due to size of problem, or because of the unavailability of sizable allocations on other ASC capability or capacity platforms. This proposed model intends to make the most effective use of Dawn as possible, but without being overly constrained by more formal proposal processes such as those now used for Purple CCCs.

  3. Scheduling Results for the THEMIS Observation Scheduling Tool

    NASA Technical Reports Server (NTRS)

    Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip

    2011-01-01

    We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
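
    A stripped-down sketch of the grid-based greedy step: target regions are rasterized to sets of grid cells, each candidate observation covers a set of cells, and observations are picked greedily by priority-weighted newly covered cells until no further gain is possible. Spacecraft resource, timing, and visibility constraints handled by the real tool are omitted here.

      def greedy_cover(targets, observations, priority):
          """targets: region -> set of grid cells to cover.
          observations: candidate observation -> set of grid cells it images.
          priority: observation -> science priority weight (default 1).
          Greedily pick the observation with the largest priority-weighted gain."""
          uncovered = set().union(*targets.values()) if targets else set()
          chosen = []
          while uncovered:
              best, best_gain = None, 0
              for obs, cells in observations.items():
                  gain = priority.get(obs, 1) * len(cells & uncovered)
                  if gain > best_gain:
                      best, best_gain = obs, gain
              if best is None:        # nothing covers any remaining cell
                  break
              chosen.append(best)
              uncovered -= observations[best]
          return chosen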

  4. Scheduling Real-Time Mixed-Criticality Jobs

    NASA Astrophysics Data System (ADS)

    Baruah, Sanjoy K.; Bonifaci, Vincenzo; D'Angelo, Gianlorenzo; Li, Haohan; Marchetti-Spaccamela, Alberto; Megow, Nicole; Stougie, Leen

    Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems, that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We demonstrate first the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to two sets of certification requirements. Then we quantify, via the metric of processor speedup factor, the effectiveness of two techniques, reservation-based scheduling and priority-based scheduling, that are widely used in scheduling such mixed-criticality systems, showing that the latter of the two is superior to the former. We also show that the speedup factors are tight for these two techniques.

  5. Improved Space Surveillance Network (SSN) Scheduling using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Stottler, D.

    There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the result would be significantly increased space catalog accuracy and fewer lost objects with the same set of sensor resources. This approach can also inherently trade off fewer high-priority tasks against more lower-priority tasks, when there is benefit in doing so. Currently the project has completed a prototyping and feasibility study, using open source data on the SSN's sensors, that showed significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.

  6. 77 FR 39749 - Sunshine Act Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... advertising in securities offerings conducted pursuant to Rule 506 of Regulation D under the Securities Act... Startups Act. At times, changes in Commission priorities require alterations in the scheduling of meeting...

  7. OGUPSA sensor scheduling architecture and algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Zhixiong; Hintz, Kenneth J.

    1996-06-01

    This paper introduces a new architecture for a sensor measurement scheduler as well as a dynamic sensor scheduling algorithm called the on-line, greedy, urgency-driven, preemptive scheduling algorithm (OGUPSA). OGUPSA incorporates a preemptive mechanism which uses three policies: (1) most-urgent-first (MUF), (2) earliest-completed-first (ECF), and (3) least-versatile-first (LVF). The three policies are applied successively to dynamically allocate, schedule, and distribute a set of arriving tasks among a set of sensors. OGUPSA can also detect the failure of a task to meet a deadline, and it can generate an optimal schedule, in the sense of minimum makespan, for a group of tasks with the same priorities. A side benefit is OGUPSA's ability to improve dynamic load balance among all sensors while remaining a polynomial-time algorithm. Results of a simulation are presented for a simple sensor system.
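
    The successive MUF / ECF / LVF policies can be illustrated with the following sketch: pick the most urgent task, then the sensor that can finish it earliest, breaking ties in favour of the least versatile sensor. The task and sensor fields are illustrative assumptions, not OGUPSA's actual data structures or its preemption mechanism.

```python
# Minimal sketch of successive MUF / ECF / LVF dispatching.

def dispatch(tasks, sensors):
    """tasks: list of dicts {'id', 'deadline', 'duration', 'type'};
    sensors: list of dicts {'id', 'free_at', 'types' (set of task types)}."""
    assignments = []
    for task in sorted(tasks, key=lambda t: t['deadline']):            # MUF
        capable = [s for s in sensors if task['type'] in s['types']]
        if not capable:
            continue
        best = min(capable,
                   key=lambda s: (s['free_at'] + task['duration'],     # ECF
                                  len(s['types'])))                    # LVF
        finish = best['free_at'] + task['duration']
        if finish > task['deadline']:
            assignments.append((task['id'], None, 'deadline missed'))
            continue
        best['free_at'] = finish
        assignments.append((task['id'], best['id'], finish))
    return assignments

if __name__ == "__main__":
    tasks = [{'id': 'T1', 'deadline': 10, 'duration': 4, 'type': 'radar'},
             {'id': 'T2', 'deadline': 6,  'duration': 3, 'type': 'optical'}]
    sensors = [{'id': 'S1', 'free_at': 0, 'types': {'radar', 'optical'}},
               {'id': 'S2', 'free_at': 0, 'types': {'optical'}}]
    print(dispatch(tasks, sensors))   # T2 goes to the less versatile S2
```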

  8. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    PubMed Central

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines over the Internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since resources are limited in these private clouds, maximizing resource utilization and providing guaranteed service to users are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, such as V-MCT and priority scheduling algorithms. PMID:26473166

  9. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter.

    PubMed

    Loganathan, Shyamala; Mukherjee, Saswati

    2015-01-01

    Cloud computing is an on-demand computing model which uses virtualization technology to provide cloud resources to users in the form of virtual machines over the Internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since resources are limited in these private clouds, maximizing resource utilization and providing guaranteed service to users are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and resource availability in its scheduling decisions. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, such as V-MCT and priority scheduling algorithms.

  10. FALCON: A distributed scheduler for MIMD architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimshaw, A.S.; Vivas, V.E. Jr.

    1991-01-01

    This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.

  11. JPRS Report, Near East & South Asia, India

    DTIC Science & Technology

    1991-09-04

    effectively utilized. The national Scheduled Castes and Scheduled Tribes High priority will, therefore, be accorded to education. Finance and Development ...the Government’s reaction. Whatever is there was a sharp reduction in our foreign exchange banned or is obstructive is a negative development ...Unless I distribution system will be increased by 85 paise per kg find substantial improvement in tax compliance in the to Rs 6. 10 per kg with effect

  12. An Adaptive Priority Tuning System for Optimized Local CPU Scheduling using BOINC Clients

    NASA Astrophysics Data System (ADS)

    Mnaouer, Adel B.; Ragoonath, Colin

    2010-11-01

    Volunteer Computing (VC) is a Distributed Computing model which utilizes idle CPU cycles from computing resources donated by volunteers who are connected through the Internet to form a very large-scale, loosely coupled High Performance Computing environment. Distributed volunteer computing environments such as the BOINC framework are concerned mainly with the efficient scheduling of the available resources to the applications which require them. The BOINC framework thus contains a number of scheduling policies/algorithms, both on the server side and on the client, which work together to maximize the use of available resources and to provide a degree of QoS in a highly volatile environment. This paper focuses on the BOINC client and introduces an adaptive priority-tuning client-side middleware application which improves the execution times of Work Units (WUs) while maintaining an acceptable Maximum Response Time (MRT) for the end user. We have conducted extensive experimentation with the proposed system, and the results show clear speedup of BOINC applications using our optimized middleware as opposed to running with the original BOINC client.
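
    A minimal sketch of the adaptive-tuning idea is shown below: the worker's priority is raised while a measured response time stays under an acceptable maximum, and lowered otherwise. The probe function, the MRT budget, and the use of Unix nice values are assumptions made for illustration; they are not the middleware or the BOINC client code.

```python
# Toy adaptive priority-tuning loop for a compute worker process (Unix only).
import os
import time

ACCEPTABLE_MRT = 0.2          # seconds; assumed user-responsiveness budget
NICE_MIN, NICE_MAX = 0, 19

def measure_response_time():
    """Stand-in probe for interactive responsiveness."""
    start = time.monotonic()
    time.sleep(0.01)          # placeholder interactive operation
    return time.monotonic() - start

def tune(worker_pid, period=5.0, steps=10):
    nice = NICE_MAX           # start as the most polite guest
    for _ in range(steps):
        mrt = measure_response_time()
        if mrt < ACCEPTABLE_MRT and nice > NICE_MIN:
            nice -= 1         # machine is responsive: give the worker more CPU
        elif mrt >= ACCEPTABLE_MRT and nice < NICE_MAX:
            nice += 1         # user is impacted: back off
        # Note: lowering a nice value may require privileges on Linux.
        os.setpriority(os.PRIO_PROCESS, worker_pid, nice)
        time.sleep(period)
```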

  13. Architectural impact of FDDI network on scheduling hard real-time traffic

    NASA Technical Reports Server (NTRS)

    Agrawal, Gopal; Chen, Baio; Zhao, Wei; Davari, Sadegh

    1991-01-01

    The architectural impact on guaranteeing synchronous message deadlines in FDDI (Fiber Distributed Data Interface) token ring networks is examined. The FDDI network does not provide (global) priority arbitration, a useful facility for scheduling hard real-time activities. As a result, it was found that the worst-case utilization of synchronous traffic in an FDDI network can be far less than that in a centralized single-processor system. Nevertheless, a scheduling method is proposed and analyzed that can guarantee the deadlines of synchronous messages with traffic utilization of up to 33 percent, the highest reported to date.
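
    The kind of guarantee discussed here rests on how synchronous bandwidth is allocated under the timed-token protocol. The sketch below uses a normalized-proportional-style rule, where each stream's share of (TTRT - tau) is proportional to its utilization, together with a 1/3 utilization check; both are used only to illustrate the shape of such schemes and are not necessarily the exact method analyzed in the paper.

```python
# Illustrative synchronous-bandwidth allocation on an FDDI-like token ring.

def allocate_synchronous_bandwidth(streams, ttrt, tau):
    """streams: list of (C_i, P_i) message time / period pairs;
    ttrt: target token rotation time; tau: ring latency and overheads."""
    total_u = sum(c / p for c, p in streams)
    if total_u > 1.0 / 3.0:
        raise ValueError("utilization above 33%: deadlines cannot be "
                         "guaranteed by this allocation rule")
    usable = ttrt - tau
    # Each stream gets a share of the usable token rotation time in
    # proportion to its utilization.
    return [(c / p) / total_u * usable for c, p in streams]

if __name__ == "__main__":
    # Three synchronous streams (message time, period), in milliseconds.
    streams = [(1.0, 20.0), (2.0, 40.0), (0.5, 10.0)]
    print(allocate_synchronous_bandwidth(streams, ttrt=8.0, tau=1.0))
```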

  14. Departure Queue Prediction for Strategic and Tactical Surface Scheduler Integration

    NASA Technical Reports Server (NTRS)

    Zelinski, Shannon; Windhorst, Robert

    2016-01-01

    A departure metering concept to be demonstrated at Charlotte Douglas International Airport (CLT) will integrate strategic and tactical surface scheduling components to enable the respective collaborative decision making and improved efficiency benefits these two methods of scheduling provide. This study analyzes the effect of tactical scheduling on strategic scheduler predictability. Strategic queue predictions and target gate pushback times to achieve a desired queue length are compared between fast time simulations of CLT surface operations with and without tactical scheduling. The use of variable departure rates as a strategic scheduler input was shown to substantially improve queue predictions over static departure rates. With target queue length calibration, the strategic scheduler can be tuned to produce average delays within one minute of the tactical scheduler. However, root mean square differences between strategic and tactical delays were between 12 and 15 minutes due to the different methods the strategic and tactical schedulers use to predict takeoff times and generate gate pushback clearances. This demonstrates how difficult it is for the strategic scheduler to predict tactical scheduler assigned gate delays on an individual flight basis as the tactical scheduler adjusts departure sequence to accommodate arrival interactions. Strategic/tactical scheduler compatibility may be improved by providing more arrival information to the strategic scheduler and stabilizing tactical scheduler changes to runway sequence in response to arrivals.

  15. Memory-Based Structured Application Specific Integrated Circuit (ASIC) Study

    DTIC Science & Technology

    2008-10-01

    memory interface, arbiter/schedulers for rescheduling the memory requests according to some schedule policy, and memory channels for communicating...between the power-savings and the wakeup overhead with respect to both wakeup power and wakeup delay. For example, dream mode can save 50% more static...power than sleep mode, but at the expense of twice the wake delay and three times the wakeup energy. The user can specify power-gating modes for various components.

  16. A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2003-01-01

    Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization, and iterated sampling on two variants of a realistically sized model of the EOS scheduling problem. These are implemented as permutation-based methods, i.e., methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best, and random mutation operators outperform our squeaky-wheel (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
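
    The permutation-based search can be sketched as follows: a candidate solution is an ordering of observation requests, a greedy scheduler turns the ordering into a schedule and a value, and simulated annealing searches over orderings with a random swap mutation. The single-capacity model and the annealing parameters are toy assumptions, not the EOS model or settings used in the paper.

```python
# Minimal permutation-based simulated annealing for an over-subscribed schedule.
import math
import random

def greedy_value(order, requests, capacity):
    """Schedule requests in the given priority order while capacity remains.
    requests: list of (cost, priority) tuples."""
    used, value = 0.0, 0.0
    for idx in order:
        cost, priority = requests[idx]
        if used + cost <= capacity:
            used += cost
            value += priority
    return value

def anneal(requests, capacity, iters=5000, t0=1.0, cooling=0.999):
    order = list(range(len(requests)))
    random.shuffle(order)
    cur = best = greedy_value(order, requests, capacity)
    best_order, temp = order[:], t0
    for _ in range(iters):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]           # swap mutation
        new = greedy_value(order, requests, capacity)
        if new >= cur or random.random() < math.exp((new - cur) / temp):
            cur = new
            if new > best:
                best, best_order = new, order[:]
        else:
            order[i], order[j] = order[j], order[i]       # undo the swap
        temp *= cooling
    return best, best_order

if __name__ == "__main__":
    reqs = [(3, 5), (2, 4), (4, 4), (1, 1), (2, 3)]
    print(anneal(reqs, capacity=7))
```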

  17. Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks

    PubMed Central

    Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan

    2017-01-01

    Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor-node-specific requirements, often materialized as predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H2RTS), which combines a static, clock-driven method with a dynamic, event-driven scheduling technique in order to provide high execution predictability while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H2RTS, a set of sufficiency tests is introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller. PMID:28672856

  18. Enhanced round robin CPU scheduling with burst time based time quantum

    NASA Astrophysics Data System (ADS)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS), Round Robin (RR), Priority scheduling, and Shortest Job First (SJF) algorithms. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm depends greatly on the chosen time-quantum value. In this paper, we propose an enhanced algorithm, the Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ) process scheduling algorithm, which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
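
    The idea of a burst-time-based quantum can be illustrated with the sketch below, which recomputes the quantum each round as the mean remaining burst time of the processes in the ready queue. The exact quantum formula used by ERRBTQ may differ; this only contrasts an adaptive quantum with a fixed one.

```python
# Round robin with a per-round, burst-time-based time quantum.
from collections import deque

def burst_based_rr(bursts):
    """bursts: dict pid -> CPU burst time.
    Returns (completion order, turnaround time per process)."""
    remaining = dict(bursts)
    queue = deque(remaining)
    clock, order, turnaround = 0.0, [], {}
    while queue:
        # Quantum for this round: mean remaining burst of queued processes.
        quantum = sum(remaining[p] for p in queue) / len(queue)
        for _ in range(len(queue)):
            pid = queue.popleft()
            run = min(quantum, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] <= 1e-9:
                order.append(pid)
                turnaround[pid] = clock
            else:
                queue.append(pid)
    return order, turnaround

if __name__ == "__main__":
    print(burst_based_rr({'P1': 24, 'P2': 3, 'P3': 3}))
    # -> (['P2', 'P3', 'P1'], {'P2': 13.0, 'P3': 16.0, 'P1': 30.0})
```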

  19. Comparison of OPC job prioritization schemes to generate data for mask manufacturing

    NASA Astrophysics Data System (ADS)

    Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark

    2015-03-01

    Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet the cycle-time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of data processing, and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each of the scheduling algorithms will be tested in a production tape-out environment. Each scheduling algorithm will be judged on its ability to deliver data on schedule, and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms into the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.

  20. Setting conservation priorities.

    PubMed

    Wilson, Kerrie A; Carwardine, Josie; Possingham, Hugh P

    2009-04-01

    A generic framework for setting conservation priorities based on the principles of classic decision theory is provided. This framework encapsulates the key elements of any problem, including the objective, the constraints, and knowledge of the system. Within the context of this framework, the broad array of approaches for setting conservation priorities is reviewed. While some approaches prioritize assets or locations for conservation investment, it is concluded here that prioritization is incomplete without consideration of the conservation actions required to conserve the assets at particular locations. The challenges associated with prioritizing investments through time in the face of threats (and also spatially and temporally heterogeneous costs) can be eased by proper problem definition. Using the authors' general framework for setting conservation priorities, multiple criteria can be rationally integrated and where, how, and when to invest conservation resources can be scheduled. Trade-offs are unavoidable in priority setting when there are multiple considerations, and budgets are almost always finite. The authors discuss how trade-offs, risks, uncertainty, feedbacks, and learning can be explicitly evaluated within their generic framework for setting conservation priorities. Finally, they suggest ways that current priority-setting approaches may be improved.

  1. System-level power optimization for real-time distributed embedded systems

    NASA Astrophysics Data System (ADS)

    Luo, Jiong

    Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
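
    The flavor of energy-gradient-driven slack allocation can be illustrated with a toy sketch: each task's execution time at a candidate speed implies a dynamic energy (here a conventional E ~ W * f^2 model), and slack is handed out in small increments to the task whose marginal energy reduction is currently largest. The energy model, step size, and single-deadline framing are assumptions for illustration, not the thesis's exact formulation.

```python
# Toy energy-gradient-driven slack allocation for dynamic voltage scaling.

def allocate_slack(workloads, f_max, deadline, step=1e-3):
    """workloads: cycles per task; f_max: maximum frequency; deadline: total
    time budget for running the tasks back to back."""
    times = [w / f_max for w in workloads]           # start at full speed
    slack = deadline - sum(times)
    if slack < 0:
        raise ValueError("task set infeasible even at maximum frequency")

    def gradient(i):
        # With E_i = W_i^3 / t_i^2, the energy reduction per unit of extra
        # time is 2 * W_i^3 / t_i^3.
        return 2.0 * workloads[i] ** 3 / times[i] ** 3

    while slack > step:
        i = max(range(len(times)), key=gradient)     # biggest energy win
        times[i] += step
        slack -= step
    freqs = [w / t for w, t in zip(workloads, times)]
    energy = sum(w * f * f for w, f in zip(workloads, freqs))
    return freqs, energy

if __name__ == "__main__":
    print(allocate_slack(workloads=[2.0, 1.0], f_max=1.0, deadline=4.0))
```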

  2. Doing more with less - The new way of exploring the solar system

    NASA Technical Reports Server (NTRS)

    Ridenoure, Rex

    1992-01-01

    Exploration of the solar system is considered in the light of existing economic factors and scientific priorities, and a general blueprint for an exploration strategy is set forth. Attention is given to mission costs, typical schedules, and the scientific findings of typical projects which create the need for collaboration and diversification in mission development. The combined technologies and cooperative efforts of several small organizations can lead to missions with short schedules and low costs.

  3. Doing more with less - The new way of exploring the solar system

    NASA Astrophysics Data System (ADS)

    Ridenoure, Rex

    1992-08-01

    Exploration of the solar system is considered in the light of existing economic factors and scientific priorities, and a general blueprint for an exploration strategy is set forth. Attention is given to mission costs, typical schedules, and the scientific findings of typical projects which create the need for collaboration and diversification in mission development. The combined technologies and cooperative efforts of several small organizations can lead to missions with short schedules and low costs.

  4. Managing Contention and Timing Constraints in a Real-Time Database System

    DTIC Science & Technology

    1995-01-01

    In order to realize many of these goals, StarBase is constructed on top of RT-Mach, a real-time operating system developed at Carnegie Mellon...University [11]. StarBase differs from previous RT-DBMS work [1, 2, 3] in that a) it relies on a real-time operating system which provides priority...CPU and resource scheduling provided by the underlying real-time operating system. Issues of data contention are dealt with by use of a priority

  5. Ada Quality and Style: Guidelines for Professional Programmers, Version 02.01.01

    DTIC Science & Technology

    1992-12-01

    47, 78, 79 predicate queue, entry not prioritized, 95 as function name, 22 for boolean object, 21 R preemptive scheduling. 118 race condition. 49...when lower priority tasks are given service while higher priority tasks remain blocked. In the above example, this occurred because entry queues are...from an entry queue due to execution of an abort statement as well as expiration of a timed entry call. The use of this

  6. Short-term scheduling of an open-pit mine with multiple objectives

    NASA Astrophysics Data System (ADS)

    Blom, Michelle; Pearce, Adrian R.; Stuckey, Peter J.

    2017-05-01

    This article presents a novel algorithm for the generation of multiple short-term production schedules for an open-pit mine, in which several objectives, of varying priority, characterize the quality of each solution. A short-term schedule selects regions of a mine site, known as 'blocks', to be extracted in each week of a planning horizon (typically spanning 13 weeks). Existing tools for constructing these schedules use greedy heuristics, with little optimization. To construct a single schedule in which infrastructure is sufficiently utilized, with production grades consistently close to a desired target, a planner must often run these heuristics many times, adjusting parameters after each iteration. A planner's intuition and experience can evaluate the relative quality and mineability of different schedules in a way that is difficult to automate. Of interest to a short-term planner is the generation of multiple schedules, extracting available ore and waste in varying sequences, which can then be manually compared. This article presents a tool in which multiple, diverse, short-term schedules are constructed, meeting a range of common objectives without the need for iterative parameter adjustment.

  7. Evaluation of concurrent priority queue algorithms. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Q.

    1991-02-01

    The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
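
    For context, the simple baseline that such fine-grained designs aim to out-scale is a coarse-grained, lock-protected heap, in which every operation serializes on a single lock. The sketch below is that baseline, not the parallel Fibonacci heap or the concurrent priority pool described in the thesis.

```python
# Coarse-grained, lock-protected binary heap: the simple concurrent baseline.
import heapq
import threading

class LockedPriorityQueue:
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def insert(self, priority, item):
        with self._lock:                      # every thread contends here
            heapq.heappush(self._heap, (priority, item))

    def delete_min(self):
        with self._lock:
            if not self._heap:
                return None
            return heapq.heappop(self._heap)

if __name__ == "__main__":
    q = LockedPriorityQueue()
    for p, task in [(3, 'c'), (1, 'a'), (2, 'b')]:
        q.insert(p, task)
    print([q.delete_min() for _ in range(3)])   # [(1,'a'), (2,'b'), (3,'c')]
```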

  8. Application of precomputed control laws in a reconfigurable aircraft flight control system

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Halyo, Nesim; Broussard, John R.; Caglayan, Alper K.

    1989-01-01

    A self-repairing flight control system concept in which the control law is reconfigured after actuator and/or control surface damage to preserve stability and pilot command tracking is described. A key feature of the controller is reconfigurable multivariable feedback. The feedback gains are designed off-line and scheduled as a function of the aircraft control impairment status so that reconfiguration is performed simply by updating the gain schedule after detection of an impairment. A novel aspect of the gain schedule design procedure is that the schedule is calculated using a linear quadratic optimization-based simultaneous stabilization algorithm in which the scheduled gain is constrained to stabilize a collection of plant models representing the aircraft in various control failure modes. A description and numerical evaluation of a controller design for a model of a statically unstable high-performance aircraft are given.

  9. 78 FR 79027 - Product Change-Priority Mail Express Negotiated Service Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ...The Postal Service gives notice of filing a request with the Postal Regulatory Commission to add a domestic shipping services contract to the list of Negotiated Service Agreements in the Mail Classification Schedule's Competitive Products List.

  10. 78 FR 79027 - Product Change-Priority Mail Negotiated Service Agreement

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ...The Postal Service gives notice of filing a request with the Postal Regulatory Commission to add a domestic shipping services contract to the list of Negotiated Service Agreements in the Mail Classification Schedule's Competitive Products List.

  11. SPANR planning and scheduling

    NASA Astrophysics Data System (ADS)

    Freund, Richard F.; Braun, Tracy D.; Kussow, Matthew; Godfrey, Michael; Koyama, Terry

    2001-07-01

    SPANR (Schedule, Plan, Assess Networked Resources) is (i) a pre-run, off-line planning mechanism and (ii) a runtime, just-in-time scheduling mechanism. It is designed primarily to support commercial applications in that it optimizes throughput rather than individual jobs (unless they have the highest priority). Thus it is a tool for a commercial production manager to maximize total work. First, the SPANR Planner is presented, showing its ability to do predictive 'what-if' planning. It can answer such questions as (i) what is the overall effect of acquiring new hardware, or (ii) what would be the effect of a different scheduler. The ability of the SPANR Planner to formulate tree-trimming strategies in advance is useful in several commercial applications, such as electronic design or pharmaceutical simulations. The SPANR Planner is demonstrated using a variety of benchmarks. The SPANR Runtime Scheduler (RS) is briefly presented. The SPANR RS can provide benefits for several commercial applications, such as airframe design and financial applications. Finally, a design is shown whereby SPANR can provide scheduling advice to most resource management systems.

  12. Headspace needle-trap analysis of priority volatile organic compounds from aqueous samples: application to the analysis of natural and waste waters.

    PubMed

    Alonso, Monica; Cerdan, Laura; Godayol, Anna; Anticó, Enriqueta; Sanchez, Juan M

    2011-11-11

    Combining headspace (HS) sampling with a needle-trap device (NTD) to determine priority volatile organic compounds (VOCs) in water samples results in improved sensitivity and efficiency when compared to conventional static HS sampling. A 22 gauge stainless steel, 51-mm needle packed with Tenax TA and Carboxen 1000 particles is used as the NTD. Three different HS-NTD sampling methodologies are evaluated and all give limits of detection for the target VOCs in the ng L⁻¹ range. Active (purge-and-trap) HS-NTD sampling is found to give the best sensitivity but requires exhaustive control of the sampling conditions. The use of the NTD to collect the headspace gas sample results in a combined adsorption/desorption mechanism. The testing of different temperatures for the HS thermostating reveals a greater desorption effect when the sample is allowed to diffuse, whether passively or actively, through the sorbent particles. The limits of detection obtained in the simplest sampling methodology, static HS-NTD (5 mL aqueous sample in 20 mL HS vials, thermostating at 50 °C for 30 min with agitation), are sufficiently low as to permit its application to the analysis of 18 priority VOCs in natural and waste waters. In all cases compounds were detected below regulated levels. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Student Services for a New Breed

    ERIC Educational Resources Information Center

    Simmons, Howard L.; Kochey, Kenneth C.

    1975-01-01

    Today's student needs services sensitive to his priorities of work and economic security. Suggested services include: transportation services, food services, financial aid for basic physical needs, flexible scheduling, facilities for "lifetime" sports activities, counselors located at community centers, cooperative arrangements with local cultural…

  14. Problem area descriptions : motor vehicle crashes - data analysis and IVI program analysis

    DOT National Transportation Integrated Search

    In general, the IVI program focuses on the more significant safety problem categories as indicated by statistical analyses of crash data. However, other factors were considered in setting program priorities and schedules. For some problem areas, ...

  15. 48 CFR 42.302 - Contract administration functions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Perform preaward surveys (see Subpart 9.1). (33) Advise and assist contractors regarding their priorities... with contractual terms for schedule, cost, and technical performance in the areas of design... efforts and management systems that relate to design, development, production, engineering changes...

  16. 48 CFR 42.302 - Contract administration functions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Perform preaward surveys (see Subpart 9.1). (33) Advise and assist contractors regarding their priorities... with contractual terms for schedule, cost, and technical performance in the areas of design... efforts and management systems that relate to design, development, production, engineering changes...

  17. 48 CFR 42.302 - Contract administration functions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Perform preaward surveys (see Subpart 9.1). (33) Advise and assist contractors regarding their priorities... with contractual terms for schedule, cost, and technical performance in the areas of design... efforts and management systems that relate to design, development, production, engineering changes...

  18. 48 CFR 42.302 - Contract administration functions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) Perform preaward surveys (see Subpart 9.1). (33) Advise and assist contractors regarding their priorities... with contractual terms for schedule, cost, and technical performance in the areas of design... efforts and management systems that relate to design, development, production, engineering changes...

  19. Impact of rest breaks on musculoskeletal discomfort of Chikan embroiderers of West Bengal, India: a follow up field study

    PubMed Central

    Chakrabarty, Sabarni; Sarkar, Krishnendu; Dev, Samrat; Das, Tamal; Mitra, Kalpita; Sahu, Subhashis; Gangopadhyay, Somnath

    2016-01-01

    Objectives: This study aimed to determine risk factors that predict musculoskeletal discomfort in Chikan embroiderers of West Bengal, India, and to compare the effect of two rest break schedules on reducing these symptoms. Methods: The Nordic musculoskeletal questionnaire was administered to 400 Chikan embroiderers at baseline, containing questions on job autonomy, working behavior, and work stress factors. Relative risk was calculated to identify prognostic factors for musculoskeletal discomfort in different body regions. Two groups of workers received two rest break schedules for 4 months and were compared in a between-subject design. Outcome variables were scores on the Body Part Discomfort (BPD) scale. Results: Chikan embroiderers are afflicted with musculoskeletal discomfort mainly in the lower back, neck/shoulder, and wrist/forearm regions, which is attributed to their prolonged working time involving the hands and wrists in a static seated posture. Rigidity in working methods, prolonged working time, inadequate rest breaks during the working day, dissatisfaction regarding earnings, monotonous work, static sitting posture, and repetitive movement of the wrist and forearm were the significant predictors of these symptom developments. Rest break schedule 1, with more frequent and shorter breaks, produced a more significant improvement in the severity of these musculoskeletal discomforts. Conclusions: Chikan embroiderers perform a highly dreary occupation, and various ergonomic conditions act as predictors for developing musculoskeletal discomforts among them. A properly designed rest break schedule involving shorter and more frequent breaks was effective in reducing these discomforts to a certain extent. PMID:27265529

  20. Individual differences in strategic flight management and scheduling

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Raby, Mireille

    1991-01-01

    A group of 30 instrument-rated pilots flew simulator approaches to three airports under low, medium, and high workload conditions. An analysis is conducted of the differences in discrete task scheduling between the 10 highest and 10 lowest performing pilots in the sample; this categorization was based on the mean of various flight-profile measures. The two groups were found to differ from each other only in terms of when specific events were conducted and the optimality of scheduling for certain high-priority tasks. These results are assessed in view of the relative independence of task-management skills from aircraft-control skills.

  1. Current Development Status of an Integrated Tool for Modeling Quasi-static Deformation in the Solid Earth

    NASA Astrophysics Data System (ADS)

    Williams, C. A.; Dicaprio, C.; Simons, M.

    2003-12-01

    With the advent of projects such as the Plate Boundary Observatory and future InSAR missions, spatially dense geodetic data of high quality will provide an increasingly detailed picture of the movement of the earth's surface. To interpret such information, powerful and easily accessible modeling tools are required. We are presently developing such a tool that we feel will meet many of the needs for evaluating quasi-static earth deformation. As a starting point, we begin with a modified version of the finite element code TECTON, which has been specifically designed to solve tectonic problems involving faulting and viscoelastic/plastic earth behavior. As our first priority, we are integrating the code into the GeoFramework, which is an extension of the Python-based Pyre modeling framework. The goal of this framework is to provide simplified user interfaces for powerful modeling codes, to provide easy access to utilities such as meshers and visualization tools, and to provide a tight integration between different modeling tools so they can interact with each other. The initial integration of the code into this framework is essentially complete, and a more thorough integration, where Python-based drivers control the entire solution, will be completed in the near future. We have an evolving set of priorities that we expect to solidify as we receive more input from the modeling community. Current priorities include the development of linear and quadratic tetrahedral elements, the development of a parallelized version of the code using the PETSc libraries, the addition of more complex rheologies, realistic fault friction models, adaptive time stepping, and spherical geometries. In this presentation we describe current progress toward our various priorities, briefly describe the structure of the code within the GeoFramework, and demonstrate some sample applications.

  2. Punctuations and Agendas: A New Look at Local Government Budget Expenditures

    ERIC Educational Resources Information Center

    Jordan, Meagan M.

    2003-01-01

    Punctuated equilibrium theory (PET) is an agenda-based theory that offers a theoretical foundation for large budget shifts. PET emphasizes that the static, incremental nature of agendas is occasionally interrupted by punctuations. These punctuations indicate shifts in priority among the agenda items, and with those agenda shifts come trade-offs.…

  3. Ready to Renovate

    ERIC Educational Resources Information Center

    Kennedy, Mike

    2008-01-01

    Schools and universities should have a thorough plan that sets priorities for the most pressing facility renovations. With remedial programs, enrichment offerings, recreational activities and extended-year schedules, many school facilities are no longer dormant in the summer months. The increased year-round use of education facilities benefits…

  4. ARS irrigation research priorities and projects-An update

    USDA-ARS?s Scientific Manuscript database

    The USDA Agricultural Research Service focuses on six areas of research that are crucial to safe and effective use of all water resources for agricultural production: 1) Irrigation Scheduling Technologies for Water Productivity; 2) Water Productivity (WP) at Multiple Scales; 3) Irrigation Applicatio...

  5. A new intuitionistic fuzzy rule-based decision-making system for an operating system process scheduler.

    PubMed

    Butt, Muhammad Arif; Akram, Muhammad

    2016-01-01

    We present a new intuitionistic fuzzy rule-based decision-making system based on intuitionistic fuzzy sets for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm inputs the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of every process in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
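
    A much-simplified sketch of turning a nice value and a burst time into a dynamic priority with fuzzy rules is shown below. It uses plain fuzzy sets and a Sugeno-style weighted average rather than the paper's intuitionistic machinery; the membership shapes and rule outputs are illustrative assumptions.

```python
# Simplified fuzzy-rule dynamic priority from nice value and burst time.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dynamic_priority(nice, burst):
    """nice in [-20, 19], burst in (0, 100] (arbitrary units); higher dp wins."""
    low_nice    = tri(nice, -21, -20, 20)    # strong membership for low nice
    high_nice   = tri(nice, -20, 19, 20)
    short_burst = tri(burst, 0, 1, 100)
    long_burst  = tri(burst, 0, 100, 101)
    # Rule base: (firing strength, rule output dp).
    rules = [
        (min(low_nice,  short_burst), 0.9),   # favoured: run soon
        (min(low_nice,  long_burst),  0.6),
        (min(high_nice, short_burst), 0.5),
        (min(high_nice, long_burst),  0.1),   # least favoured
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

if __name__ == "__main__":
    ready_queue = [('p1', 0, 10), ('p2', 10, 50), ('p3', -5, 80)]
    ranked = sorted(ready_queue, key=lambda p: dynamic_priority(p[1], p[2]),
                    reverse=True)
    print(ranked)   # process with the highest dp is dispatched first
```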

  6. A Model for Speedup of Parallel Programs

    DTIC Science & Technology

    1997-01-01

    Sanjeev K. Setia. The interaction between memory allocation and adaptive partitioning in message-passing multicomputers. In IPPS '95 Workshop on Job Scheduling Strategies for Parallel Processing, pages 89-99, 1995. [15] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  7. Five Strategies of Successful Part-Time Work.

    ERIC Educational Resources Information Center

    Corwin, Vivien; Lawrence, Thomas B.; Frost, Peter J.

    2001-01-01

    Identifies commonalities in the approaches of successful part-time professionals. Discusses five strategies for success: (1) communicating work-life priorities and schedules to the organization; (2) making the business case for part-time arrangements; (3) establishing time management routines; (4) cultivating advocates in senior management; and…

  8. Future Research Priorities for Morbidity Control of Lymphedema.

    PubMed

    Narahari, S R; Aggithaya, Madhur Guruprasad; Moffatt, Christine; Ryan, T J; Keeley, Vaughan; Vijaya, B; Rajendran, P; Karalam, S B; Rajagopala, S; Kumar, N K; Bose, K S; Sushma, K V

    2017-01-01

    Innovation in the treatment of lower extremity lymphedema has received low priority from governments and the pharmaceutical industry. Advancing lymphedema is irreversible and initiates fibrosis in the dermis and reactive changes in the epidermis and subcutis. Most medical treatments offered for lymphedema are either too demanding with a less than satisfactory response, or patients have low concordance due to complex schedules. A priority setting partnership (PSP) was established to decide on future priorities in lymphedema research. A table of abstracts from a literature search was published on the workshop website. Stakeholders were requested to upload their priorities. Their questions were listed, randomized, and sent to lymphologists for ranking. The ten highest-ranked research priorities, obtained through median scores, were presented at the final prioritization workshop attended by invited stakeholders. A free medical camp was organized during the workshop to understand patients' priorities. One hundred research priorities were selected from the priorities uploaded to the website. Ten priorities were shortlisted through a peer review process involving 12 lymphologists for final discussion. They related to simplification of integrative treatment for lymphedema, cellular changes in lymphedema and mechanisms of their reversal, eliminating bacterial entry lesions to reduce cellulitis episodes, exploring evidence for therapies in traditional medicine, improving patient concordance with compression therapy, the epidemiology of lymphatic filariasis (LF), and the economic benefit of integrative treatments of lymphedema. A robust research priority setting process, organized as described in the James Lind Alliance guidebook, identified seven priority areas to achieve effective morbidity control of lymphedema, including LF. All stakeholders, including the Department of Health Research, Government of India, participated in the PSP.

  9. Future Research Priorities for Morbidity Control of Lymphedema

    PubMed Central

    Narahari, S R; Aggithaya, Madhur Guruprasad; Moffatt, Christine; Ryan, T J; Keeley, Vaughan; Vijaya, B; Rajendran, P; Karalam, S B; Rajagopala, S; Kumar, N K; Bose, K S; Sushma, K V

    2017-01-01

    Background: Innovation in the treatment of lower extremity lymphedema has received low priority from governments and the pharmaceutical industry. Advancing lymphedema is irreversible and initiates fibrosis in the dermis and reactive changes in the epidermis and subcutis. Most medical treatments offered for lymphedema are either too demanding with a less than satisfactory response, or patients have low concordance due to complex schedules. A priority setting partnership (PSP) was established to decide on future priorities in lymphedema research. Methods: A table of abstracts from a literature search was published on the workshop website. Stakeholders were requested to upload their priorities. Their questions were listed, randomized, and sent to lymphologists for ranking. The ten highest-ranked research priorities, obtained through median scores, were presented at the final prioritization workshop attended by invited stakeholders. A free medical camp was organized during the workshop to understand patients' priorities. Results: One hundred research priorities were selected from the priorities uploaded to the website. Ten priorities were shortlisted through a peer review process involving 12 lymphologists for final discussion. They related to simplification of integrative treatment for lymphedema, cellular changes in lymphedema and mechanisms of their reversal, eliminating bacterial entry lesions to reduce cellulitis episodes, exploring evidence for therapies in traditional medicine, improving patient concordance with compression therapy, the epidemiology of lymphatic filariasis (LF), and the economic benefit of integrative treatments of lymphedema. Conclusion: A robust research priority setting process, organized as described in the James Lind Alliance guidebook, identified seven priority areas to achieve effective morbidity control of lymphedema, including LF. All stakeholders, including the Department of Health Research, Government of India, participated in the PSP. PMID:28216723

  10. Automatic Scheduling and Planning (ASAP) in future ground control systems

    NASA Technical Reports Server (NTRS)

    Matlin, Sam

    1988-01-01

    This report describes two complementary approaches to the problem of space mission planning and scheduling. The first is an Expert System or Knowledge-Based System for automatically resolving most of the activity conflicts in a candidate plan. The second is an Interactive Graphics Decision Aid to assist the operator in manually resolving the residual conflicts which are beyond the scope of the Expert System. The two system designs are consistent with future ground control station activity requirements, support activity timing constraints, resource limits and activity priority guidelines.

  11. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler was used to transform programs in a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
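
    Loop coalescing, one of the restructuring transformations named above, can be illustrated with a small example: a doubly nested loop over an N x M iteration space is rewritten as a single loop of N*M iterations, which makes it straightforward to hand out contiguous chunks of iterations to processors. The example below is a generic illustration, not code from the dissertation.

```python
# Loop coalescing: flatten a 2-D iteration space so it can be scheduled
# as one iteration range.

N, M = 4, 6
A = [[0] * M for _ in range(N)]

# Original nested loop.
for i in range(N):
    for j in range(M):
        A[i][j] = i + j

# Coalesced form: one loop, original indices recovered from the flat index k.
B = [[0] * M for _ in range(N)]
for k in range(N * M):
    i, j = divmod(k, M)
    B[i][j] = i + j

assert A == B

# The flat index space is now trivial to partition, e.g. statically across
# P processors:
P = 3
chunks = [range(p * (N * M) // P, (p + 1) * (N * M) // P) for p in range(P)]
print([list(c) for c in chunks])
```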

  12. Autonomous Hybrid Priority Queueing for Scheduling Residential Energy Demands

    NASA Astrophysics Data System (ADS)

    Kalimullah, I. Q.; Shamroukh, M.; Sahar, N.; Shetty, S.

    2017-05-01

    The advent of smart grid technologies has opened up opportunities to manage the energy consumption of users within a residential smart grid system. Demand response management in particular is being employed to reduce the overall load on an electricity network, which could in turn reduce outages and electricity costs. The objective of this paper is to develop an intelligent scheduler that optimizes the energy available to a microgrid through a hybrid queueing algorithm centered on consumers' energy demands. This is achieved by shifting certain schedulable appliance loads to light-load hours. Factors such as the type of demand, grid load, and consumers' energy usage patterns and preferences are considered while formulating the logical constraints required by the algorithm. The resulting algorithm is implemented in the MATLAB workspace to simulate its execution by an Energy Consumption Scheduler (ECS) found within smart meters, which automatically finds the optimal energy consumption schedule tailor-made for each consumer within the microgrid network.
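
    The load-shifting idea can be sketched very simply: each shiftable appliance is greedily placed in the currently lightest-loaded hour within its allowed window, largest loads first. The hourly load forecast and appliance fields below are illustrative assumptions, not the paper's MATLAB model or its exact queueing formulation.

```python
# Toy sketch of shifting schedulable appliance loads into light-load hours.

def schedule_appliances(base_load, appliances):
    """base_load: forecast grid load per hour (index 0-23);
    appliances: list of dicts {'name', 'kwh', 'window': set of allowed hours}."""
    load = list(base_load)
    plan = {}
    for app in sorted(appliances, key=lambda a: -a['kwh']):   # biggest first
        hour = min(app['window'], key=lambda h: load[h])      # lightest slot
        plan[app['name']] = hour
        load[hour] += app['kwh']
    return plan, load

if __name__ == "__main__":
    forecast = [5.0] * 6 + [9.0] * 12 + [6.0] * 6             # off-peak nights
    appliances = [
        {'name': 'washer',     'kwh': 1.2, 'window': set(range(0, 24))},
        {'name': 'ev_charger', 'kwh': 7.0, 'window': set(range(0, 8))},
    ]
    print(schedule_appliances(forecast, appliances))
```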

  13. Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates, and the total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high-quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness into a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are then improved by an exchange operation. Results are presented for generated instances.
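
    A sketch of ATC-based dispatching on parallel machines is given below: whenever a machine becomes free, the released, unscheduled job with the highest ATC index is started on it. The look-ahead parameter k and the treatment of release times follow common textbook choices and are not necessarily those used in the Meta-RaPS-ATC implementation.

```python
# ATC-rule dispatching on parallel machines (tankers).
import math

def atc_index(job, t, k, p_bar):
    """ATC priority index at time t for a job with weight w, processing time p,
    and due date d: (w/p) * exp(-max(d - p - t, 0) / (k * p_bar))."""
    w, p, d = job['weight'], job['proc'], job['due']
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * p_bar))

def atc_dispatch(jobs, machines, k=2.0):
    p_bar = sum(j['proc'] for j in jobs) / len(jobs)
    free_at = [0.0] * machines
    pending = list(jobs)
    schedule = []
    while pending:
        m = min(range(machines), key=lambda i: free_at[i])
        t = free_at[m]
        released = [j for j in pending if j['release'] <= t]
        if not released:                        # idle until the next release
            t = min(j['release'] for j in pending)
            released = [j for j in pending if j['release'] <= t]
        job = max(released, key=lambda j: atc_index(j, t, k, p_bar))
        start = max(t, job['release'])
        finish = start + job['proc']
        schedule.append((job['id'], m, start, finish))
        free_at[m] = finish
        pending.remove(job)
    return schedule

if __name__ == "__main__":
    jobs = [
        {'id': 'F1', 'release': 0, 'proc': 4, 'due': 6,  'weight': 2},
        {'id': 'F2', 'release': 1, 'proc': 3, 'due': 5,  'weight': 3},
        {'id': 'F3', 'release': 0, 'proc': 2, 'due': 10, 'weight': 1},
    ]
    print(atc_dispatch(jobs, machines=2))
```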

  14. WATER EGRESS

    NASA Image and Video Library

    1965-07-16

    S65-28459 (16 July 1965) --- Astronaut Neil A. Armstrong, command pilot for the Gemini-5 backup crew, inside the Gemini Static Article 5 spacecraft prior to water egress training in the Gulf of Mexico. The training is part of the prelaunch schedule for prime and backup crew on the Gemini-5 mission.

  15. A new task scheduling algorithm based on value and time for cloud platform

    NASA Astrophysics Data System (ADS)

    Kuang, Ling; Zhang, Lichen

    2017-08-01

    Task scheduling, a key part of increasing resource utilization and enhancing system performance, is a perennial problem, especially on cloud platforms. Based on the value density algorithm used in real-time task scheduling systems and the characteristics of distributed systems, and after further study of cloud technology and real-time systems, this paper presents a new task scheduling algorithm: Least Level Value Density First (LLVDF). The algorithm not only introduces time and value attributes for tasks, but also describes the weighting relationships between these attributes mathematically. This feature gives it an advantage in distinguishing between different tasks more dynamically and more reasonably. When the scheme is used to compute priorities for dynamic task scheduling on a cloud platform, this advantage allows it to schedule and differentiate large numbers and many kinds of tasks more efficiently. The paper designs experiments, using distributed server simulation models based on the M/M/C queueing model with negative arrivals, to compare the algorithm against traditional algorithms and to demonstrate its characteristics and advantages.
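
    The general shape of a value-and-time-based priority can be illustrated as below: the classic value density (value per unit of remaining execution time) is weighted by how close the task is to its deadline. The weighting used here is an assumption chosen only to show the idea; it is not LLVDF's actual formula.

```python
# Illustrative value-and-time priority for dynamic task scheduling.

def priority(task, now, alpha=1.0, beta=1.0):
    """task: dict with 'value', 'remaining' (execution time left), 'deadline';
    larger return values are scheduled first."""
    density = task['value'] / task['remaining']
    slack = max(task['deadline'] - now - task['remaining'], 0.0)
    urgency = 1.0 / (1.0 + slack)          # grows as the deadline approaches
    return alpha * density * (urgency ** beta)

if __name__ == "__main__":
    now = 0.0
    tasks = [
        {'id': 'T1', 'value': 10.0, 'remaining': 5.0, 'deadline': 20.0},
        {'id': 'T2', 'value': 4.0,  'remaining': 1.0, 'deadline': 3.0},
    ]
    ready = sorted(tasks, key=lambda t: priority(t, now), reverse=True)
    print([t['id'] for t in ready])        # the urgent, dense T2 comes first
```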

  16. A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Taimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new manufacturing system architectures have been proposed to realize flexible control structures that can cope with dynamic changes in the volume and variety of products, as well as unforeseen disruptions such as failures of manufacturing resources and interruptions by high-priority jobs. These are known as the autonomous distributed manufacturing system, the biological manufacturing system, and the holonic manufacturing system. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in a previous report. However, problems remain from the viewpoint of optimizing the overall production schedule. New procedures are proposed in the present paper to select production schedules, aimed at generating effective production schedules in real time. The proposed methods enable individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons is also proposed, which carries out the coordination based on the effectiveness values of the individual holons.

  17. U. S. fusion programs: Struggling to stay in the game

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, M.

    Funding for the US fusion energy program has suffered and will probably continue to suffer major cuts. A committee hand-picked by Energy Secretary James Watkins urged the Department of Energy to mount an aggressive program to develop fusion power, but Congress cut funding from $323 million in 1990 to $275 million in 1991. This portends dire conditions for fusion research and development. Top priority goes to tokamak projects and to keeping the next big machine, the Burning Plasma Experiment, scheduled to begin construction in 1993, on schedule. Secretary Watkins is said to want to keep the International Thermonuclear Energy Reactor (ITER) on schedule. ITER would follow the Burning Plasma Experiment.

  18. Surveying School Facilities Needs.

    ERIC Educational Resources Information Center

    Weichel, Harry J.; Dennell, James

    1990-01-01

    Ralston (Nebraska) Public School District's communitywide survey helped set school facilities priorities while keeping the district's finite resources firmly in mind. With an outline of maintenance costs for the next 10 years, the district can develop a strategic construction schedule. The board also has the option of financing projects through a…

  19. Positive Change through a Credential Process

    ERIC Educational Resources Information Center

    Williams, Tinnycua

    2018-01-01

    Studies have demonstrated the significance of afterschool staff development and have attempted to show the impacts of staff training on program quality and youth outcomes. Professional development, though necessary, wasn't always a priority for the author, especially if training hours conflicted with the author's afterschool program schedule.…

  20. Austin Community College Comprehensive Master Plan, 2000-2001.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX.

    This document describes Austin Community College's (Texas) educational academic plans, facilities plans, and financial implementation plans for 2000-2001. Plan goals and priorities include: (1) enhancing scheduling efficiency while responding to unmet student demand and community needs, and increasing enrollments by 3-5%; (2) opening two…

  1. Scheduling with hop-by-hop priority increasing in meshed optical burst-switched network

    NASA Astrophysics Data System (ADS)

    Chang, Hao; Luo, Jiangtao; Zhang, Zhizhong; Xia, Da; Gong, Jue

    2006-09-01

    In OBS, JET (Just-Enough-Time) is the classical wavelength reservation scheme. However, burst priority effectively decreases hop by hop in multi-hop networks, which wastes the bandwidth already consumed upstream. Based on the HPI (Hop-by-hop Priority Increasing) scheme proposed in earlier research, this paper carries out a simulation on a 4×4 meshed topology, which is closer to a real network environment, using an NS2-based OBS network simulation platform constructed by the authors. By comparing the drop probability and throughput on one of the longest end-to-end paths in the network, it is shown that the HPI scheme better improves bandwidth utilization.

  2. Collaborative Resource Allocation

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester

    2007-01-01

    Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.

  3. Toward an Autonomous Telescope Network: the TBT Scheduler

    NASA Astrophysics Data System (ADS)

    Racero, E.; Ibarra, A.; Ocaña, F.; de Lis, S. B.; Ponz, J. D.; Castillo, M.; Sánchez-Portal, M.

    2015-09-01

    Within the ESA SSA program, it is foreseen to deploy several robotic telescopes to provide surveillance and tracking services for hazardous objects. The TBT project will procure a validation platform for an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor SSA services. In this context, the planning and scheduling of the night consists of two software modules, the TBT Scheduler, that will allow the manual and autonomous planning of the night, and the control of the real-time response of the system, done by the RTS2 internal scheduler. The TBT Scheduler allocates tasks for both telescopes without human intervention. Every night it takes all the inputs needed and prepares the schedule following some predefined rules. The main purpose of the scheduler is the distribution of the time for follow-up of recently discovered targets and surveys. The TBT Scheduler considers the overall performance of the system, and combine follow-up with a priori survey strategies for both kind of objects. The strategy is defined according to the expected combined performance for both systems the upcoming night (weather, sky brightness, object accessibility and priority). Therefore, TBT Scheduler defines the global approach for the network and relies on the RTS2 internal scheduler for the final detailed distribution of tasks at each sensor.

  4. Electrolysis Performance Improvement and Validation Experiment

    NASA Technical Reports Server (NTRS)

    Schubert, Franz H.

    1992-01-01

    Viewgraphs on electrolysis performance improvement and validation experiment are presented. Topics covered include: water electrolysis: an ever increasing need/role for space missions; static feed electrolysis (SFE) technology: a concept developed for space applications; experiment objectives: why test in microgravity environment; and experiment description: approach, hardware description, test sequence and schedule.

  5. Report of the Terrestrial Bodies Science Working Group. Volume 3: Venus

    NASA Technical Reports Server (NTRS)

    Kaula, W. M.; Malin, M. C.; Masursky, H.; Pettengill, G.; Prinn, R.; Young, R. E.

    1977-01-01

    The science objectives of Pioneer Venus and future investigations of the planet are discussed. Concepts and payloads for proposed missions and the supporting research and technology required to obtain the desired measurements from space and Earth-based observations are examined, as well as mission priorities and schedules.

  6. Adaptations in Electronic Structure Calculations in Heterogeneous Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamudupula, Sai

    Modern quantum chemistry deals with electronic structure calculations of unprecedented complexity and accuracy. They demand the full power of high-performance computing and must be in tune with the given architecture for superior efficiency. To make such applications resource-aware, it is desirable to enable their static and dynamic adaptations using some external software (middleware), which may monitor both system availability and application needs, rather than mix science with system-related calls inside the application. The present work investigates scientific application interlinking with middleware based on the example of the computational chemistry package GAMESS and the middleware NICAN. The existing synchronous model is limited by the possible delays due to the middleware processing time under sustainable runtime system conditions. Proposed asynchronous and hybrid models aim at overcoming this limitation. When linked with NICAN, the fragment molecular orbital (FMO) method is capable of adapting its fragment scheduling policy statically and dynamically based on the computing platform conditions. Significant execution time and throughput gains have been obtained due to such static adaptations when the compute nodes have very different core counts. Dynamic adaptations are based on the main memory availability at run time. NICAN prompts FMO to postpone scheduling certain fragments if there is not enough memory for their immediate execution. Hence, FMO may be able to complete the calculations, whereas without such adaptations it aborts.
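
    The dynamic-adaptation idea above (postpone fragments whose footprint exceeds the memory currently available) can be sketched as follows. The memory estimates, the small-fragments-first ordering, and the function name are illustrative assumptions rather than the GAMESS/NICAN interface.

        # Sketch: dispatch fragments that fit into currently available memory and
        # postpone the rest until memory is released (illustrative only).

        def schedule_fragments(fragments, available_memory):
            """fragments: list of (name, estimated_memory_mb). Returns the fragments
            dispatched now and those postponed for a later pass."""
            dispatched, postponed = [], []
            remaining = available_memory
            for name, mem in sorted(fragments, key=lambda f: f[1]):  # small fragments first
                if mem <= remaining:
                    dispatched.append(name)
                    remaining -= mem
                else:
                    postponed.append(name)
            return dispatched, postponed

        frags = [("frag-1", 512), ("frag-2", 2048), ("frag-3", 256)]
        print(schedule_fragments(frags, available_memory=1024))
        # -> (['frag-3', 'frag-1'], ['frag-2'])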

  7. An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.

    PubMed

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2012-12-27

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks through their serialization, using their worst-case execution times, and through power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup radio and wakeup timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices.
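
    As a rough illustration of non-preemptible dual-priority dispatching, the sketch below uses two queues: the running task always completes, and the next task is drawn from the high-priority (critical periodic) queue before the low-priority queue. Task names and execution times are illustrative assumptions, not the SoC scheduler described in the paper.

        # Sketch of a non-preemptible dual-priority dispatcher (illustrative only).

        from collections import deque

        class DualPriorityScheduler:
            def __init__(self):
                self.high = deque()   # critical periodic tasks
                self.low = deque()    # background tasks

            def submit(self, name, wcet, critical=False):
                (self.high if critical else self.low).append((name, wcet))

            def run(self):
                """Run every task to completion; returns (task, start, finish) tuples."""
                t, trace = 0, []
                while self.high or self.low:
                    queue = self.high if self.high else self.low
                    name, wcet = queue.popleft()     # never preempted once started
                    trace.append((name, t, t + wcet))
                    t += wcet
                return trace

        sched = DualPriorityScheduler()
        sched.submit("log_flush", wcet=4)
        sched.submit("sense_heart", wcet=2, critical=True)
        sched.submit("radio_wakeup", wcet=1, critical=True)
        print(sched.run())
        # -> [('sense_heart', 0, 2), ('radio_wakeup', 2, 3), ('log_flush', 3, 7)]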

  8. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    PubMed Central

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2013-01-01

    The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks through their serialization, using their worst-case execution times, and through power-consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network, with a wakeup radio and wakeup timer for implantable medical devices. The scheduling system is validated by experimental results on its performance in extending the lifetime of ICD devices. PMID:23271602

  9. Beyond clinical priority: what matters when making operational decisions about emergency surgical queues?

    PubMed

    Fitzgerald, Anneke; Wu, Yong

    2017-08-01

    Objective This paper describes the perceptions of operating theatre staff in Australia and The Netherlands regarding the influence of logistical or operational reasons that may affect the scheduling of unplanned surgical cases. It is proposed that logistical or operational issues can influence the priority determination of queue position of surgical cases on the emergency waiting list. Methods A questionnaire was developed and conducted in 15 hospitals across The Netherlands and Australia, targeting anaesthetists, managers, nurses and surgeons. Statistical analyses revolved around these four professional groups. Six hypotheses were then developed and tested based on the responses collected from the participants. Results There were significant differences in perceptions of logistics delay factors across different professional groups when patients were waiting for unplanned surgery. There were also significant differences among different groups when setting logistical priority factors for planning and scheduling unplanned cases. The hypothesis tests confirm these differences, and the findings concur with the paradigmatic differences mentioned in the literature. These paradigmatic differences among the four professional groups may explain some of the tensions encountered when making decisions about scheduling emergency surgical queues, and therefore should be taken into consideration for management of operating theatres. Conclusions Queue positions of patients waiting for unplanned surgery, or emergency surgery, are determined by medical clinicians according to clinicians' indication of clinical priority. However, operating theatre managers are important in facilitating smooth operations when planning for emergency surgeries. It is necessary for surgeons to understand the logistical challenges faced by managers when requesting logistical priorities for their operations. What is known about the topic? Tensions exist about the efficient use of operating theatres and negotiating individual surgeons' demands, especially between surgeons and managers, because in many countries surgeons only work in the hospital and not for the hospital. What does this paper add? The present study examined the logistical effects on functionality and supports the notion that, while recognising the importance of clinical precedence, logistical factors influence queue order to ensure efficient use of operating theatre resources. What are the implications for practitioners? The results indicate that there are differences in the perceptions of healthcare professionals regarding the sequencing of emergency patients. These differences may lead to conflicts in the decision-making process about triaging emergency or unplanned surgical cases. A clear understanding of the different perceptions of different functional groups may help address the conflicts that often arise in practice.

  10. SciBox, an end-to-end automated science planning and commanding system

    NASA Astrophysics Data System (ADS)

    Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott

    2014-01-01

    SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.

  11. Request-Driven Schedule Automation for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol

    2010-01-01

    The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA's Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user's decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S³), a web-based collaborative scheduling system now under development for deployment to all DSN users.

  12. Intelligent Transportation Systems/Commercial Vehicle Operations Project Plan For Commercial Vehicle Information Systems And Networks Electronic Data Interchange Standards Development And Deployment

    DOT National Transportation Integrated Search

    1996-12-02

    The purpose of this document is to provide information on the planning for the development and deployment of EDI standards for Commercial Vehicle Information Systems and Networks (CVISN). The status, priorities, and schedules for this effort are cont...

  13. Never Been KIST: Tor’s Congestion Management Blossoms with Kernel-Informed Socket Transport

    DTIC Science & Technology

    2014-08-01

    when compared to vanilla Tor. Outline of major contributions: in Section 3 we discuss improvements to... experiments. We tested vanilla Tor using the default CircuitPriorityHalflife of 30, the global scheduling part of KIST (without enforcing the write limits

  14. Real-time design with peer tasks

    NASA Technical Reports Server (NTRS)

    Goforth, Andre; Howes, Norman R.; Wood, Jonathan D.; Barnes, Michael J.

    1995-01-01

    We introduce a real-time design methodology for large scale, distributed, parallel architecture, real-time systems (LDPARTS), as an alternative to those methods using rate or deadline monotonic analysis. In our method the fundamental units of prioritization, work items, are domain-specific objects with timing requirements (deadlines) found in the user's specification. A work item consists of a collection of tasks of equal priority. Current scheduling theories are applied with artifact deadlines introduced by the designer, whereas our method schedules work items to meet the user's specification deadlines (sometimes called end-to-end deadlines). Our method supports these scheduling properties. First, work item scheduling is based on domain-specific importance instead of task-level urgency and still meets as many user specification deadlines as can be met by scheduling tasks with respect to urgency. Second, the minimum (closest) on-line deadline that can be guaranteed for a work item of highest importance, scheduled at run time, is approximately the inverse of the throughput, measured in work items per second. Third, throughput is not degraded during overload, and instead of resorting to task shedding during overload, the designer can specify which work items to shed. We prove these properties in a mathematical model.
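
    A minimal sketch of importance-based work-item admission is given below: items are considered in decreasing importance and admitted only while their end-to-end deadlines can still be met at the measured throughput. The service-time model, the field names, and the numbers are illustrative assumptions, not the authors' methodology.

        # Sketch: admit work items by importance; shed the ones whose deadlines
        # cannot be met given the system throughput (illustrative only).

        def admit_work_items(work_items, throughput_per_sec):
            """work_items: dicts with 'name', 'importance', 'deadline' (seconds from
            now). One work item completes every 1/throughput seconds."""
            service_time = 1.0 / throughput_per_sec
            admitted, shed, finish = [], [], 0.0
            for item in sorted(work_items, key=lambda w: w["importance"], reverse=True):
                if finish + service_time <= item["deadline"]:
                    finish += service_time
                    admitted.append(item["name"])
                else:
                    shed.append(item["name"])        # designer-specified shedding
            return admitted, shed

        items = [
            {"name": "track_update", "importance": 9, "deadline": 0.5},
            {"name": "display_refresh", "importance": 3, "deadline": 0.3},
            {"name": "health_report", "importance": 5, "deadline": 2.0},
        ]
        print(admit_work_items(items, throughput_per_sec=5))
        # -> (['track_update', 'health_report'], ['display_refresh'])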

  15. Chip-set for quality of service support in passive optical networks

    NASA Astrophysics Data System (ADS)

    Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel

    1998-10-01

    In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports `colored' grants, priority jumping and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip in combination with the cell scheduling and buffer management capabilities of the switch chip can be used by network operators to offer guaranteed service performance to delay sensitive services, and to efficiently and fairly distribute any spare capacity to delay insensitive services.

  16. Chandra mission scheduling on-orbit experience

    NASA Astrophysics Data System (ADS)

    Bucher, Sabina; Williams, Brent; Pendexter, Misty; Balke, David

    2008-07-01

    Scheduling observatory time to maximize both day-to-day science target integration time and the lifetime of the observatory is a formidable challenge. Furthermore, it is not a static problem. Of course, every schedule brings a new set of observations, but the boundaries of the problem change as well. As the spacecraft ages, its capabilities may degrade. As in-flight experience grows, capabilities may expand. As observing programs are completed, the needs and expectations of the science community may evolve. Changes such as these impact the rules by which a mission is scheduled. In eight years on orbit, the Chandra X-Ray Observatory Mission Planning process has adapted to meet the challenge of maximizing day-to-day and mission-lifetime science return, despite a consistently evolving set of scheduling constraints. The success of the planning team has been achieved not through the use of complex algorithms and optimization routines, but through processes and home-grown tools that help individuals make smart short-term and long-term Mission Planning decisions. This paper walks through the processes and tools used to plan and produce mission schedules for the Chandra X-Ray Observatory. Nominal planning and scheduling, target of opportunity response, and recovery from on-board autonomous safing actions are all addressed. Evolution of tools and processes, best practices, and lessons learned are highlighted along the way.

  17. Dynamic autonomous routing technology for IP-based satellite ad hoc networks

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofei; Deng, Jing; Kostas, Theresa; Rajappan, Gowri

    2014-06-01

    IP-based routing for military LEO/MEO satellite ad hoc networks is very challenging due to network and traffic heterogeneity, network topology and traffic dynamics. In this paper, we describe a traffic priority-aware routing scheme for such networks, namely Dynamic Autonomous Routing Technology (DART) for satellite ad hoc networks. DART has a cross-layer design, and conducts routing and resource reservation concurrently for optimal performance in the fluid but predictable satellite ad hoc networks. DART ensures end-to-end data delivery with QoS assurances by only choosing routing paths that have sufficient resources, supporting different packet priority levels. In order to do so, DART incorporates several resource management and innovative routing mechanisms, which dynamically adapt to best fit the prevailing conditions. In particular, DART integrates a resource reservation mechanism to reserve network bandwidth resources; a proactive routing mechanism to set up non-overlapping spanning trees to segregate high priority traffic flows from lower priority flows so that the high priority flows do not face contention from low priority flows; a reactive routing mechanism to arbitrate resources between various traffic priorities when needed; a predictive routing mechanism to set up routes for scheduled missions and for anticipated topology changes for QoS assurance. We present simulation results showing the performance of DART. We have conducted these simulations using the Iridium constellation and trajectories as well as realistic military communications scenarios. The simulation results demonstrate DART's ability to discriminate between high-priority and low-priority traffic flows and ensure disparate QoS requirements of these traffic flows.
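
    The admission idea in DART (choose a path only if every link still has enough reserved-capacity headroom for the flow, then reserve it) can be sketched roughly as follows. The tiny brute-force path search, the link-capacity dictionary, and all numbers are illustrative assumptions, not the DART algorithms.

        # Sketch: admit a flow only along a path whose every link has headroom for
        # its demand, then reserve that capacity (illustrative only).

        import itertools

        def find_feasible_path(links, src, dst, demand):
            """links: dict {(u, v): free_capacity}. Brute-force search over simple
            paths (adequate for a tiny example); returns the first feasible path."""
            nodes = {n for edge in links for n in edge}
            for length in range(1, len(nodes)):
                for middle in itertools.permutations(nodes - {src, dst}, length - 1):
                    path = [src, *middle, dst]
                    hops = list(zip(path, path[1:]))
                    if all(links.get(h, 0) >= demand for h in hops):
                        for h in hops:                 # reserve resources on admission
                            links[h] -= demand
                        return path
            return None                                 # reject: no path has headroom

        links = {("A", "B"): 5, ("B", "C"): 2, ("A", "D"): 5, ("D", "C"): 5}
        print(find_feasible_path(links, "A", "C", demand=4))   # -> ['A', 'D', 'C']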

  18. A hierarchically distributed architecture for fault isolation expert systems on the space station

    NASA Technical Reports Server (NTRS)

    Miksell, Steve; Coffer, Sue

    1987-01-01

    The Space Station Axiomatic Fault Isolating Expert Systems (SAFTIES) system deals with the hierarchical distribution of control and knowledge among independent expert systems doing fault isolation and scheduling of Space Station subsystems. On its lower level, fault isolation is performed on individual subsystems. These fault isolation expert systems contain knowledge about the performance requirements of their particular subsystem and corrective procedures which may be invoked in response to certain performance errors. They can control the functions of equipment in their system and coordinate system task schedules. On a higher level, the Executive contains knowledge of all resources, task schedules for all systems, and the relative priority of all resources and tasks. The Executive can override any subsystem task schedule in order to resolve use conflicts or resolve errors that require resources from multiple subsystems. Interprocessor communication is implemented using the SAFTIES Communications Interface (SCI). The SCI is an application layer protocol which supports the SAFTIES distributed multi-level architecture.

  19. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    The flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve low makespan and high throughput in FMS operations, it is highly important to integrate the production work center schedules with the AGV schedules. The production schedule for the work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce both backtracking and the distance traveled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods applied.

  20. Dynamic Modeling of ALS Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    The purpose of dynamic modeling and simulation of Advanced Life Support (ALS) systems is to help design them. Static steady state systems analysis provides basic information and is necessary to guide dynamic modeling, but static analysis is not sufficient to design and compare systems. ALS systems must respond to external input variations and internal off-nominal behavior. Buffer sizing, resupply scheduling, failure response, and control system design are aspects of dynamic system design. We develop two dynamic mass flow models and use them in simulations to evaluate systems issues, optimize designs, and make system design trades. One model is of nitrogen leakage in the space station, the other is of a waste processor failure in a regenerative life support system. Most systems analyses are concerned with optimizing the cost/benefit of a system at its nominal steady-state operating point. ALS analysis must go beyond the static steady state to include dynamic system design. All life support systems exhibit behavior that varies over time. ALS systems must respond to equipment operating cycles, repair schedules, and occasional off-nominal behavior or malfunctions. Biological components, such as bioreactors, composters, and food plant growth chambers, usually have operating cycles or other complex time behavior. Buffer sizes, material stocks, and resupply rates determine dynamic system behavior and directly affect system mass and cost. Dynamic simulation is needed to avoid the extremes of costly over-design of buffers and material reserves or system failure due to insufficient buffers and lack of stored material.

  1. Online stochastic optimization of radiotherapy patient scheduling.

    PubMed

    Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin

    2015-06-01

    The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.

  2. Advanced Technology System Scheduling Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ang, Jim; Carnes, Brian; Hoang, Thuc

    In the fall of 2005, the Advanced Simulation and Computing (ASC) Program appointed a team to formulate a governance model for allocating resources and scheduling the stockpile stewardship workload on ASC capability systems. This update to the original document takes into account the new technical challenges and roles for advanced technology (AT) systems and the new ASC Program workload categories that must be supported. The goal of this updated model is to effectively allocate and schedule AT computing resources among all three National Nuclear Security Administration (NNSA) laboratories for weapons deliverables that merit priority on this class of resource. The process outlined below describes how proposed work can be evaluated and approved for resource allocations while preserving high effective utilization of the systems. This approach will provide the broadest possible benefit to the Stockpile Stewardship Program (SSP).

  3. Seamless transitions from early prototypes to mature operational software - A technology that enables the process for planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda S.; Wunderlich, Dana A.; Willoughby, John K.

    1992-01-01

    New and innovative software technology is presented that provides a cost effective bridge for smoothly transitioning prototype software, in the field of planning and scheduling, into an operational environment. Specifically, this technology mixes the flexibility and human design efficiency of dynamic data typing with the rigor and run-time efficiencies of static data typing. This new technology provides a very valuable tool for conducting the extensive, up-front system prototyping that leads to specifying the correct system and producing a reliable, efficient version that will be operationally effective and will be accepted by the intended users.

  4. A real-time programming system.

    PubMed

    Townsend, H R

    1979-03-01

    The paper describes a Basic Operating and Scheduling System (BOSS) designed for a small computer. User programs are organised as self-contained modular 'processes' and the way in which the scheduler divides the time of the computer equally between them, while arranging for any process which has to respond to an interrupt from a peripheral device to be given the necessary priority, is described in detail. Next the procedures provided by the operating system to organise communication between processes are described, and how they are used to construct dynamically self-modifying real-time systems. Finally, the general philosophy of BOSS and applications to a multi-processor assembly are discussed.
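
    As a rough illustration of the BOSS behaviour described above (equal time-slicing, with a process promoted when the peripheral interrupt it handles arrives), consider the sketch below. The process names, the device registration scheme, and the promotion rule are illustrative assumptions, not the BOSS implementation.

        # Sketch: round-robin time-slicing, with the registered interrupt handler
        # moved to the head of the ready queue when its device interrupts.

        from collections import deque

        class MiniScheduler:
            def __init__(self, processes):
                self.ready = deque(processes)          # round-robin ready queue
                self.handlers = {}                      # device -> process name

            def register_handler(self, device, process):
                self.handlers[device] = process

            def interrupt(self, device):
                """Promote the handler of this device to the front of the queue."""
                proc = self.handlers.get(device)
                if proc in self.ready:
                    self.ready.remove(proc)
                    self.ready.appendleft(proc)

            def next_slice(self):
                proc = self.ready.popleft()
                self.ready.append(proc)                 # back of the queue after its slice
                return proc

        s = MiniScheduler(["editor", "plotter", "eeg_capture"])
        s.register_handler("adc", "eeg_capture")
        print(s.next_slice())    # editor
        s.interrupt("adc")       # ADC data ready: eeg_capture must run next
        print(s.next_slice())    # eeg_capture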

  5. 78 FR 75629 - Self-Regulatory Organizations; Miami International Securities Exchange LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... Effectiveness of a Proposed Rule Change To Amend the MIAX Fee Schedule December 6, 2013. Pursuant to the... Priority Customer Rebate Program (the ``Program'') to (i) lower the volume thresholds of the four highest... thresholds in a month as described below. The volume thresholds are calculated based on the customer average...

  6. 78 FR 76339 - Self-Regulatory Organizations; International Securities Exchange, LLC; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-17

    ... November 1, 2013 the Exchange further amended its Schedule of Fees to increase its Market Maker Plus rebate... Maker Plus and are affiliated with an Electronic Access Member that executes a total affiliated Priority Customer ADV of 200,000 contracts in a calendar month.\\4\\ When introducing this new Market Maker Plus...

  7. KSC-02pd1894

    NASA Image and Video Library

    2002-12-09

    KENNEDY SPACE CENTER, FLA. -- Space Shuttle Columbia sits on Launch Pad 39A, atop the Mobile Launcher Platform. The STS-107 research mission comprises experiments ranging from material sciences to life sciences, plus the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. Mission STS-107 is scheduled to launch Jan. 16, 2003.

  8. A static model of a Sendzimir mill for use in shape control

    NASA Astrophysics Data System (ADS)

    Gunawardene, G. W. D. M.

    The design of shape control systems is an area of current interest in the steel industry. Shape is defined as the internal stress distribution resulting from a transverse variation in the reduction of the strip thickness. The object of shape control is to adjust the mill so that the rolled strip is free from internal stresses. Both static and dynamic models of the mill are required for the control system design. The subject of this thesis is the static model of the Sendzimir cold rolling mill, which is a 1-2-3-4 type cluster mill. The static model derived enables shape profiles to be calculated for a given set of actuator positions, and is used to generate the steady state mill gains. The method of calculation of these shape profiles is discussed. The shape profiles obtained for different mill schedules are plotted against the distance across the strip. The corresponding mill gains are calculated and these relate the shape changes to the actuator changes. These mill gains are presented in the form of a square matrix, obtained by measuring shape at eight points across the strip.

  9. Algorithm of composing the schedule of construction and installation works

    NASA Astrophysics Data System (ADS)

    Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr

    2017-10-01

    An algorithm for scheduling works is developed in which the priority of a work corresponds to the total weight of its subordinate works (the vertices of the graph), and it is proved that for tree-type graphs the algorithm is optimal. A second algorithm is synthesized to reduce the search for solutions when drawing up schedules of construction and installation works, by allocating a subset of minimum cardinality containing the optimal solution, which is determined by the structure of the initial data and their numerical values. An algorithm for scheduling construction and installation work is also developed that takes into account the schedule for the movement of work brigades; using the branch-and-bound method, it can efficiently minimize work completion time subject to parameters of organizational and technological reliability. The computational algorithm was implemented in MATLAB 2008. For the initial data matrix, random numbers uniformly distributed between 1 and 100 were used, and solving the problem took 0.5, 2.5, 7.5, and 27 minutes. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax problem of scheduling construction and installation works.
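
    The priority rule described above (the priority of a work equals the total weight of its subordinate works in the precedence tree) can be computed with a small recursion, sketched below. The tree, the weights, and the function name are illustrative assumptions.

        # Sketch: priority of each work = sum of weights over its subtree.

        def subtree_priorities(children, weights):
            """children: dict node -> list of child nodes; weights: dict node -> weight.
            Returns the priority of each node as the total weight of its subtree."""
            priority = {}

            def total(node):
                if node not in priority:
                    priority[node] = weights[node] + sum(total(c) for c in children.get(node, []))
                return priority[node]

            for node in weights:
                total(node)
            return priority

        children = {"excavation": ["foundation"], "foundation": ["walls", "utilities"]}
        weights = {"excavation": 2, "foundation": 5, "walls": 3, "utilities": 1}
        print(subtree_priorities(children, weights))
        # -> {'excavation': 11, 'foundation': 9, 'walls': 3, 'utilities': 1}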

  10. Concepts, requirements, and design approaches for building successful planning and scheduling systems

    NASA Technical Reports Server (NTRS)

    Hornstein, Rhoda Shaller; Willoughby, John K.

    1991-01-01

    Traditional practice of systems engineering management assumes that requirements can be precisely determined and unambiguously defined prior to system design and implementation; practice further assumes requirements are held static during implementation. Human-computer decision support systems for service planning and scheduling applications do not conform well to these assumptions. Adaptations to the traditional practice of systems engineering management are required. Basic technology exists to support these adaptations. Additional innovations must be encouraged and nurtured. Continued partnership between the programmatic and technical perspectives assures a proper balance of the impossible with the possible. Past problems have the following origins: not recognizing the unusual and perverse nature of the requirements for planning and scheduling; not recognizing the best starting-point assumptions for the design; not understanding the type of system that is being built; and not understanding the design consequences of the operations concept selected.

  11. Transmission Scheduling and Routing Algorithms for Delay Tolerant Networks

    NASA Technical Reports Server (NTRS)

    Dudukovich, Rachel; Raible, Daniel E.

    2016-01-01

    The challenges of data processing, transmission scheduling and routing within a space network present a multi-criteria optimization problem. Long delays, intermittent connectivity, asymmetric data rates and potentially high error rates make traditional networking approaches unsuitable. The delay tolerant networking architecture and protocols attempt to mitigate many of these issues, yet transmission scheduling is largely manually configured and routes are determined by a static contact routing graph. A high level of variability exists among the requirements and environmental characteristics of different missions, some of which may allow for the use of more opportunistic routing methods. In all cases, resource allocation and constraints must be balanced with the optimization of data throughput and quality of service. Much work has been done researching routing techniques for terrestrial-based challenged networks in an attempt to optimize contact opportunities and resource usage. This paper examines several popular methods to determine their potential applicability to space networks.

  12. Time Triggered Ethernet System Testing Means and Method

    NASA Technical Reports Server (NTRS)

    Smithgall, William Todd (Inventor); Hall, Brendan (Inventor); Varadarajan, Srivatsan (Inventor)

    2014-01-01

    Methods and apparatus are provided for evaluating the performance of a Time Triggered Ethernet (TTE) system employing Time Triggered (TT) communication. A real TTE system under test (SUT) is provided, having real input elements that communicate using TT messages with output elements via one or more first TTE switches during a first time-interval schedule established for the SUT. A simulation system is also provided, having input simulators that communicate using TT messages via one or more second TTE switches with the same output elements during a second time-interval schedule established for the simulation system. The first and second time-interval schedules are offset slightly so that messages from the input simulators, when present, arrive at the output elements prior to messages from the analogous real inputs, thereby having priority over messages from the real inputs and causing the system to operate based on the simulated inputs when present.

  13. KSC-02pd0978

    NASA Image and Video Library

    2002-06-14

    KENNEDY SPACE CENTER, FLA. -- Columbia's payload bay doors are ready to be closed for mission STS-107. Installed inside are the Hitchhiker Bridge, a carrier for the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments, plus the SHI Research Double Module (SHI/RDM), also known as SPACEHAB. STS-107 is scheduled for launch July 19, 2001.

  14. Top Information Need Priorities of Older Adults Newly Diagnosed With Active Myeloma.

    PubMed

    Tariman, Joseph D; Doorenbos, Ardith; Schepp, Karen G; Singhal, Seema; Berry, Donna L

    2015-01-01

    Prioritizing patients' information needs maximizes efficiency. This study examined the information sources and priorities in a sample of older adults newly diagnosed with symptomatic myeloma requiring immediate therapy. An association analysis of whether information needs were influenced by sociodemographic variables such as age, gender, education, marital status, and income was also conducted. The Information Needs Questionnaire (INQ) and an investigator-developed interview schedule were administered to 20 older adults diagnosed with symptomatic myeloma during a 30- to 45-minute semistructured interview. We found that older adults newly diagnosed with symptomatic myeloma have different priorities of information needs when compared with younger patients diagnosed with various types of cancer. The top three priorities related to treatment, prognosis, and self-care. Sociodemographic variables did not influence the priorities of information needs among older adults with symptomatic myeloma. The Internet, physicians, family, and friends were among the top sources of information. Advanced practitioners in oncology should support and identify interventions that can enhance patients' learning process from these sources. Well poised to assist patients in searching credible and reliable Internet sources, advanced practitioners in oncology can provide patient education about different treatments and the impact of such treatments on prognosis (e.g., overall survival and likelihood of cure).

  15. SOFIA's Choice: Automating the Scheduling of Airborne Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Norvig, Peter (Technical Monitor)

    1999-01-01

    This paper describes the problem of scheduling observations for an airborne telescope. Given a set of prioritized observations to choose from, and a wide range of complex constraints governing legitimate choices and orderings, how can we efficiently and effectively create a valid flight plan which supports high priority observations? This problem is quite different from scheduling problems which are routinely solved automatically in industry. For instance, the problem requires making choices which lead to other choices later, and contains many interacting complex constraints over both discrete and continuous variables. Furthermore, new types of constraints may be added as the fundamental problem changes. As a result of these features, this problem cannot be solved by traditional scheduling techniques. The problem resembles other problems in NASA and industry, from observation scheduling for rovers and other science instruments to vehicle routing. The remainder of the paper is organized as follows. In Section 2 we describe the observatory in order to provide some background. In Section 3 we describe the problem of scheduling a single flight. In Section 4 we compare flight planning and other scheduling problems and argue that traditional techniques are not sufficient to solve this problem. We also mention similar complex scheduling problems which may benefit from efforts to solve this problem. In Section 5 we describe an approach for solving this problem based on research into a similar problem, that of scheduling observations for a space-borne probe. In Section 6 we discuss extensions of the flight planning problem as well as other problems which are similar to flight planning. In Section 7 we conclude and discuss future work.

  16. Definition study for variable cycle engine testbed engine and associated test program

    NASA Technical Reports Server (NTRS)

    Vdoviak, J. W.

    1978-01-01

    The product/study double bypass variable cycle engine (VCE) was updated to incorporate recent improvements. The effect of these improvements on mission range and noise levels was determined. This engine design was then compared with current existing high-technology core engines in order to define a subscale testbed configuration that simulated many of the critical technology features of the product/study VCE. Detailed preliminary program plans were then developed for the design, fabrication, and static test of the selected testbed engine configuration. These plans included estimated costs and schedules for the detail design, fabrication and test of the testbed engine and the definition of a test program, test plan, schedule, instrumentation, and test stand requirements.

  17. Comparison of Inter-Observer Variability and Diagnostic Performance of the Fifth Edition of BI-RADS for Breast Ultrasound of Static versus Video Images.

    PubMed

    Youk, Ji Hyun; Jung, Inkyung; Yoon, Jung Hyun; Kim, Sung Hun; Kim, You Me; Lee, Eun Hye; Jeong, Sun Hye; Kim, Min Jung

    2016-09-01

    Our aim was to compare the inter-observer variability and diagnostic performance of the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound of static and video images. Ninety-nine breast masses visible on ultrasound examination from 95 women 19-81 y of age at five institutions were enrolled in this study. They were scheduled to undergo biopsy or surgery or had been stable for at least 2 y of ultrasound follow-up after benign biopsy results or typically benign findings. For each mass, representative long- and short-axis static ultrasound images were acquired; real-time long- and short-axis B-mode video images through the mass area were separately saved as cine clips. Each image was reviewed independently by five radiologists who were asked to classify ultrasound features according to the fifth edition of the BI-RADS lexicon. Inter-observer variability was assessed using kappa (κ) statistics. Diagnostic performance on static and video images was compared using the area under the receiver operating characteristic curve. No significant difference was found in κ values between static and video images for all descriptors, although κ values of video images were higher than those of static images for shape, orientation, margin and calcifications. After receiver operating characteristic curve analysis, the video images (0.83, range: 0.77-0.87) had higher areas under the curve than the static images (0.80, range: 0.75-0.83; p = 0.08). Inter-observer variability and diagnostic performance of video images was similar to that of static images on breast ultrasonography according to the new edition of BI-RADS. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  18. Runway Scheduling for Charlotte Douglas International Airport

    NASA Technical Reports Server (NTRS)

    Malik, Waqar A.; Lee, Hanbong; Jung, Yoon C.

    2016-01-01

    This paper describes the runway scheduler that was used in the 2014 SARDA human-in-the-loop simulations for CLT. The algorithm considers multiple runways and computes optimal runway times for departures and arrivals. In this paper, we plan to run additional simulations of the standalone MRS algorithm and compare its performance against an FCFS heuristic in which aircraft receive runway slots in the order given by their positions in the FCFS sequence. Several traffic scenarios corresponding to the current-day traffic level and demand profile will be generated. We also plan to examine the effect of increased traffic levels (1.2x and 1.5x) and observe trends in algorithm performance.
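
    For reference, the FCFS baseline mentioned above can be sketched as follows: aircraft take runway slots in order of their ready times, separated by a minimum spacing. The separation value, ready times, and callsigns are illustrative assumptions, not SARDA parameters.

        # Sketch of a first-come-first-served runway slot assignment (illustrative only).

        def fcfs_runway_schedule(aircraft, separation=60):
            """aircraft: list of (callsign, ready_time_sec). Returns assigned runway
            times in first-come-first-served order."""
            schedule, runway_free = [], 0
            for callsign, ready in sorted(aircraft, key=lambda a: a[1]):
                slot = max(ready, runway_free)
                schedule.append((callsign, slot))
                runway_free = slot + separation
            return schedule

        flights = [("AAL12", 30), ("DAL77", 0), ("UAL9", 45)]
        print(fcfs_runway_schedule(flights))
        # -> [('DAL77', 0), ('AAL12', 60), ('UAL9', 120)]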

  19. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    PubMed

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus it improves the user satisfaction. Objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
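
    As a rough illustration of weighted round-robin placement, the sketch below builds a dispatch ring in which each VM appears a number of times proportional to its weight (taken here to be its core count). The weights and the proportionality rule are illustrative assumptions and do not reproduce the paper's improved algorithm, which also accounts for task lengths and interdependencies.

        # Sketch: weighted round-robin dispatch of tasks to VMs (illustrative only).

        from itertools import cycle

        def weighted_round_robin(tasks, vm_weights):
            """tasks: list of task ids; vm_weights: dict vm_id -> integer weight.
            Each VM appears 'weight' times per cycle of the dispatch ring."""
            ring = [vm for vm, w in vm_weights.items() for _ in range(w)]
            assignment = {}
            for task, vm in zip(tasks, cycle(ring)):
                assignment[task] = vm
            return assignment

        tasks = ["t1", "t2", "t3", "t4", "t5", "t6"]
        vms = {"vm-small": 1, "vm-large": 2}
        print(weighted_round_robin(tasks, vms))
        # -> {'t1': 'vm-small', 't2': 'vm-large', 't3': 'vm-large',
        #     't4': 'vm-small', 't5': 'vm-large', 't6': 'vm-large'}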

  20. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    PubMed Central

    Devi, D. Chitra; Uthariaraj, V. Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus it improves the user satisfaction. Objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods. PMID:26955656

  1. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

    The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C, according to undesirable effects of errors. The two workload models characterize the two types of applications. The optional parts of the tasks in an N job need not ever be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
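
    The mandatory/optional decomposition can be illustrated with the small single-frame sketch below: mandatory parts run earliest-deadline-first and must all meet their deadlines, leftover time is spent on optional parts, and the error of a job is its unexecuted optional fraction. The frame-based setting and all parameters are illustrative assumptions, not the algorithms evaluated in the paper.

        # Sketch of imprecise computation within one frame (illustrative only).
        # Assumes every job has a nonzero optional part.

        def schedule_frame(jobs, frame_length):
            """jobs: dicts with 'name', 'mandatory', 'optional', 'deadline'."""
            jobs = sorted(jobs, key=lambda j: j["deadline"])   # EDF order
            t, errors = 0, {}
            for job in jobs:                      # mandatory parts must all fit
                t += job["mandatory"]
                assert t <= job["deadline"], f"infeasible: {job['name']}"
            slack = frame_length - t
            for job in jobs:                      # spend slack on optional parts
                run = min(job["optional"], max(slack, 0))
                slack -= run
                errors[job["name"]] = (job["optional"] - run) / job["optional"]
            return errors

        jobs = [
            {"name": "filter", "mandatory": 2, "optional": 2, "deadline": 5},
            {"name": "fft",    "mandatory": 3, "optional": 4, "deadline": 10},
        ]
        print(schedule_frame(jobs, frame_length=8))   # -> {'filter': 0.0, 'fft': 0.75}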

  2. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, the memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism that decreases both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared with that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS; this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared with a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  3. Research on intelligent power consumption strategy based on time-of-use pricing

    NASA Astrophysics Data System (ADS)

    Fu, Wei; Gong, Li; Chen, Heli; He, Yu

    2017-06-01

    This paper analyzes the shortcomings of current domestic and foreign household power consumption strategies: passive consumption, disregard of the different priorities of electric equipment, neglect of the actual load pressure on the grid, and lack of interaction with the user. To decrease the peak-valley difference and improve the residential load curve through demand response (DR), an intelligent power consumption scheme based on time-of-use (TOU) pricing for household appliances is proposed. The main contributions are: (1) three types of household appliance loads are abstracted from the different operating laws of various household appliances, and the corresponding control models and DR strategies are established; (2) the TOU price information is fuzzified over time intervals to obtain a price priority, and DR control rules and a pre-scheduling mechanism are introduced in accordance with DR events such as the maximum restricted load, the time of DR, and the duration of interruptible load; (3) the dispatching sequence of household appliances in the control and scheduling queue is switched and controlled to balance peak and valley loads. The balancing effects and economic benefits of pre-scheduling and DR dispatching are compared and analyzed in a simulation example, and the results show that the proposed household appliance control (HAC) scheme reduces consumers' overall cost and alleviates the power system load, so the scheme is feasible and reasonable.
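
    A minimal sketch of the TOU-driven shifting idea is given below: deferrable appliances are moved to the cheapest contiguous window that still finishes before their deadlines, while non-deferrable loads stay put. The price profile, appliance data, and the contiguous-window assumption are illustrative, not the paper's fuzzified DR strategy.

        # Sketch: place each deferrable appliance in its cheapest feasible window
        # under a time-of-use price profile (illustrative only).

        def schedule_appliances(appliances, prices):
            """appliances: dicts with 'name', 'duration' (hours), 'deadline' (latest
            finishing hour), 'deferrable'. prices: hourly price list."""
            plan = {}
            for a in appliances:
                latest_start = a["deadline"] - a["duration"]
                starts = range(0, latest_start + 1) if a["deferrable"] else [0]
                best = min(starts, key=lambda s: sum(prices[s:s + a["duration"]]))
                plan[a["name"]] = (best, best + a["duration"])
            return plan

        prices = [0.30, 0.30, 0.28, 0.12, 0.10, 0.10, 0.25, 0.25]   # 8-hour horizon
        appliances = [
            {"name": "washer", "duration": 2, "deadline": 8, "deferrable": True},
            {"name": "cooker", "duration": 1, "deadline": 1, "deferrable": False},
        ]
        print(schedule_appliances(appliances, prices))
        # -> {'washer': (4, 6), 'cooker': (0, 1)}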

  4. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, binpacking, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.

  5. Adaptive wing static aeroelastic roll control

    NASA Astrophysics Data System (ADS)

    Ehlers, Steven M.; Weisshaar, Terrence A.

    1993-09-01

    Control of the static aeroelastic characteristics of a swept uniform wing in roll using an adaptive structure is examined. The wing structure is modeled as a uniform beam with bending and torsional deformation freedom. Aerodynamic loads are obtained from strip theory. The structure model includes coefficients representing torsional and bending actuation provided by embedded piezoelectric material layers. The wing is made adaptive by requiring the electric field applied to the piezoelectric material layers to be proportional to the wing root loads. The proportionality factor, or feedback gain, is used to control static aeroelastic rolling properties. Example wing configurations are used to illustrate the capabilities of the adaptive structure. The results show that rolling power, damping-in-roll and aileron effectiveness can be controlled by adjusting the feedback gain, and that the required gain depends on dynamic pressure. Gain scheduling can be used to set and maintain rolling properties over a range of dynamic pressures. An adaptive wing provides a method for active aeroelastic tailoring of structural response to meet changing structural performance requirements during a roll maneuver.

  6. Including internal mammary lymph nodes in radiation therapy for synchronous bilateral breast cancer: an international survey of treatment technique and clinical priorities.

    PubMed

    Roumeliotis, M; Long, K; Phan, T; Graham, D; Quirk, S

    2018-06-05

    The aim of this study was to understand international standard practice for radiation therapy treatment techniques and clinical priorities at institutions that include the internal mammary lymph nodes (IMLNs) in the target volume for patients with synchronous bilateral breast cancer. An international survey was developed to include questions that would provide awareness of favored treatment techniques, treatment planning and delivery resource requirements, and the clinical priorities that may lead to the utilization of preferred treatment techniques. Of the 135 respondents, 82 indicated that IMLNs are regularly included in the target volume for radiation therapy (IMLN-inclusion) when the patient is otherwise generally indicated for regional nodal irradiation. Of the 82 respondents who regularly include IMLNs, five were removed because they do not treat this population synchronously. Among the remaining 77 respondents, the institutional standard of care varied significantly, though VMAT (34%) and combined static photon and electron fields (21%) were the most commonly utilized techniques. Respondents preferentially selected target volume coverage (70%) as the most important clinical priority, followed by normal tissue sparing (25%). The results of the survey indicate that IMLN-inclusion for radiation therapy has not yet been comprehensively adopted. In addition, no consensus on best practice for radiation therapy treatment techniques has been reached.

  7. Decentralized Real-Time Scheduling

    DTIC Science & Technology

    1990-08-01

    must provide several alternative resource management policies, including FIFO and deadline queueing for shared resources that are not available. 5...When demand exceeds the supply of shared resources (even within a single switch), some calls cannot be completed. In that case, a call’s priority...associated chiefly with the need to manage resources in a timely and decentralized fashion. The Alpha programming model permits the convenient expression of

  8. KSC-02pd1880

    NASA Image and Video Library

    2002-12-09

    KENNEDY SPACE CENTER, FLA. - Space Shuttle Columbia is poised to begin rollout from the Vehicle Assembly Building to Launch Pad 39A. The STS-107 research mission comprises experiments ranging from material sciences to life sciences (many rats), plus the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. Mission STS-107 is scheduled to launch Jan. 16, 2003.

  9. Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool

    NASA Astrophysics Data System (ADS)

    Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin

    2016-02-01

    The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST) based on promising artificial intelligence techniques that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
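
    For readers unfamiliar with the dispatching rules compared above, the sketch below shows how each rule picks the next job from a waiting queue. The job fields, numbers, and the slack-per-operation approximation are assumptions for illustration; the paper's simulation model and look-up tables are not reproduced.

```python
# Illustrative dispatching rules of the kind compared in the study.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    arrival: float     # release time
    proc_time: float   # processing time of the next operation
    due_date: float
    ops_left: int      # remaining operations

def pick(queue, rule, now):
    """Return the job the given rule would dispatch next."""
    keys = {
        "FCFS":  lambda j: j.arrival,                                   # first come, first served
        "SPT":   lambda j: j.proc_time,                                 # shortest processing time
        "EDD":   lambda j: j.due_date,                                  # earliest due date
        "S/OPN": lambda j: (j.due_date - now - j.proc_time) / j.ops_left,  # slack per operation (approx.)
    }
    return min(queue, key=keys[rule])

queue = [Job("A", 0, 5, 20, 2), Job("B", 1, 2, 12, 3), Job("C", 3, 4, 9, 1)]
for rule in ("FCFS", "SPT", "EDD", "S/OPN"):
    print(rule, "->", pick(queue, rule, now=4).name)
```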

  10. [Vaccination schedule of the Spanish association of paediatrics: recommendations 2010].

    PubMed

    Marès Bermúdez, J; van Esso Arbolave, D; Arístegui Fernández, J; Ruiz Contreras, J; González Hachero, J; Merino Moína, M; Barrio Corrales, F; Alvarez García, F J; Cilleruelo Ortega, M J; Ortigosa Del Castillo, L; Moreno Pérez, D

    2010-06-01

    The Vaccine Advisory Committee of the Spanish Association of Paediatrics updates the immunization schedule annually, taking into account epidemiological data as well as evidence on the effectiveness and efficiency of vaccines. This vaccination schedule includes grades of recommendation. The committee has graded as universal those vaccines that all children should receive; as recommended those with a profile of universal childhood vaccination that it is desirable for all children to receive, but which can be prioritized based on the resources available for public funding; and as risk-group vaccines those targeting groups of people in situations of epidemiological risk. The Committee considers achieving a common immunization schedule a priority. The Committee reaffirms the recommendation to include pneumococcal vaccination in the routine vaccination schedule. Vaccination against varicella in the second year of life is an effective strategy and therefore a desirable goal. Vaccination against rotavirus is recommended for all infants given the morbidity and high burden on the health care system. The Committee adheres to the recommendations of the Interterritorial Council of the National Health Care System in reference to routine vaccination against HPV for all girls aged 11 to 14 years, and stresses the need to vaccinate against influenza and hepatitis A all patients with risk factors for these diseases. Finally, it stresses the need to update incomplete immunization schedules using accelerated immunization schedules. Copyright 2010 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.

  11. The TJO-OAdM Robotic Observatory: the scheduler

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Casteels, Kevin; Ribas, Ignasi; Francisco, Xavier

    2010-07-01

    The Joan Oró Telescope at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory working under completely unattended control, due to the isolation of the site. Robotic operation is mandatory for its routine use. The level of robotization of an observatory is given by its reliability in responding to environment changes and by the human interaction required by possible alarms. These two points establish the level of human attendance needed to ensure low risk at any time. But there is another key point in deciding how the system performs as a robot: the capability to adapt the scheduled observation to actual conditions. The scheduler represents a fundamental element in achieving a fully intelligent response at any time. Its main task is mid- and short-term time optimization, and it has a direct effect on the scientific return achieved by the observatory. We present a description of the scheduler developed for the TJO - OAdM, which is divided into two parts. Firstly, a pre-scheduler makes a temporary selection of objects from the available projects according to their observability. This process is carried out before the beginning of the night, following different selection criteria. Secondly, a dynamic scheduler is executed any time a target observation is complete and a new one must be scheduled. The latter enables the selection of the best target in real time according to actual environment conditions and the set of priorities.

  12. Surgery scheduling optimization considering real life constraints and comprehensive operation cost of operating room.

    PubMed

    Xiang, Wei; Li, Chong

    2015-01-01

    The Operating Room (OR) is the core sector in hospital expenditure; its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. As such, reasonable surgery scheduling is crucial to OR management. To optimize OR management and reduce operation cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operation in a typical hospital in China. The comprehensive operation cost is clearly defined, considering both under-utilization and over-utilization. A nested Ant Colony Optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of the nested-ACO in several measurements: OR-related time, nurse-related time, variation in resources' working time, and the end time. The nested-ACO, which considers real-life operation constraints such as the difference between first and following cases, surgery priority, and fixed nurses in the pre/post-operative stages, is proposed to solve the surgery scheduling optimization problem. The results clearly show the benefit of using the nested-ACO in enhancing OR management efficiency and minimizing the comprehensive operation cost.

  13. White Paper on studying the safety of the childhood immunization schedule in the Vaccine Safety Datalink.

    PubMed

    Glanz, Jason M; Newcomer, Sophia R; Jackson, Michael L; Omer, Saad B; Bednarczyk, Robert A; Shoup, Jo Ann; DeStefano, Frank; Daley, Matthew F

    2016-02-15

    While the large majority of parents in the U.S. vaccinate their children according to the recommended immunization schedule, some parents have refused or delayed vaccinating, often citing safety concerns. In response to public concern, the U.S. Institute of Medicine (IOM) evaluated existing research regarding the safety of the recommended immunization schedule. The IOM concluded that although available evidence strongly supported the safety of the currently recommended schedule as a whole, additional observational research was warranted to compare health outcomes between fully vaccinated children and those on a delayed or alternative schedule. In addition, the IOM identified the Vaccine Safety Datalink (VSD) as an important resource for conducting this research. Guided by the IOM findings, the Centers for Disease Control and Prevention (CDC) commissioned a White Paper to assess how the VSD could be used to study the safety of the childhood immunization schedule. Guided by subject matter expert engagement, the resulting White Paper outlines a 4 stage approach for identifying exposure groups of undervaccinated children, presents a list of health outcomes of highest priority to examine in this context, and describes various study designs and statistical methods that could be used to analyze the safety of the schedule. While it appears feasible to study the safety of the recommended immunization schedule in settings such as the VSD, these studies will be inherently complex, and as with all observational studies, will need to carefully address issues of confounding and bias. In light of these considerations, decisions about conducting studies of the safety of the schedule will also need to assess epidemiological evidence of potential adverse events that could be related to the schedule, the biological plausibility of an association between an adverse event and the schedule, and public concern about the safety of the schedule. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Saturn Apollo Program

    NASA Image and Video Library

    1960-06-15

    The Saturn Project was approved on January 18, 1960 as a program of the highest national priority. The formal test program to prove out the clustered-booster concept was well underway. A series of static tests of the Saturn I booster (S-I stage) began June 3, 1960 at the Marshall Space Flight Center (MSFC). This photograph depicts the Saturn I S-I stage equipped with eight H-1 engines, being successfully test-fired for the duration of 121 seconds on June 15, 1960.

  15. Variable Scheduling to Mitigate Channel Losses in Energy-Efficient Body Area Networks

    PubMed Central

    Tselishchev, Yuriy; Boulis, Athanassios; Libman, Lavy

    2012-01-01

    We consider a typical body area network (BAN) setting in which sensor nodes send data to a common hub regularly on a TDMA basis, as defined by the emerging IEEE 802.15.6 BAN standard. To reduce transmission losses caused by the highly dynamic nature of the wireless channel around the human body, we explore variable TDMA scheduling techniques that allow the order of transmissions within each TDMA round to be decided on the fly, rather than being fixed in advance. Using a simple Markov model of the wireless links, we devise a number of scheduling algorithms that can be performed by the hub, which aim to maximize the expected number of successful transmissions in a TDMA round, and thereby significantly reduce transmission losses as compared with a static TDMA schedule. Importantly, these algorithms do not require a priori knowledge of the statistical properties of the wireless channels, and the reliability improvement is achieved entirely via shuffling the order of transmissions among devices, and does not involve any additional energy consumption (e.g., retransmissions). We evaluate these algorithms directly on an experimental set of traces obtained from devices strapped to human subjects performing regular daily activities, and confirm that the benefits of the proposed variable scheduling algorithms extend to this practical setup as well. PMID:23202183
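
    A minimal sketch of the on-the-fly ordering idea: at each slot the hub schedules the pending sensor whose link is most likely to be good, using a two-state Markov prediction updated from the last observed outcome. The transition probabilities, sensor names, and the greedy pick are assumptions for illustration; the paper's actual scheduling algorithms and channel traces are not reproduced.

```python
# Greedy variable-order TDMA round driven by a two-state Markov link belief.
import random

P_STAY_GOOD, P_RECOVER = 0.9, 0.3            # assumed Markov link parameters

def predict(p_good):
    """One-step prediction of the probability a link is in the good state."""
    return p_good * P_STAY_GOOD + (1 - p_good) * P_RECOVER

belief = {"ecg": 0.5, "spo2": 0.5, "accel": 0.5}   # per-sensor link belief
pending = set(belief)

while pending:
    nxt = max(pending, key=lambda s: belief[s])     # most promising link first
    ok = random.random() < belief[nxt]              # stand-in for the real channel
    print(f"slot -> {nxt}: {'ok' if ok else 'lost'}")
    belief[nxt] = 1.0 if ok else 0.0                # observation collapses the belief
    pending.discard(nxt)
    belief = {s: predict(p) for s, p in belief.items()}   # time advances one slot
```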

  16. A lock-free priority queue design based on multi-dimensional linked lists

    DOE PAGES

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees a worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in the MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over the state-of-the-art approaches under high concurrency.
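
    The snippet below illustrates only the key-to-coordinate step described above: a scalar key from a universe of size base**dims is written as a fixed number of digits, and ordering keys by their coordinate vectors (prefix first) coincides with numeric order. The dimension count, base, and sample keys are arbitrary, and the lock-free node and linking machinery of the MDList is not shown.

```python
# Injective key-to-coordinate mapping in the spirit of an MDList.

def to_coords(key, dims=4, base=16):
    """Map a scalar key (0 <= key < base**dims) to a D-dimensional coordinate vector."""
    coords = []
    for _ in range(dims):
        coords.append(key % base)
        key //= base
    return tuple(reversed(coords))      # most significant digit first

keys = [40961, 7, 4096, 40960]
for k in sorted(keys):
    print(k, "->", to_coords(k))

# Ordering by coordinate prefix matches numeric ordering of the keys.
assert sorted(keys) == sorted(keys, key=to_coords)
```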

  17. A lock-free priority queue design based on multi-dimensional linked lists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechev, Damian; Zhang, Deli

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees a worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in the MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over the state-of-the-art approaches under high concurrency.

  18. Advanced turboprop testbed systems study. Volume 1: Testbed program objectives and priorities, drive system and aircraft design studies, evaluation and recommendations and wind tunnel test plans

    NASA Technical Reports Server (NTRS)

    Bradley, E. S.; Little, B. H.; Warnock, W.; Jenness, C. M.; Wilson, J. M.; Powell, C. W.; Shoaf, L.

    1982-01-01

    The establishment of propfan technology readiness was addressed, and candidate drive systems for propfan application were identified. Candidate testbed aircraft were investigated for suitability, and four aircraft were selected as possible propfan testbed vehicles. An evaluation of the four candidates was performed, and the Boeing KC-135A and the Gulfstream American Gulfstream II were recommended as the most suitable aircraft for test application. Conceptual designs of the two recommended aircraft were performed, and cost and schedule data for the entire testbed program were generated. The program total cost was estimated, and a wind tunnel program cost and schedule were generated in support of the testbed program.

  19. Aerial Refueling Process Rescheduling Under Job Related Disruptions

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines) to minimize the total weighted tardiness. ARSP assumes that the jobs have different release times and due dates. The ARSP takes place in a dynamic environment in which unexpected events may occur. In this paper, rescheduling of the aerial refueling process for a given set of jobs is studied to deal with job-related disruptions such as the arrival of new jobs, the departure of an existing job, high deviations in the release times and changes in job priorities. In order to maintain stability and avoid excessive computation, a partial schedule repair algorithm is developed and its preliminary results are presented.
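
    For reference, the objective named above can be written down in a few lines. The sketch below computes the total weighted tardiness of a candidate assignment of completion times; the numbers are invented, and the rescheduling and schedule-repair logic of the paper is not reproduced.

```python
# Total weighted tardiness of a candidate refueling schedule (illustrative).

def total_weighted_tardiness(jobs):
    """jobs: iterable of (completion_time, due_date, weight) tuples."""
    return sum(w * max(0.0, c - d) for c, d, w in jobs)

schedule = [
    (12.0, 10.0, 3.0),   # finishes 2 late with weight 3 -> contributes 6
    ( 8.0, 11.0, 1.0),   # early, contributes 0
    (15.0, 15.0, 2.0),   # exactly on time, contributes 0
]
print(total_weighted_tardiness(schedule))   # -> 6.0
```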

  20. Using Queue Time Predictions for Processor Allocation

    DTIC Science & Technology

    1997-01-01

    Diego Supercomputer Center, 1996. [15] Vijay K. Naik, Sanjeev K. Setia, and Mark S. Squillante. Performance analysis of job scheduling policies in...Processing, pages 101-111, 1995. [19] Sanjeev K. Setia and Satish K. Tripathi. An analysis of several processor partitioning policies for parallel...computers. Technical Report CS-TR-2684, University of Maryland, May 1991. [20] Sanjeev K. Setia and Satish K. Tripathi. A comparative analysis of static

  1. Performance Analysis on the Coexistence of Multiple Cognitive Radio Networks

    DTIC Science & Technology

    2015-05-28

    the scarce spectrum resources. Cognitive radio is a key in minimizing the spectral congestion through its adaptability, where the radio parameters...static allocation of spectrum results in congestion in some parts of the spectrum and non-use in some others, therefore, spectra utilization is...well as the secondary user (SU) activities in multiple CR networks. It is shown that the scheduler provided much needed gain during congestions. However

  2. Rapid Prototyping of High Performance Signal Processing Applications

    NASA Astrophysics Data System (ADS)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high-level application specification consisting of topological patterns in various aspects of the design flow. 2. We have formulated the core functional dataflow (CFDF) model of computation, which can be used to model a wide variety of deterministic dynamic dataflow behaviors. We have also presented key features of the CFDF model and tools based on these features. These tools provide support for heterogeneous dataflow behaviors, an intuitive and common framework for functional specification, support for functional simulation, portability from several existing dataflow models to CFDF, integrated emphasis on minimally-restricted specification of actor functionality, and support for efficient static, quasi-static, and dynamic scheduling techniques. 3. We have developed a generalized scheduling technique for CFDF graphs based on decomposition of a CFDF graph into static graphs that interact at run-time. Furthermore, we have refined this generalized scheduling technique using a new notion of "mode grouping," which better exposes the underlying static behavior. We have also developed a scheduling technique for a class of dynamic applications that generates parameterized looped schedules (PLSs), which can handle dynamic dataflow behavior without major limitations on compile-time predictability. 4. We have demonstrated the use of dataflow-based approaches for design and implementation of radio astronomy DSP systems using an application example of a tunable digital downconverter (TDD) for spectrometers. 
Design and implementation of this module has been an integral part of this thesis work. This thesis demonstrates a design flow that consists of a high-level software prototype, analysis, and simulation using the dataflow interchange format (DIF) tool, and integration of this design with the existing tool flow for the target implementation on an FPGA platform, called interconnect break-out board (IBOB). We have also explored the trade-off between low hardware cost for fixed configurations of digital downconverters and flexibility offered by TDD designs. 5. This thesis has contributed significantly to the development and release of the latest version of a graph package oriented toward models of computation (MoCGraph). Our enhancements to this package include support for tree data structures, and generalized schedule trees (GSTs), which provide a useful data structure for a wide variety of schedule representations. Our extensions to the MoCGraph package provided key support for the CFDF model, and functional simulation capabilities in the DIF package.

  3. No Project Exists In A Vacuum: Organizational Effects In Enterprise Information System Development

    DTIC Science & Technology

    2014-06-01

    development projects may take too much time, or even fail, if [senior management] commitment is erratic” ( Newman & Sabherwal, 1996, p. 24) and...commitment is clearly important to the success of IS development projects” ( Newman & Sabherwal, 1996, p. 23). Not only should decision makers establish...another example, functionality in the parent program could be curtailed due to funding priorities, affecting schedule and functionality in

  4. CINRG: Infrastructure for Clinical Trials in Duchenne Dystrophy

    DTIC Science & Technology

    2013-09-01

    research centers sharing a common goal of improving the quality of life of neuromuscular disease patients by cooperative planning , implementation, analysis...discrepancies were noted surrounding the regulatory binder, consent process and data records. The site personnel have a scheduled plan with the CINRG CC...missing long term follow-up visits). The project manager also reviewed the study records to date while onsite. A high priority plan was developed with the

  5. KSC-02pd1890

    NASA Image and Video Library

    2002-12-09

    KENNEDY SPACE CENTER, FLA. -- Space Shuttle Columbia, atop the Mobile Launcher Platform, approaches the top of Launch Pad 39A where it will undergo preparations for launch. The STS-107 research mission comprises experiments ranging from material sciences to life sciences, plus the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. Mission STS-107 is scheduled to launch Jan. 16, 2003.

  6. Joint Cross-Layer Design for Wireless QoS Content Delivery

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Lv, Tiejun; Zheng, Haitao

    2005-12-01

    In this paper, we propose a joint cross-layer design for wireless quality-of-service (QoS) content delivery. Central to our proposed cross-layer design is the concept of adaptation. Adaptation represents the ability to adjust protocol stacks and applications to respond to channel variations. We focus our cross-layer design especially on the application, media access control (MAC), and physical layers. The network is designed based on our proposed fast frequency-hopping orthogonal frequency division multiplexing (OFDM) technique. We also propose a QoS-aware scheduler and a power adaptation transmission scheme operating at both the base station and the mobile sides. The proposed MAC scheduler coordinates the transmissions of an IP base station and mobile nodes. The scheduler also selects appropriate transmission formats and packet priorities for individual users based on current channel conditions and the users' QoS requirements. The test results show that our cross-layer design provides an excellent framework for wireless QoS content delivery.

  7. Planning for rover opportunistic science

    NASA Technical Reports Server (NTRS)

    Gaines, Daniel M.; Estlin, Tara; Forest, Fisher; Chouinard, Caroline; Castano, Rebecca; Anderson, Robert C.

    2004-01-01

    The Mars Exploration Rover Spirit recently set a record for the furthest distance traveled in a single sol on Mars. Future planetary exploration missions are expected to use even longer drives to position rovers in areas of high scientific interest. This increase provides the potential for a large rise in the number of new science collection opportunities as the rover traverses the Martian surface. In this paper, we describe the OASIS system, which provides autonomous capabilities for dynamically identifying and pursuing these science opportunities during long-range traverses. OASIS uses machine learning and planning and scheduling techniques to address this goal. Machine learning techniques are applied to analyze data as it is collected and to quickly determine new science goals and priorities on these goals. Planning and scheduling techniques are used to alter the behavior of the rover so that new science measurements can be performed while still obeying resource and other mission constraints. We will introduce OASIS and describe how planning and scheduling algorithms support opportunistic science.

  8. Saturn Apollo Program

    NASA Image and Video Library

    1961-02-04

    The Saturn project was approved on January 18, 1960 as a program of the highest national priority. The formal test program to prove out the clustered-booster concept was well underway. A series of static tests of the Saturn I booster (S-I stage) began June 3, 1960 at the Marshall Space Flight Center (MSFC). This photograph depicts the Saturn I S-I stage equipped with eight H-1 engines, being successfully test-fired on February 4, 1961. A Juno rocket is visible on the right side of the test stand.

  9. Figure-ground segregation modulates apparent motion.

    PubMed

    Ramachandran, V S; Anstis, S

    1986-01-01

    We explored the relationship between figure-ground segmentation and apparent motion. Results suggest that: static elements in the surround can eliminate apparent motion of a cluster of dots in the centre, but only if the cluster and surround have similar "grain" or texture; outlines that define occluding surfaces are taken into account by the motion mechanism; the brain uses a hierarchy of precedence rules in attributing motion to different segments of the visual scene. Being designated as "figure" confers a high rank in this scheme of priorities.

  10. Method for providing real-time control of a gaseous propellant rocket propulsion system

    NASA Technical Reports Server (NTRS)

    Morris, Brian G. (Inventor)

    1991-01-01

    The new and improved methods and apparatus disclosed provide effective real-time management of a spacecraft rocket engine powered by gaseous propellants. Real-time measurements representative of the engine performance are compared with predetermined standards to selectively control the supply of propellants to the engine for optimizing its performance as well as efficiently managing the consumption of propellants. A priority system is provided for achieving effective real-time management of the propulsion system by first regulating the propellants to keep the engine operating at an efficient level and thereafter regulating the consumption ratio of the propellants. A lower priority level is provided to balance the consumption of the propellants so significant quantities of unexpended propellants will not be left over at the end of the scheduled mission of the engine.

  11. A novel downlink scheduling strategy for traffic communication system based on TD-LTE technology.

    PubMed

    Chen, Ting; Zhao, Xiangmo; Gao, Tao; Zhang, Licheng

    2016-01-01

    Many existing classical scheduling algorithms achieve good system throughput and user fairness; however, they are not designed for the traffic transportation environment and cannot consider whether the transmission performance of the various information flows meets the comprehensive requirements of traffic safety and delay tolerance. This paper proposes a novel downlink scheduling strategy for a traffic communication system based on TD-LTE technology, which performs two classification mappings for the various information flows in the eNodeB: firstly, every information flow packet is associated with a traffic safety importance weight according to its relevance to traffic safety; secondly, every traffic information flow is associated with a service type importance weight according to its quality of service (QoS) requirements. Once the connection is established, at every scheduling moment the scheduler periodically decides the scheduling order of all buffers' head-of-line packets according to the instantaneous value of a scheduling importance weight function calculated by the proposed algorithm. Simulations of different scenarios verify that the proposed algorithm provides superior differentiated transmission service and reliable QoS guarantees to information flows with different traffic safety levels and service types, making it more suitable for the traffic transportation environment than the existing popular proportional fair (PF) algorithm. With limited wireless resources, information flows closely related to traffic safety always obtain scheduling priority in a timely manner, which helps make passengers' journeys safer. Moreover, the proposed algorithm can not only achieve flow throughput and user fairness almost equal to those of the PF algorithm, but also provide better real-time transmission guarantees for real-time information flows.
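
    As a hedged sketch of such a weight function, the snippet below combines a traffic safety weight, a service-type weight, waiting delay against a delay budget, and a proportional-fair term built from past average throughput and instantaneous rate. The functional form, coefficients, and flow names are assumptions chosen for illustration only, not the formula used in the paper.

```python
# Illustrative scheduling-importance weight for head-of-line packets.

def importance(safety_w, service_w, delay_ms, delay_budget_ms, avg_thr, inst_rate):
    urgency = min(delay_ms / delay_budget_ms, 1.0)    # grows as the deadline nears
    pf_term = inst_rate / max(avg_thr, 1e-6)          # proportional-fair component
    return safety_w * service_w * (1.0 + urgency) * pf_term

flows = {
    "collision-warning": importance(5.0, 2.0, 40,   50, 1.0e6, 2.0e6),
    "traffic-info":      importance(2.0, 1.5, 10,  300, 3.0e6, 2.5e6),
    "infotainment":      importance(1.0, 1.0, 80, 1000, 6.0e6, 7.0e6),
}
for name, w in sorted(flows.items(), key=lambda kv: -kv[1]):   # serve largest weight first
    print(f"{name:<18} weight {w:.2f}")
```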

  12. Game Schedules and Rate of Concussions in the National Football League.

    PubMed

    Teramoto, Masaru; Cushman, Daniel M; Cross, Chad L; Curtiss, Heather M; Willick, Stuart E

    2017-11-01

    Concussion prevention in the National Football League (NFL) is an important priority for player safety. The NFL now has modified game schedules, and one concern is that unconventional game schedules, such as a shortened rest period due to playing on a Thursday rather than during the weekend, may lead to an increased risk of injuries. Unconventional game schedules in the NFL are associated with an increased rate of concussion. Descriptive epidemiological study. This study analyzed concussions and game schedules over the NFL regular seasons from 2012 to 2015 (4 years). Documented numbers of concussions, identified by use of the online database PBS Frontline Concussion Watch, were summarized by regular-season weeks. Association of days of rest and game location (home, away, or overseas) with the rate of concussion was examined by use of the χ2 test. Logistic regression analysis was performed to examine the relationships of days of rest and home/away games to the risk of repeated concussions, with adjustment for player position. A total of 582 concussions were analyzed in this study. A significantly greater number of concussions occurred in the second half of the season (P < .01). No significant association was found between the rate of concussion and the days of rest, game location, or timing of the bye week by the team or the opponent (P > .05). Game schedules were not significantly associated with the occurrence of repeat concussions (P > .05). Unconventional game schedules in the NFL, including playing on Thursday and playing overseas, do not seem to put players at increased risk of concussions.

  13. Dynamic Hop Service Differentiation Model for End-to-End QoS Provisioning in Multi-Hop Wireless Networks

    NASA Astrophysics Data System (ADS)

    Youn, Joo-Sang; Seok, Seung-Joon; Kang, Chul-Hee

    This paper presents a new QoS model for end-to-end service provisioning in multi-hop wireless networks. In legacy IEEE 802.11e based multi-hop wireless networks, the fixed assignment of service classes according to a flow's priority at every node causes a priority inversion problem when performing end-to-end service differentiation. Thus, this paper proposes a new QoS provisioning model called Dynamic Hop Service Differentiation (DHSD) to alleviate the problem and support effective service differentiation between end-to-end nodes. Many previous works on QoS models based on 802.11e service differentiation focus on packet scheduling over several service queues with different service rates and service priorities. Our model, however, concentrates on a dynamic class selection scheme, called Per Hop Class Assignment (PHCA), in the node's MAC layer, which selects a proper service class for each packet, in accordance with queue states and service requirements, in every node along the end-to-end route of the packet. The proposed QoS solution is evaluated using the OPNET simulator. The simulation results show that the proposed model outperforms both best-effort and 802.11e based strict priority service models in mobile ad hoc environments.

  14. Analysis of Lean Initiatives in the Production of Naval Aviators

    DTIC Science & Technology

    2012-09-01

    originated in the 1950s with an engineer named Eji Toyoda, and a production genius Taiichi Ohno at Toyota in Japan. Toyoda and Ohno are credited with...to flow wastes must be eliminated. Taiichi Ohno, the designer of the Toyota Production System, was obsessed with making materials flow and to...pooling between phases or blocks.  Producing quality defects . Delays (waiting).–Varying scheduling priorities for students during different blocks of

  15. KSC-02pd1885

    NASA Image and Video Library

    2002-12-09

    KENNEDY SPACE CENTER, FLA. -- Space Shuttle Columbia rolls towards Launch Pad 39A, sitting atop the Mobile Launcher Platform, which in turn is carried by the crawler-transporter underneath. The STS-107 research mission comprises experiments ranging from material sciences to life sciences (many rats), plus the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. Mission STS-107 is scheduled to launch Jan. 16, 2003.

  16. A Decentralized Scheduling Policy for a Dynamically Reconfigurable Production System

    NASA Astrophysics Data System (ADS)

    Giordani, Stefano; Lujak, Marin; Martinelli, Francesco

    In this paper, the static layout of a traditional multi-machine factory producing a set of distinct goods is integrated with a set of mobile production units - robots. The robots dynamically change their work position to increase the production rate of the different product types in response to fluctuations in demand and production costs over a given time horizon. Assuming that the planning time horizon is subdivided into a finite number of time periods, this particularly flexible layout requires the definition and solution of a complex scheduling problem, involving, for each period of the planning horizon, the determination of the positions of the robots, i.e., their assignment to the respective tasks, in order to minimize production costs given the product demand rates during the planning time horizon.

  17. Alphabus Mechanical Validation Plan and Test Campaign

    NASA Astrophysics Data System (ADS)

    Calvisi, G.; Bonnet, D.; Belliol, P.; Lodereau, P.; Redoundo, R.

    2012-07-01

    A joint team of the two leading European satellite companies (Astrium and Thales Alenia Space) worked with the support of ESA and CNES to define a product line able to efficiently address the upper segment of communications satellites: Alphabus. From 2009 to 2011, the mechanical validation of the Alphabus platform was achieved through static tests performed on a dedicated static model and environmental tests performed on the first satellite based on Alphabus: Alphasat I-XL. The mechanical validation of the Alphabus platform presented an excellent opportunity to improve the validation and qualification process with respect to the static, sine vibration, acoustic and L/V shock environments, minimizing the recurrent cost of manufacturing, integration and testing. A main driver for mechanical testing is that mechanical acceptance testing at satellite level will be performed with empty tanks due to technical constraints (limitations of existing vibration devices) and programmatic advantages (test risk reduction, test schedule minimization). In this paper, the impacts of this testing logic on the validation plan are briefly recalled, and its application to the Alphasat PFM mechanical test campaign is detailed.

  18. Reliability Constrained Priority Load Shedding for Aerospace Power System Automation

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Zhu, Jizhong; Kaddah, Sahar S.; Dolce, James L. (Technical Monitor)

    2000-01-01

    The need to improve load shedding on board the space station is one of the goals of aerospace power system automation. To accelerate the optimum load-shedding functions, several constraints must be involved. These constraints include the congestion margin determined by weighted probability contingencies, component/system reliability indices, and generation rescheduling. The impact of different faults and the indices for computing reliability were defined before optimization. The optimum load schedule is computed based on the priority, value and location of loads. An optimization strategy capable of handling discrete decision making, such as Everett optimization, is proposed. We extended Everett's method to handle the expected congestion margin and reliability index as constraints. To make it effective for the real-time load dispatch process, a rule-based scheme is incorporated into the optimization method. It assists in selecting which feeder load to shed, along with the location, value and priority of the load; a cost-benefit analysis of the load profile is also included in the scheme. The scheme is tested using a benchmark NASA system consisting of generators, loads and a network.
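
    The snippet below is a much-simplified, illustrative greedy shedding pass: it drops the lowest-priority, lowest-value loads until an assumed overload is covered. The load list, its fields, and the greedy ordering are invented for illustration; they stand in for, but do not reproduce, the Everett-style optimization and reliability constraints described above.

```python
# Greedy priority/value-based load shedding (illustrative only).

def shed(loads, overload_kw):
    """loads: list of dicts with name, kw, priority (1 = most critical), value."""
    plan, remaining = [], overload_kw
    # Lowest priority first (largest priority number), then lowest value first.
    for ld in sorted(loads, key=lambda l: (-l["priority"], l["value"])):
        if remaining <= 0:
            break
        plan.append(ld["name"])
        remaining -= ld["kw"]
    return plan, max(remaining, 0.0)

loads = [
    {"name": "life support", "kw": 4.0, "priority": 1, "value": 100},
    {"name": "experiment A", "kw": 2.5, "priority": 3, "value": 20},
    {"name": "lighting",     "kw": 1.0, "priority": 2, "value": 30},
    {"name": "experiment B", "kw": 3.0, "priority": 3, "value": 10},
]
plan, still_over = shed(loads, overload_kw=4.0)
print("shed:", plan, "| unresolved overload:", still_over, "kW")
```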

  19. Analysis of Factors for Incorporating User Preferences in Air Traffic Management: A system Perspective

    NASA Technical Reports Server (NTRS)

    Sheth, Kapil S.; Gutierrez-Nolasco, Sebastian

    2010-01-01

    This paper presents an analysis of factors that impact user flight schedules during air traffic congestion. In pre-departure flight planning, users file one route per flight, which often leads to increased delays, inefficient airspace utilization, and exclusion of user flight preferences. In this paper, first the idea of filing alternate routes and providing priorities on each of those routes is introduced. Then, the impact of varying the planning interval and the system-imposed departure delay increment is discussed. The metrics of total delay and equity are used for analyzing the impact of these factors on increased traffic and on different users. The results are shown for four cases, with and without the optional routes and priority assignments. Results demonstrate that adding priorities to optional routes further improves system performance compared to filing one route per flight and using a first-come first-served scheme. It was also observed that a two-hour planning interval with a five-minute system-imposed departure delay increment results in the highest delay reduction. The trend holds for a scenario with increased traffic.

  20. How Home Health Nurses Plan Their Work Schedules: A Qualitative Descriptive Study.

    PubMed

    Irani, Elliane; Hirschman, Karen B; Cacchione, Pamela Z; Bowles, Kathryn H

    2018-06-12

    To describe how home health nurses plan their daily work schedules and what challenges they face during the planning process. Home health nurses are viewed as independent providers and value the nature of their work because of the flexibility and autonomy they hold in developing their work schedules. However, there is limited empirical evidence about how home health nurses plan their work schedules, including the factors they consider during the process and the challenges they face within the dynamic home health setting. Qualitative descriptive design. Semi-structured interviews were conducted with 20 registered nurses who had greater than 2 years of experience in home health and were employed by one of the three participating home health agencies in the mid-Atlantic region of the United States. Data were analyzed using conventional content analysis. Four themes emerged about planning work schedules and daily itineraries: identifying patient needs to prioritize visits accordingly, partnering with patients to accommodate their preferences, coordinating visit timing with other providers to avoid overwhelming patients, and working within agency standards to meet productivity requirements. Scheduling challenges included readjusting the schedule based on patient needs and staffing availability, anticipating longer visits, and maintaining continuity of care with patients. Home health nurses make autonomous decisions regarding their work schedules while considering specific patient and agency factors, and overcome challenges related to the unpredictable nature of providing care in a home health setting. Future research is needed to further explore nurse productivity in home health and improve home health work environments. Home health nurses plan their work schedules to provide high quality care that is patient-centered and timely. The findings also highlight organizational priorities to facilitate continuity of care and support nurses while alleviating the burnout associated with high productivity requirements. This article is protected by copyright. All rights reserved.

  1. Advanced spacecraft fire safety: Proposed projects and program plan

    NASA Technical Reports Server (NTRS)

    Youngblood, Wallace W.; Vedha-Nayagam, M.

    1989-01-01

    A detailed review identifies spacecraft fire safety issues and the efforts for their resolution, particularly for the threats posed by the increased on-orbit duration, size, and complexity of the Space Station Freedom. Suggestions provided by a survey of Wyle consultants and outside fire safety experts were combined into 30 research and engineering projects. The projects were then prioritized with respect to urgency to meet Freedom design goals, status of enabling technology, cost, and so on, to yield 14 highest priority projects, described in terms of background, work breakdown structure, and schedule. These highest priority projects can be grouped into the thematic areas of fire detection, fire extinguishment, risk assessment, toxicology and human effects, and ground based testing. Recommendations for overall program management stress the need for NASA Headquarters and field center coordination, with information exchange through spacecraft fire safety oversight committees.

  2. StarBase: A Firm Real-Time Database Manager for Time-Critical Applications

    DTIC Science & Technology

    1995-01-01

    Mellon University [10]. StarBase differs from previous RT-DBMS work [1, 2, 3] in that a) it relies on a real-time operating system which provides...simulation studies, StarBase uses a real-time operating system to provide basic real-time functionality and deals with issues beyond transaction...resource scheduling provided by the underlying real-time operating system. Issues of data contention are dealt with by use of a priority

  3. Temporal Proof Methodologies for Real-Time Systems,

    DTIC Science & Technology

    1990-09-01

    real-time systems that communicate either through shared variables or by message passing, and real-time issues such as time-outs, process priorities (interrupts) and process scheduling. The authors exhibit two styles for the specification of real-time systems. While the first approach uses bounded versions of temporal operators, the second approach allows explicit references to time through a special clock variable. Corresponding to the two styles of specification, the authors present and compare two fundamentally different proof

  4. Phoenix, a High-Performance UNIX with an Emphasis on Dynamic Modification, Real-Time Response and Survivability

    DTIC Science & Technology

    1990-01-12

    Communications COM-28, (April 1980) 425-432. [6] Munch-Anderson, B. and T.U. Zahle, Scheduling According to Job Priority With Prevention of Deadlock and...Interconnection, IEEE Transactions on Communications COM-28, (April 1980) 425-432. [4] Cook, R.P., StarMod, A Language for Distributed Programming...

  5. European Software Engineering Process Group Conference (2nd Annual), EUROPEAN SEPG󈨥. Delegate Material, Tutorials

    DTIC Science & Technology

    1997-06-17

    There is Good and Bad News With CMMs8 *bad news: process improvement takes time *good news: the first benefit Is better schedule management With PSP s...e g similar supp v EURO not sudden death toolset for assessment and v EURO => Business benefits detailed analysis) . EURO could collapse (low risk...from SPI live on even after year 2000. Priority BENEFITS Actions * Improved management and application development processes * Strengthened Change

  6. Progress on plutonium stabilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurt, D.

    1996-05-01

    The Defense Nuclear Facilities Safety Board has safety oversight responsibility for most of the facilities where unstable forms of plutonium are being processed and packaged for interim storage. The Board has issued recommendations on plutonium stabilization and has had a considerable influence on DOE's stabilization schedules and priorities. The Board has not made any recommendations on long-term plutonium disposition, although it may get more involved in the future if DOE develops plans to use defense nuclear facilities for disposition activities.

  7. Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide

    DTIC Science & Technology

    2016-07-27

    a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in...installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely...assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and

  8. Advancing the LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group

    2013-01-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focussed on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.

  9. Towards Evolving Electronic Circuits for Autonomous Space Applications

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    2000-01-01

    The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.

  10. The effect of dynamic scheduling and routing in a solid waste management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johansson, Ola M.

    2006-07-01

    Solid waste collection and hauling account for the greater part of the total cost in modern solid waste management systems. In a recent initiative, 3300 Swedish recycling containers have been fitted with level sensors and wireless communication equipment, thereby giving waste collection operators access to real-time information on the status of each container. In this study, analytical modeling and discrete-event simulation have been used to evaluate different scheduling and routing policies utilizing the real-time data. In addition to the general models developed, an empirical simulation study has been performed on the downtown recycling station system in Malmoe, Sweden. From the study, it can be concluded that dynamic scheduling and routing policies exist that have lower operating costs, shorter collection and hauling distances, and reduced labor hours compared to the static policy with fixed routes and pre-determined pick-up frequencies employed by many waste collection operators today. The results of the analytical model and the simulation models are coherent, and consistent with experiences of the waste collection operators.
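
    As a toy illustration of the difference between the policies compared in the study, the snippet below contrasts visiting every container on a fixed schedule with visiting only those whose sensor reports a fill level above a threshold. The container count, random fill levels, and 70% threshold are invented; the study's cost model and routing are not reproduced.

```python
# Fixed-frequency pick-ups vs. a sensor-driven, threshold-based policy (toy example).
import random

random.seed(1)
fill = [random.uniform(0.1, 0.95) for _ in range(30)]    # today's reported fill levels

static_visits  = len(fill)                                # fixed routes: visit every container
dynamic_visits = sum(level >= 0.7 for level in fill)      # visit only nearly full containers

print(f"static policy : {static_visits} stops")
print(f"dynamic policy: {dynamic_visits} stops "
      f"({1 - dynamic_visits / static_visits:.0%} fewer)")
```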

  11. A FairShare Scheduling Service for OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Vallero, S.; Zaccolo, V.

    2017-10-01

    In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a Cloud driven only by their functional requirements. While a large Public Cloud may be a reasonable approximation of this condition, small scientific computing centres usually work in a saturated regime. In this case, an advanced resource allocation policy is needed in order to optimize the use of the data centre. The general topic of advanced resource scheduling is addressed by several components of the EU-funded INDIGO-DataCloud project. In this contribution, we describe the FairShare Scheduler Service (FaSS) for OpenNebula (ONE). The service must satisfy resource requests according to an algorithm which prioritizes tasks based on an initial weight and on the historical resource usage of the project. The software was designed to be as unintrusive as possible in the ONE code: we keep the original ONE scheduler implementation to match requests to available resources, but the queue of pending jobs is ordered according to the priorities delivered by FaSS. The FaSS implementation is still being finalized, and in this contribution we describe the functional and design requirements the module should satisfy, as well as its high-level architecture.
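
    The fair-share idea described above, a priority computed from an initial project weight and the project's historical resource usage, can be sketched as follows; the formula, decay constant, and project names are illustrative assumptions and not the actual FaSS implementation.

      # Hypothetical fair-share priority: initial weight discounted by historical usage.
      def fair_share_priority(initial_weight, historical_usage, decay=0.5):
          # Heavier past usage lowers priority; the decay constant tunes how strongly.
          return initial_weight / (1.0 + decay * historical_usage)

      # Pending requests: (project, initial_weight, historical_usage_in_hours)
      pending = [("astro", 1.0, 120.0), ("bio", 1.0, 10.0), ("hep", 2.0, 300.0)]

      # Order the pending queue by descending fair-share priority instead of entry time.
      queue = sorted(pending, key=lambda p: fair_share_priority(p[1], p[2]), reverse=True)
      for project, weight, used in queue:
          print(project, round(fair_share_priority(weight, used), 3))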

  12. A novel LTE scheduling algorithm for green technology in smart grid.

    PubMed

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER), and electrical vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on each application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput, and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay, and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF), and exponential/PF (EXP/PF), respectively.

  13. A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid

    PubMed Central

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. The SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER), and electrical vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on each application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput, and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay, and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF), and exponential/PF (EXP/PF), respectively. PMID:25830703
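
    The dynamic multi-criteria weighting described above can be illustrated with a toy per-flow metric that combines head-of-line delay, past average throughput, and instantaneous transmission rate; the functional form, weights, and numbers below are assumptions for illustration, not the paper's algorithm.

      # Illustrative multi-criteria scheduling metric per flow (not the paper's exact formula).
      def metric(hol_delay, delay_budget, past_avg_throughput, instantaneous_rate,
                 w_delay=1.0, w_rate=1.0):
          # Urgency grows as the head-of-line delay approaches the delay budget; the rate
          # term favors flows with good channels relative to what they have received so far.
          urgency = w_delay * (hol_delay / delay_budget)
          efficiency = w_rate * (instantaneous_rate / max(past_avg_throughput, 1e-9))
          return urgency + efficiency

      flows = {
          "DA":  dict(hol_delay=80.0, delay_budget=100.0, past_avg_throughput=1.2, instantaneous_rate=2.0),
          "DER": dict(hol_delay=20.0, delay_budget=300.0, past_avg_throughput=0.8, instantaneous_rate=1.5),
          "EV":  dict(hol_delay=10.0, delay_budget=500.0, past_avg_throughput=2.5, instantaneous_rate=2.5),
      }

      # Assign the next resource block to the flow with the highest metric.
      winner = max(flows, key=lambda f: metric(**flows[f]))
      print("scheduled flow:", winner)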

  14. Model Checking Real Time Java Using Java PathFinder

    NASA Technical Reports Server (NTRS)

    Lindstrom, Gary; Mehlitz, Peter C.; Visser, Willem

    2005-01-01

    The Real Time Specification for Java (RTSJ) is an augmentation of Java for real time applications of various degrees of hardness. The central features of RTSJ are real time threads; user defined schedulers; asynchronous events, handlers, and control transfers; a priority inheritance based default scheduler; non-heap memory areas such as immortal and scoped, and non-heap real time threads whose execution is not impeded by garbage collection. The Robust Software Systems group at NASA Ames Research Center has JAVA PATHFINDER (JPF) under development, a Java model checker. JPF at its core is a state exploring JVM which can examine alternative paths in a Java program (e.g., via backtracking) by trying all nondeterministic choices, including thread scheduling order. This paper describes our implementation of an RTSJ profile (subset) in JPF, including requirements, design decisions, and current implementation status. Two examples are analyzed: jobs on a multiprogramming operating system, and a complex resource contention example involving autonomous vehicles crossing an intersection. The utility of JPF in finding logic and timing errors is illustrated, and the remaining challenges in supporting all of RTSJ are assessed.

  15. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Although simple rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objectives of the problem are minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results show that the composite dispatching rules formed by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
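
    A composite dispatching rule of the general kind discussed above can be sketched as a weighted combination of slack, processing time, and the sequence-dependent setup from the job currently on the machine; the weights and data below are illustrative assumptions, not the rules evolved by the genetic programming framework.

      # Illustrative composite dispatching rule for a flow line with sequence-dependent setups.
      # Priority mixes slack (due-date urgency), processing time, and the setup needed after
      # the job currently on the machine; lower score means dispatch first.
      def composite_priority(job, now, current_job, setup, w_slack=1.0, w_proc=0.5, w_setup=0.8):
          slack = job["due"] - now - job["proc"]
          return w_slack * slack + w_proc * job["proc"] + w_setup * setup[(current_job, job["id"])]

      jobs = [
          {"id": "J1", "proc": 4.0, "due": 20.0},
          {"id": "J2", "proc": 2.0, "due": 10.0},
          {"id": "J3", "proc": 6.0, "due": 30.0},
      ]
      # Sequence-dependent setup times from the job currently loaded ("J0") to each candidate.
      setup = {("J0", "J1"): 1.0, ("J0", "J2"): 3.0, ("J0", "J3"): 0.5}

      now, current = 0.0, "J0"
      next_job = min(jobs, key=lambda j: composite_priority(j, now, current, setup))
      print("dispatch next:", next_job["id"])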

  16. Enabling Autonomous Rover Science through Dynamic Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Estlin, Tara A.; Gaines, Daniel; Chouinard, Caroline; Fisher, Forest; Castano, Rebecca; Judd, Michele; Nesnas, Issa

    2005-01-01

    This paper describes how dynamic planning and scheduling techniques can be used onboard a rover to autonomously adjust rover activities in support of science goals. These goals could be identified by scientists on the ground or could be identified by onboard data-analysis software. Several different types of dynamic decisions are described, including the handling of opportunistic science goals identified during rover traverses, preserving high priority science targets when resources, such as power, are unexpectedly over-subscribed, and dynamically adding additional, ground-specified science targets when rover actions are executed more quickly than expected. After describing our specific system approach, we discuss some of the particular challenges we have examined to support autonomous rover decision-making. These include interaction with rover navigation and path-planning software and handling large amounts of uncertainty in state and resource estimations.

  17. Static and Dynamical Structural Investigations of Metal-Oxide Nanocrystals by Powder X-ray Diffraction: Colloidal Tungsten Oxide as a Case Study

    DOE PAGES

    Caliandro, Rocco; Sibillano, Teresa; Belviso, B. Danilo; ...

    2016-02-02

    In this study, we have developed a general X-ray powder diffraction (XPD) methodology for the simultaneous structural and compositional characterization of inorganic nanomaterials. The approach is validated on colloidal tungsten oxide nanocrystals (WO3-x NCs), as a model polymorphic nanoscale material system. Rod-shaped WO3-x NCs with different crystal structure and stoichiometry are comparatively investigated under an inert atmosphere and after prolonged air exposure. An initial structural model for the as-synthesized NCs is preliminarily identified by means of Rietveld analysis against several reference crystal phases, followed by atomic pair distribution function (PDF) refinement of the best-matching candidates (static analysis). Subtle stoichiometry deviations from the corresponding bulk standards are revealed. NCs exposed to air at room temperature are monitored by XPD measurements at scheduled time intervals. The static PDF analysis is complemented with an investigation into the evolution of the WO3-x NC structure, performed by applying the modulation enhanced diffraction technique to the whole time series of XPD profiles (dynamical analysis). Prolonged contact with ambient air is found to cause an appreciable increase in the static disorder of the O atoms in the WO3-x NC lattice, rather than a variation in stoichiometry. Finally, the time behavior of such structural change is identified on the basis of multivariate analysis.

  18. Programming with Intervals

    NASA Astrophysics Data System (ADS)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
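
    As a point of reference for one of the constructs the abstract says intervals can emulate, the following is a plain-threads sketch of the bounded-buffer producer-consumer pattern; it does not use the Intervals library API, which is available for Java and Scala.

      # Generic bounded-buffer producer-consumer pattern (the kind of construct the
      # interval primitive can emulate); this is not the Intervals library API.
      import threading
      import queue

      buffer = queue.Queue(maxsize=3)   # bounded buffer
      SENTINEL = object()

      def producer():
          for item in range(10):
              buffer.put(item)          # blocks when the buffer is full
          buffer.put(SENTINEL)

      def consumer():
          while True:
              item = buffer.get()       # blocks when the buffer is empty
              if item is SENTINEL:
                  break
              print("consumed", item)

      threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()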

  19. Evidence gaps in advanced cancer care: community-based clinicians' perspectives and priorities for CER.

    PubMed

    Lowry, Sarah J; Loggers, Elizabeth T; Bowles, Erin J A; Wagner, Edward H

    2012-05-01

    Although much effort has focused on identifying national comparative effectiveness research (CER) priorities, little is known about the CER priorities of community-based practitioners treating patients with advanced cancer. CER priorities of managed care-based clinicians may be valuable as reflections of both payer and provider research interests. We conducted mixed methods interviews with 10 clinicians (5 oncologists and 5 pharmacists) at 5 health plans within the Health Maintenance Organization Cancer Research Network. We asked, "What evidence do you most wish you had when treating patients with advanced cancer" and questioned participants on their impressions and knowledge of CER and pragmatic clinical trials (PCTs). We conducted qualitative analyses to identify themes across interviews. Ninety percent of participants had heard of CER, 20% had heard of PCTs, and all rated CER/PCTs as highly relevant to patient and health plan decision making. Each participant offered between 3 and 10 research priorities. Half (49%) involved head-to-head treatment comparisons; another 20% involved comparing different schedules or dosing regimens of the same treatment. The majority included alternative outcomes to survival (eg, toxicity, quality of life, noninferiority). Participants cited several limitations to existing evidence, including lack of generalizability, funding biases, and rapid development of new treatments. Head-to-head treatment comparisons remain a major evidence need among community-based oncology clinicians, and CER/PCTs are highly valued methods to address the limitations of traditional randomized trials, answer questions of cost-effectiveness or noninferiority, and inform data-driven dialogue and decision making by all stakeholders.

  20. Threatened species and the potential loss of phylogenetic diversity: conservation scenarios based on estimated extinction probabilities and phylogenetic risk analysis.

    PubMed

    Faith, Daniel P

    2008-12-01

    New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
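
    The probabilistic PD framework can be illustrated with a toy computation of expected phylogenetic diversity: each branch contributes its length multiplied by the probability that at least one descendant species persists, assuming independent extinctions. The tree and probabilities below are hypothetical.

      # Expected phylogenetic diversity (PD): each branch counts with the probability that
      # at least one descendant species persists, assuming independent extinctions.
      from math import prod

      # Branches: (length, list of descendant species); values are hypothetical.
      branches = [
          (5.0, ["A", "B"]),   # deep branch shared by A and B
          (2.0, ["A"]),
          (2.0, ["B"]),
          (7.0, ["C"]),        # distinctive lineage
      ]
      extinction = {"A": 0.9, "B": 0.2, "C": 0.6}

      def expected_pd(branches, extinction):
          total = 0.0
          for length, species in branches:
              p_all_lost = prod(extinction[s] for s in species)
              total += length * (1.0 - p_all_lost)
          return total

      print("expected PD:", expected_pd(branches, extinction))
      # Candidate conservation actions can be compared by the gain in expected PD they produce.
      print("expected PD if C is secured:", expected_pd(branches, {**extinction, "C": 0.1}))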

  1. An Optimal Schedule for Urban Road Network Repair Based on the Greedy Algorithm

    PubMed Central

    Lu, Guangquan; Xiong, Ying; Wang, Yunpeng

    2016-01-01

    The schedule of urban road network recovery after rainstorms, snow, and other bad weather conditions, traffic incidents, and other daily events is essential. However, limited studies have been conducted to investigate this problem. We fill this research gap by proposing an optimal schedule for urban road network repair with limited repair resources based on the greedy algorithm. Critical links are given priority in repair, following the basic concept of the greedy algorithm. In this study, the critical link for the current network is defined as the damaged link whose restoration minimizes the ratio of the system-wide travel time of the current network to that of the worst network. We re-evaluate the importance of the damaged links after each repair step is completed; that is, the critical-link ranking changes along with the repair process because of the interaction among links. We repair the most critical link for the specific network state based on the greedy algorithm to obtain the optimal schedule. The algorithm can still quickly obtain an optimal schedule even if the scale of the road network is large, because the greedy algorithm reduces computational complexity. We prove that the problem can be solved optimally by the greedy algorithm in theory. The algorithm is also demonstrated on the Sioux Falls network. The problem discussed in this paper is highly significant in dealing with urban road network restoration. PMID:27768732
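
    The greedy step described above can be sketched as follows; the travel-time function is a hypothetical stand-in for the traffic assignment model used in the paper.

      # Greedy repair schedule: at each step restore the damaged link that minimizes the
      # ratio of the resulting system-wide travel time to the worst-case travel time.
      # travel_time() is a hypothetical stand-in for a traffic assignment model.
      def travel_time(repaired, damage_costs, base_time=100.0):
          # Each still-damaged link adds a delay penalty to the system-wide travel time.
          return base_time + sum(cost for link, cost in damage_costs.items() if link not in repaired)

      def greedy_schedule(damage_costs):
          repaired, order = set(), []
          worst = travel_time(set(), damage_costs)
          while len(repaired) < len(damage_costs):
              candidates = [l for l in damage_costs if l not in repaired]
              # Pick the link whose repair gives the lowest current/worst travel-time ratio.
              best = min(candidates, key=lambda l: travel_time(repaired | {l}, damage_costs) / worst)
              repaired.add(best)
              order.append(best)
          return order

      damage = {"link_1": 30.0, "link_2": 55.0, "link_3": 10.0}
      print("repair order:", greedy_schedule(damage))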

  2. The DEEP-South: Scheduling and Data Reduction Software System

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction, and analysis of huge amounts of data with minimal human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects in CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at the other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.

  3. Design and evaluation of a theory-based, culturally relevant outreach model for breast and cervical cancer screening for Latina immigrants.

    PubMed

    White, Kari; Garces, Isabel C; Bandura, Lisa; McGuire, Allison A; Scarinci, Isabel C

    2012-01-01

    Breast and cervical cancer are common among Latinas, but screening rates among foreign-born Latinas are relatively low. In this article we describe the design and implementation of a theory-based (PEN-3) outreach program to promote breast and cervical cancer screening to Latina immigrants, and evaluate the program's effectiveness. We used data from self-administered questionnaires completed at six annual outreach events to examine the sociodemographic characteristics of attendees and evaluate whether the program reached the priority population - foreign-born Latina immigrants with limited access to health care and screening services. To evaluate the program's effectiveness in connecting women to screening, we examined the proportion and characteristics of women who scheduled and attended Pap smear and mammography appointments. Among the 782 Latinas who attended the outreach program, 60% and 83% had not had a Pap smear or mammogram, respectively, in at least a year. Overall, 80% scheduled a Pap smear and 78% scheduled a mammogram. Women without insurance, who did not know where to get screening and had not been screened in the last year were more likely to schedule appointments (P < .05). Among women who scheduled appointments, 65% attended their Pap smear and 79% attended the mammogram. We did not identify significant differences in sociodemographic characteristics associated with appointment attendance. Using a theoretical approach to outreach design and implementation, it is possible to reach a substantial number of Latina immigrants and connect them to cancer screening services.

  4. Fair Energy Scheduling for Vehicle-to-Grid Networks Using Adaptive Dynamic Programming.

    PubMed

    Xie, Shengli; Zhong, Weifeng; Xie, Kan; Yu, Rong; Zhang, Yan

    2016-08-01

    Research on the smart grid is being given enormous support worldwide due to its great significance in solving environmental and energy crises. Electric vehicles (EVs), which are powered by clean energy, are being adopted increasingly year by year. It is predictable that the huge charge load caused by high EV penetration will have a considerable impact on the reliability of the smart grid. Therefore, fair energy scheduling for EV charge and discharge is proposed in this paper. By using vehicle-to-grid technology, the scheduler controls the electricity loads of EVs considering fairness in the residential distribution network. We propose contribution-based fairness, in which EVs with high contributions have high priority to obtain charge energy. The contribution value is defined by both the charge/discharge energy and the timing of the action. EVs can achieve higher contribution values by discharging during the load peak hours; however, charging during this time decreases the contribution value considerably. We formulate the fair energy scheduling problem as an infinite-horizon Markov decision process. The methodology of adaptive dynamic programming is employed to maximize the long-term fairness through online network training. The numerical results illustrate that the proposed EV energy scheduling is able to mitigate and flatten the peak load in the distribution network. Furthermore, contribution-based fairness achieves a fast recovery of EV batteries that have been deeply discharged and guarantees fairness in the full charge time of all EVs.
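
    A toy version of the contribution-based fairness idea is sketched below: discharging during peak hours increases an EV's contribution and hence its charging priority, while charging during peak hours decreases it. The peak window, weights, and actions are illustrative assumptions.

      # Illustrative contribution value for V2G fairness: discharging during peak hours earns
      # contribution, charging during peak hours costs more; constants are assumptions.
      PEAK_HOURS = set(range(18, 22))   # 18:00-21:00

      def contribution_delta(energy_kwh, hour, charging):
          weight = 2.0 if hour in PEAK_HOURS else 1.0   # peak-time actions weigh more
          return (-weight if charging else +weight) * energy_kwh

      # A day of actions: (ev_id, energy_kwh, hour, charging?)
      actions = [("ev1", 3.0, 19, False), ("ev2", 3.0, 19, True), ("ev3", 3.0, 10, True)]
      contribution = {}
      for ev, kwh, hour, charging in actions:
          contribution[ev] = contribution.get(ev, 0.0) + contribution_delta(kwh, hour, charging)

      # EVs with higher contribution get charge energy first when capacity is scarce.
      charge_order = sorted(contribution, key=contribution.get, reverse=True)
      print(contribution, "-> charge order:", charge_order)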

  5. Co-evolution for Problem Simplification

    NASA Technical Reports Server (NTRS)

    Haith, Gary L.; Lohn, Jason D.; Colombano, Silvano P.; Stassinopoulos, Dimitris

    1999-01-01

    This paper explores a co-evolutionary approach applicable to difficult problems with limited failure/success performance feedback. Like familiar "predator-prey" frameworks, this algorithm evolves two populations of individuals: the solutions (predators) and the problems (prey). The approach extends previous work by rewarding only the problems that match their difficulty to the level of solution competence. In complex problem domains with limited feedback, this "tractability constraint" helps provide an adaptive fitness gradient that effectively differentiates the candidate solutions. The algorithm generates selective pressure toward the evolution of increasingly competent solutions by rewarding solution generality and uniqueness and problem tractability and difficulty. Relative (inverse-fitness) and absolute (static objective function) approaches to evaluating problem difficulty are explored and discussed. On a simple control task, this co-evolutionary algorithm was found to have significant advantages over a genetic algorithm with either a static fitness function or a fitness function that changes on a hand-tuned schedule.

  6. Precise and Scalable Static Program Analysis of NASA Flight Software

    NASA Technical Reports Server (NTRS)

    Brat, G.; Venet, A.

    2005-01-01

    Recent NASA mission failures (e.g., Mars Polar Lander and Mars Orbiter) illustrate the importance of having an efficient verification and validation process for such systems. One software error, as simple as it may be, can cause the loss of an expensive mission, or lead to budget overruns and crunched schedules. Unfortunately, traditional verification methods cannot guarantee the absence of errors in software systems. Therefore, we have developed the CGS static program analysis tool, which can exhaustively analyze large C programs. CGS analyzes the source code and identifies statements in which arrays are accessed out of bounds, or pointers are used outside the memory region they should address. This paper gives a high-level description of CGS and its theoretical foundations. It also reports on the use of CGS on real NASA software systems used in Mars missions (from Mars PathFinder to Mars Exploration Rover) and on the International Space Station.

  7. STS-107 Columbia rollout to Launch Pad 39A

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. -- Space Shuttle Columbia, framed by trees near the Banana River, rolls towards Launch Pad 39A, sitting atop the Mobile Launcher Platform, which in turn is carried by the crawler-transporter underneath. The STS-107 research mission comprises experiments ranging from material sciences to life sciences (many rats), plus the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. Mission STS-107 is scheduled to launch Jan. 16, 2003.

  8. Launch team training system

    NASA Technical Reports Server (NTRS)

    Webb, J. T.

    1988-01-01

    A new approach to the training, certification, recertification, and proficiency maintenance of the Shuttle launch team is proposed. Previous training approaches are first reviewed. Short term program goals include expanding current training methods, improving the existing simulation capability, and scheduling training exercises with the same priority as hardware tests. Long-term goals include developing user requirements which would take advantage of state-of-the-art tools and techniques. Training requirements for the different groups of people to be trained are identified, and future goals are outlined.

  9. Using Animated Computer Simulation to Determine the Optimal Resource Support for the Endodontic Specialty Practice at Fort Lewis.

    DTIC Science & Technology

    1998-03-01

    [Simulation model table excerpt: patient entity definitions (endodontic and periodontal exam, treatment, and surgery entities), entity speeds and statistics types, and arrival-logic attributes including arrival time, priority, and scheduled flag. The surviving narrative fragment defines endodontics as the "prevention, diagnosis, and treatment of diseases and injuries that affect the dental pulp, tooth root, and periapical tissue" (Jablonski, 1982).]

  10. KSC-02pd1198

    NASA Image and Video Library

    2002-08-19

    KENNEDY SPACE CENTER, FLA. -- Only the nose and tail of Columbia are visible as it sits inside a protective tent used to keep out moisture. The orbiter is next scheduled to fly on mission STS-107 no earlier than Nov. 29. STS-107 is a research mission. The payload includes the Hitchhiker Bridge, a carrier for the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments, plus the SHI Research Double Module (SHI/RDM), also known as SPACEHAB.

  11. Constraint based scheduling for the Goddard Space Flight Center distributed Active Archive Center's data archive and distribution system

    NASA Technical Reports Server (NTRS)

    Short, Nick, Jr.; Bedet, Jean-Jacques; Bodden, Lee; Boddy, Mark; White, Jim; Beane, John

    1994-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational since October 1, 1993. Its mission is to support the Earth Observing System (EOS) by providing rapid access to EOS data and analysis products, and to test Earth Observing System Data and Information System (EOSDIS) design concepts. One of the challenges is to ensure quick and easy retrieval of any data archived within the DAAC's Data Archive and Distribution System (DADS). Over the 15-year life of the EOS project, an estimated several petabytes (10^15 bytes) of data will be permanently stored. Accessing that amount of information is a formidable task that will require innovative approaches. As a precursor of the full EOS system, the GSFC DAAC, with a few Terabits of storage, has implemented a prototype of a constraint-based task and resource scheduler to improve the performance of the DADS. This Honeywell Task and Resource Scheduler (HTRS), developed by the Honeywell Technology Center in cooperation with the Information Science and Technology Branch/935, the Code X Operations Technology Program, and the GSFC DAAC, makes better use of limited resources, prevents backlogs of data, and provides information about resource bottlenecks and performance characteristics. The prototype, which is developed concurrently with the GSFC Version 0 (V0) DADS, models DADS activities such as ingestion and distribution with priority, precedence, resource requirements (disk and network bandwidth), and temporal constraints. HTRS supports schedule updates, insertions, and retrieval of task information via an Application Program Interface (API). The prototype has demonstrated, with a few examples, the substantial advantages of using HTRS over scheduling algorithms such as a First In First Out (FIFO) queue. The kernel scheduling engine for HTRS, called Kronos, has been successfully applied to several other domains such as space shuttle mission scheduling, demand flow manufacturing, and avionics communications scheduling.
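
    The advantage of priority- and precedence-aware ordering over a plain FIFO queue, as reported for HTRS, can be illustrated with the toy comparison below; this is not the HTRS or Kronos engine, and the task names, priorities, and prerequisites are assumptions.

      # Toy contrast of FIFO vs. priority-with-precedence ordering of archive tasks
      # (an illustration only, not the HTRS/Kronos constraint engine).
      import heapq

      # (name, priority [lower = more urgent], arrival order, prerequisites)
      tasks = [
          ("backup_tape", 3, 0, []),
          ("ingest_granule", 2, 1, []),
          ("distribute_order", 1, 2, ["ingest_granule"]),
      ]

      def fifo_order(tasks):
          return [t[0] for t in sorted(tasks, key=lambda t: t[2])]

      def priority_order(tasks):
          done, order = set(), []
          heap = []                                  # entries: (priority, arrival, name)
          pending = {t[0]: t for t in tasks}
          while pending or heap:
              # Release tasks whose prerequisites are all satisfied (assumes acyclic dependencies).
              for name, (_, prio, arrival, prereqs) in list(pending.items()):
                  if all(p in done for p in prereqs):
                      heapq.heappush(heap, (prio, arrival, name))
                      del pending[name]
              _, _, name = heapq.heappop(heap)
              done.add(name)
              order.append(name)
          return order

      print("FIFO:    ", fifo_order(tasks))
      print("Priority:", priority_order(tasks))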

  12. Design Process Improvement for Electric CAR Harness

    NASA Astrophysics Data System (ADS)

    Sawatdee, Thiwarat; Chutima, Parames

    2017-06-01

    In an automobile parts design company, customer satisfaction is one of the most important factors for product design. Therefore, the company employs all means to focus its product design process on the various requirements of customers, resulting in a high number of design changes. The objective of this research is to improve the design process of the electric car harness, which affects production scheduling, by using Fault Tree Analysis (FTA) and Failure Mode and Effect Analysis (FMEA) as the main tools. FTA is employed for root cause analysis, and FMEA is used to rank the Risk Priority Number (RPN), which shows the priority of the factors that have a high impact on the design of the electric car harness. After the implementation, the improvement is significant, since the number of design changes is reduced from 0.26% to 0.08%.

  13. The no-show rate in a high-risk obstetric clinic.

    PubMed

    Campbell, J D; Chez, R A; Queen, T; Barcelo, A; Patron, E

    2000-10-01

    We wished to determine the reasons for an average missed appointment rate of 28% in a high-risk pregnancy clinic. Only 41% of the 261 women in the study group could be reached by telephone. The reasons included not having a phone, the phone had been disconnected, incorrect phone number on the chart, the patient had moved, and the patient did not respond to the answering machine message. The reasons for missing the appointment included lack of transportation, scheduling problems, overslept or forgot, presence of a sick child or relative, and lack of child care. The response of patients to accessing prenatal care may reflect the priority they give to medical care relative to other priorities associated with day-to-day existence. There may be a baseline missed appointment rate for prenatal care in lower socioeconomic populations of women. The commitment of personnel time and energy to attempt to modify the no-show rate should be reexamined.

  14. NATO Human View Architecture and Human Networks

    NASA Technical Reports Server (NTRS)

    Handley, Holly A. H.; Houston, Nancy P.

    2010-01-01

    The NATO Human View is a system architectural viewpoint that focuses on the human as part of a system. Its purpose is to capture the human requirements and to inform on how the human impacts the system design. The viewpoint contains seven static models that include different aspects of the human element, such as roles, tasks, constraints, training and metrics. It also includes a Human Dynamics component to perform simulations of the human system under design. One of the static models, termed Human Networks, focuses on the human-to-human communication patterns that occur as a result of ad hoc or deliberate team formation, especially teams distributed across space and time. Parameters of human teams that affect system performance can be captured in this model. Human-centered aspects of networks, such as differences in operational tempo (sense of urgency), priorities (common goal), and team history (knowledge of the other team members), can be incorporated. The information captured in the Human Network static model can then be included in the Human Dynamics component so that the impact of distributed teams is represented in the simulation. As the NATO militaries transform to a more networked force, the Human View architecture is an important tool that can be used to make recommendations on the proper mix of technological innovations and human interactions.

  15. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    NASA Technical Reports Server (NTRS)

    Ganesun, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

    Architectural styles impose constraints on both the topology and the interaction behavior of involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition, we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis. The paper explains how the results of static analysis can be used to orchestrate dynamic analysis. The proposed method was successfully applied to NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that the GMSEC has a) a novel reusable vendor-independent middleware abstraction layer that allows NASA's missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high-priority bugs due to behavioral discrepancies among different implementations of the same APIs for different vendors, which had eluded testing and code reviews.

  16. Study on the intelligent decision making of soccer robot side-wall behavior

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaochuan; Shao, Guifang; Tan, Zhi; Li, Zushu

    2007-12-01

    The side-wall is a static obstacle in robot soccer; making reasonable use of the side-wall can improve a soccer robot's competitive ability. As a kind of artificial life, the soccer robot has a side-wall processing strategy that is influenced by many factors, such as the game state, field region, and attacking and defending situation, each with a different degree of influence, so side-wall behavior selection is an intelligent selection process. From a human-simulated viewpoint, and based on the idea of side-wall processing priority [1], this paper builds a priority function for side-wall processing, constructs an action prediction model for the side-wall obstacle, puts forward a side-wall processing strategy, and forms a side-wall behavior selection mechanism. A contrasting experiment between applying the strategy and not applying it shows that the strategy can improve soccer robot capability, that it is feasible and effective, and that it has positive implications for further soccer robot study.

  17. Short-term Inundation Forecasting for Tsunamis Version 4.0 Brings Forecasting Speed, Accuracy, and Capability Improvements to NOAA's Tsunami Warning Centers

    NASA Astrophysics Data System (ADS)

    Sterling, K.; Denbo, D. W.; Eble, M. C.

    2016-12-01

    Short-term Inundation Forecasting for Tsunamis (SIFT) software was developed by NOAA's Pacific Marine Environmental Laboratory (PMEL) for use in tsunami forecasting and has been used by both U.S. Tsunami Warning Centers (TWCs) since 2012, when SIFTv3.1 was operationally accepted. Since then, advancements in research and modeling have resulted in several new features being incorporated into SIFT forecasting. Following the priorities and needs of the TWCs, upgrades to SIFT forecasting were implemented into SIFTv4.0, scheduled to become operational in October 2016. Because every minute counts in the early warning process, two major time saving features were implemented in SIFT 4.0. To increase processing speeds and generate high-resolution flooding forecasts more quickly, the tsunami propagation and inundation codes were modified to run on Graphics Processing Units (GPUs). To reduce time demand on duty scientists during an event, an automated DART inversion (or fitting) process was implemented. To increase forecasting accuracy, the forecasted amplitudes and inundations were adjusted to include dynamic tidal oscillations, thereby reducing the over-estimates of flooding common in SIFTv3.1 due to the static tide stage conservatively set at Mean High Water. Further improvements to forecasts were gained through the assimilation of additional real-time observations. Cabled array measurements from Bottom Pressure Recorders (BPRs) in the Oceans Canada NEPTUNE network are now available to SIFT for use in the inversion process. To better meet the needs of harbor masters and emergency managers, SIFTv4.0 adds a tsunami currents graphical product to the suite of disseminated forecast results. When delivered, these new features in SIFTv4.0 will improve the operational tsunami forecasting speed, accuracy, and capabilities at NOAA's Tsunami Warning Centers.

  18. A hierarchical scheduling and management solution for dynamic reconfiguration in FPGA-based embedded systems

    NASA Astrophysics Data System (ADS)

    Cervero, T.; Gómez, A.; López, S.; Sarmiento, R.; Dondo, J.; Rincón, F.; López, J. C.

    2013-05-01

    One of the limiting factors that have prevented wide dissemination of reconfigurable technology is the absence of an appropriate model for certain target applications capable of offering reliable control. Moreover, the lack of flexible and easy-to-use scheduling and management systems is also a relevant drawback to be considered. Under static scenarios, it is relatively easy to schedule and manage the reconfiguration process, since all the variations correspond to predetermined and well-known tasks. However, the difficulty increases when the adaptation needs of the overall system change semi-randomly according to environmental fluctuations. In this context, this work proposes a change in the paradigm of dynamically reconfigurable systems by treating the dynamically reconfigurable control problem as a whole, in which the scheduling and placement issues are packed together in a hierarchical management structure, interacting as one entity from the system point of view but performing their tasks with a certain degree of independence from each other. In this sense, the top hierarchical level corresponds to a dynamic scheduler in charge of planning and adjusting all the reconfigurable modules according to the variations of the external stimulus. The lower level interacts with the physical layer of the device by instantiating, relocating, or removing a reconfigurable module following the scheduler's instructions. Regarding the speed of the proposed solution, the total partial reconfiguration time has been measured and compared with two other approaches: 1) using the traditional Xilinx tools; 2) using an optimized version of the Xilinx drivers. The collected numbers demonstrate that our solution is up to 10 times faster than the other approaches.

  19. Evaluation of different infant vaccination schedules incorporating pneumococcal vaccination (The Vietnam Pneumococcal Project): protocol of a randomised controlled trial

    PubMed Central

    Temple, Beth; Toan, Nguyen Trong; Uyen, Doan Y; Balloch, Anne; Bright, Kathryn; Cheung, Yin Bun; Licciardi, Paul; Nguyen, Cattram Duong; Phuong, Nguyen Thi Minh; Satzke, Catherine; Smith-Vaughan, Heidi; Vu, Thi Que Huong; Huu, Tran Ngoc; Mulholland, Edward Kim

    2018-01-01

    Introduction WHO recommends the use of pneumococcal conjugate vaccine (PCV) as a priority. However, there are many countries yet to introduce PCV, especially in Asia. This trial aims to evaluate different PCV schedules and to provide a head-to-head comparison of PCV10 and PCV13 in order to generate evidence to assist with decisions regarding PCV introduction. Schedules will be compared in relation to their immunogenicity and impact on nasopharyngeal carriage of Streptococcus pneumoniae and Haemophilus influenzae. Methods and analysis This randomised, single-blind controlled trial involves 1200 infants recruited at 2 months of age to one of six infant PCV schedules: PCV10 in a 3+1, 3+0, 2+1 or two-dose schedule; PCV13 in a 2+1 schedule; and controls that receive two doses of PCV10 at 18 and 24 months. An additional control group of 200 children is recruited at 18 months and receives one dose of PCV10 at 24 months. All participants are followed up until 24 months of age. The primary outcome is the post-primary series immunogenicity, expressed as the proportions of participants with serotype-specific antibody levels ≥0.35 µg/mL for each serotype in PCV10. Ethics and dissemination Ethical approval has been obtained from the Human Research Ethics Committee of the Northern Territory Department of Health and Menzies School of Health Research (EC00153) and the Vietnam Ministry of Health Ethics Committee. The results, interpretation and conclusions will be presented to parents and guardians, at national and international conferences, and published in peer-reviewed open access journals. Trial registration number NCT01953510; Pre-results. PMID:29884695

  20. The LSST operations simulator

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Saha, Abhijit; Chandrasekharan, Srinivasan; Cook, Kem; Petry, Catherine; Ridgway, Stephen

    2014-08-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://www.lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions as well as additional scheduled and unscheduled downtime. It has a detailed model to simulate the external conditions with real weather history data from the site, a fully parameterized kinematic model for the internal conditions of the telescope, camera and dome, and serves as a prototype for an automatic scheduler for the real time survey operations with LSST. The Simulator is a critical tool that has been key since very early in the project, to help validate the design parameters of the observatory against the science requirements and the goals from specific science programs. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. Software to efficiently compare the efficacy of different survey strategies for a wide variety of science applications using such a growing set of metrics is under development. A recent restructuring of the code allows us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator is being used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities and assist with performance margin investigations of the LSST system.

  1. wayGoo recommender system: personalized recommendations for events scheduling, based on static and real-time information

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2016-05-01

    wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of the volume of this information or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management by prioritizing the presented events based on users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that interest them. These topics constitute a browsing behavior vector which is used to learn users' interests implicitly, without being intrusive. The system then estimates user preferences and returns an event list sorted from the most preferred to the least. User preferences are modeled via a Naïve Bayesian network which consists of: a) the `decision' random variable, corresponding to the user's decision on attending an event; b) the `distance' random variable, modeled by a linear regression that estimates the probability that the distance between the user and the event destination is not discouraging; c) the `seat availability' random variable, modeled by a linear regression that estimates the probability that the seat availability is encouraging; and d) the `relevance' random variable, modeled by clustering-based collaborative filtering, which determines the relevance of each event to the user's interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
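
    The way the per-factor probabilities (distance, seat availability, relevance) can be combined to rank events is sketched below under a naive independence assumption; the probabilities and the simple product rule are illustrative, whereas the paper learns these factors with regressions and clustering-based collaborative filtering.

      # Rank events by combining per-factor probabilities under a naive independence
      # assumption (illustrative only; probabilities below are made up).
      events = {
          # event: (P(distance not discouraging), P(seats encouraging), P(relevant to interests))
          "concert":    (0.9, 0.4, 0.7),
          "exhibition": (0.6, 0.9, 0.8),
          "lecture":    (0.8, 0.7, 0.3),
      }

      def attend_score(p_distance, p_seats, p_relevance):
          # Naive Bayes-style product: all factors must look favorable for a high score.
          return p_distance * p_seats * p_relevance

      ranked = sorted(events, key=lambda e: attend_score(*events[e]), reverse=True)
      for e in ranked:
          print(e, round(attend_score(*events[e]), 3))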

  2. Web Site Development Support

    NASA Technical Reports Server (NTRS)

    Abdul, Hameed

    2016-01-01

    This summer I assisted the RPT Program Office in developing a design plan to update their existing website to current NASA web standards. The finished website is intended for the general public, specifically potential customers interested in learning about NASA's chemical rocket test facility capabilities and test assignment process. The goal of the website is to give the public insight about the purpose and function of the RPT Program. Working on this project gave me the opportunity to learn skills necessary for effective project management. The RPT Program Office manages numerous facilities so they are required to travel often to other sites for meetings throughout the year. Maneuvering around the travel schedule of the office and the workload priority of the IT Department proved to be quite the challenge. I overcame the travel schedule of the office by frequently communicating and checking in with my mentor via email and telephone.

  3. Satellite-instrument system engineering best practices and lessons

    NASA Astrophysics Data System (ADS)

    Schueler, Carl F.

    2009-08-01

    This paper focuses on system engineering development issues driving satellite remote sensing instrumentation cost and schedule. A key best practice is early assessment of mission and instrumentation requirements priorities driving performance trades among major instrumentation measurements: Radiometry, spatial field of view and image quality, and spectral performance. Key lessons include attention to technology availability and applicability to prioritized requirements, care in applying heritage, approaching fixed-price and cost-plus contracts with appropriate attention to risk, and assessing design options with attention to customer preference as well as design performance, and development cost and schedule. A key element of success either in contract competition or execution is team experience. Perhaps the most crucial aspect of success, however, is thorough requirements analysis and flowdown to specifications driving design performance with sufficient parameter margin to allow for mistakes or oversights - the province of system engineering from design inception to development, test and delivery.

  4. An applied study using systems engineering methods to prioritize green systems options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sonya M; Macdonald, John M

    2009-01-01

    For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine if Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.

  5. Design and architecture of the Mars relay network planning and analysis framework

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Lee, C. H.

    2002-01-01

    In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports generation and validation of efficient planning and scheduling strategies. The goals are to minimize the transmission time, minimize the delay time, and/or maximize the network throughput. The proposed framework would require (1) a client-server architecture to support interactive, batch, WEB, and distributed analysis and planning applications for the relay network analysis scheme, (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, and spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints, and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
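
    Casting relay planning as a constrained optimization, as item (3) describes, can be illustrated with a small linear program that maximizes returned data volume subject to per-pass capacities; the passes, capacities, and recorder limit are hypothetical, and the example simply uses an off-the-shelf solver in the spirit of the COTS algorithms mentioned.

      # Hypothetical relay-return linear program (requires SciPy); all numbers are illustrative.
      from scipy.optimize import linprog

      # Decision variables: data volume (Gb) returned via two relay passes and one direct-to-Earth pass.
      # linprog minimizes, so negate the objective to maximize total returned volume.
      c = [-1.0, -1.0, -1.0]

      # Constraints: each pass has a capacity (rate x visibility duration), and the
      # onboard recorder holds at most 6 Gb in total.
      A_ub = [
          [1, 0, 0],   # relay pass 1 capacity
          [0, 1, 0],   # relay pass 2 capacity
          [0, 0, 1],   # direct-to-Earth pass capacity
          [1, 1, 1],   # total data available on the recorder
      ]
      b_ub = [2.0, 3.0, 1.5, 6.0]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
      print("returned volume per pass (Gb):", res.x, "total:", -res.fun)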

  6. Managing distributed software development in the Virtual Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Evans, Janet D.; Plante, Raymond L.; Boneventura, Nina; Busko, Ivo; Cresitello-Dittmar, Mark; D'Abrusco, Raffaele; Doe, Stephen; Ebert, Rick; Laurino, Omar; Pevunova, Olga; Refsdal, Brian; Thomas, Brian

    2012-09-01

    The U.S. Virtual Astronomical Observatory (VAO) is a product-driven organization that provides new scientific research capabilities to the astronomical community. Software development for the VAO follows a lightweight framework that guides development of science applications and infrastructure. Challenges to be overcome include distributed development teams, part-time efforts, and highly constrained schedules. We describe the process we followed to conquer these challenges while developing Iris, the VAO application for analysis of 1-D astronomical spectral energy distributions (SEDs). Iris was successfully built and released in less than a year with a team distributed across four institutions. The project followed existing International Virtual Observatory Alliance inter-operability standards for spectral data and contributed a SED library as a by-product of the project. We emphasize lessons learned that will be folded into future development efforts. In our experience, a well-defined process that provides guidelines to ensure the project is cohesive and stays on track is key to success. Internal product deliveries with a planned test and feedback loop are critical. Release candidates are measured against use cases established early in the process, and provide the opportunity to assess priorities and make course corrections during development. Also key is the participation of a stakeholder such as a lead scientist who manages the technical questions, advises on priorities, and is actively involved as a lead tester. Finally, frequent scheduled communications (for example a bi-weekly tele-conference) assure issues are resolved quickly and the team is working toward a common vision.

  7. Performance evaluation of an agent-based occupancy simulation model

    DOE PAGES

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...

    2017-01-17

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.

  8. Performance evaluation of an agent-based occupancy simulation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.
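
    Stochastic occupancy models of the kind evaluated above are often built from simple presence/absence transition probabilities; the sketch below generates a one-day occupant presence schedule from a two-state Markov chain with hour-dependent, purely illustrative transition probabilities.

      # Two-state Markov chain generating a stochastic occupant presence schedule
      # (illustrative of agent-based occupancy models; probabilities are assumptions).
      import random

      # Hour-dependent transition probabilities: P(arrive | absent), P(leave | present).
      def p_arrive(hour):
          return 0.6 if 8 <= hour < 10 else 0.05

      def p_leave(hour):
          return 0.5 if 17 <= hour < 20 else 0.05

      def simulate_day(seed=None):
          rng = random.Random(seed)
          present, schedule = False, []
          for hour in range(24):
              if present:
                  present = not (rng.random() < p_leave(hour))
              else:
                  present = rng.random() < p_arrive(hour)
              schedule.append(int(present))
          return schedule

      print(simulate_day(seed=1))   # hourly presence flags for one simulated occupant-day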

  9. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    NASA Astrophysics Data System (ADS)

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time and leads to a situation where the resources are under-utilized. The INDIGO-DataCloud project has identified these scheduling strategies as too simplistic for accommodating scientific workloads in an efficient way, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we will present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.

  10. Use of health information technology by children's hospitals in the United States.

    PubMed

    Menachemi, Nir; Brooks, Robert G; Schwalenstocker, Ellen; Simpson, Lisa

    2009-01-01

    The purpose of this study was to examine the adoption of health information technology by children's hospitals and to document barriers and priorities as they relate to health information technology adoption. Primary data of interest were obtained through the use of a survey instrument distributed to the chief information officers of 199 children's hospitals in the United States. Data were collected on current and future use of a variety of clinical health information technology and telemedicine applications, organizational priorities, barriers to use of health information technology, and hospital and chief information officer characteristics. Among the 109 responding hospitals (55%), common clinical applications included clinical scheduling (86.2%), transcription (85.3%), and pharmacy (81.9%) and laboratory (80.7%) information. Electronic health records (48.6%), computerized order entry (40.4%), and clinical decision support systems (35.8%) were less common. The most common barriers to health information technology adoption were vendors' inability to deliver products or services to satisfaction (85.4%), lack of staffing resources (82.3%), and difficulty in achieving end-user acceptance (80.2%). The most frequent priority for hospitals was to implement technology to reduce medical errors or to promote safety (72.5%). This first national look at health information technology use by children's hospitals demonstrates the progress in health information technology adoption, current barriers, and priorities for these institutions. In addition, the findings can serve as important benchmarks for future study in this area.

  11. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    PubMed

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. FMEA steps: (i) process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects; described control measures; assigned severity, occurrence and detection scores for each failure mode and calculated the risk priority numbers (RPNs) by multiplying the 3 scores. Total RPN is calculated by adding single failure mode RPN. (iii) Planning: we performed a RPNs prioritization on a priority matrix taking into account the 3 scores, and we analyzed failure modes causes, made recommendations and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. Our failure modes with the highest RPN came from communication and organization problems. Two tools have been created to ameliorate information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both medical and nursing organization. Total RPN value decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities, and we reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of simple priority selection, and that it decreases action times.

  12. Utilizing Novel Non-traditional Sensor Tasking Approaches to Enhance the Space Situational Awareness Picture Maintained by the Space Surveillance Network

    NASA Astrophysics Data System (ADS)

    Herz, A.; Herz, E.; Center, K.; George, P.; Axelrad, P.; Mutschler, S.; Jones, B.

    2016-09-01

    The Space Surveillance Network (SSN) is tasked with the increasingly difficult mission of detecting, tracking, cataloging and identifying artificial objects orbiting the Earth, including active and inactive satellites, spent rocket bodies, and fragmented debris. Much of the architecture and operations of the SSN are limited and outdated. Efforts are underway to modernize some elements of the systems. Even so, the ability to maintain the best current Space Situational Awareness (SSA) picture and identify emerging events in a timely fashion could be significantly improved by leveraging non-traditional sensor sites. Orbit Logic, the University of Colorado and the University of Texas at Austin are developing an innovative architecture and operations concept to coordinate the tasking and observation information processing of non-traditional assets based on information-theoretic approaches. These confirmed tasking schedules and the resulting data can then be used to "inform" the SSN tasking process. The 'Heimdall Web' system comprises core tasking optimization components and accompanying Web interfaces within a secure, split architecture that will for the first time allow non-traditional sensors to support SSA and improve SSN tasking. Heimdall Web application components appropriately score/prioritize space catalog objects based on covariance, priority, observability, expected information gain, and probability of detection, and then coordinate an efficient sensor observation schedule for non-SSN sensors contributing to the overall SSA picture maintained by the Joint Space Operations Center (JSpOC). The Heimdall Web Ops concept supports sensor participation levels of "Scheduled", "Tasked" and "Contributing". Scheduled and Tasked sensors are provided optimized observation schedules or object tracking lists from central algorithms, while Contributing sensors review and select from a list of "desired track objects". All sensors are "Web Enabled" for tasking and feedback, supplying observation schedules, confirmed observations and related data back to Heimdall Web to complete the feedback loop for the next scheduling iteration.
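
    The scoring step described above lends itself to a simple weighted-sum sketch. The Python fragment below is only an illustration under assumed inputs: the weights, field names and example objects are hypothetical, not the actual Heimdall Web scoring algorithm.

      # Minimal sketch of priority scoring for catalog objects (hypothetical weights and fields).
      def score_object(obj, weights=(0.3, 0.2, 0.2, 0.2, 0.1)):
          """Weighted sum of normalized tasking factors; higher score = observe sooner."""
          w_cov, w_pri, w_obs, w_gain, w_pd = weights
          return (w_cov * obj["covariance_norm"]   # positional uncertainty (0..1)
                  + w_pri * obj["priority"]        # user-assigned priority (0..1)
                  + w_obs * obj["observability"]   # fraction of the night the object is visible
                  + w_gain * obj["info_gain"]      # expected information gain (0..1)
                  + w_pd * obj["prob_detect"])     # probability of detection (0..1)

      catalog = [
          {"id": "obj-1", "covariance_norm": 0.8, "priority": 0.9,
           "observability": 0.6, "info_gain": 0.7, "prob_detect": 0.95},
          {"id": "obj-2", "covariance_norm": 0.2, "priority": 0.4,
           "observability": 0.9, "info_gain": 0.3, "prob_detect": 0.99},
      ]
      # The highest-scoring objects are offered to sensors first when building the schedule.
      for obj in sorted(catalog, key=score_object, reverse=True):
          print(obj["id"], round(score_object(obj), 3))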

  13. Design and evaluation of a theory-based, culturally relevant outreach model for breast and cervical cancer screening for Latina immigrants

    PubMed Central

    White, Kari; Garces, Isabel C.; Bandura, Lisa; McGuire, Allison A.; Scarinci, Isabel C.

    2013-01-01

    Objectives Breast and cervical cancer are common among Latinas, but screening rates among foreign-born Latinas are relatively low. In this article we describe the design and implementation of a theory-based (PEN-3) outreach program to promote breast and cervical cancer screening to Latina immigrants, and evaluate the program’s effectiveness. Methods We used data from self-administered questionnaires completed at six annual outreach events to examine the sociodemographic characteristics of attendees and evaluate whether the program reached the priority population – foreign-born Latina immigrants with limited access to health care and screening services. To evaluate the program’s effectiveness in connecting women to screening, we examined the proportion and characteristics of women who scheduled and attended Pap smear and mammography appointments. Results Among the 782 Latinas who attended the outreach program, 60% and 83% had not had a Pap smear or mammogram, respectively, in at least a year. Overall, 80% scheduled a Pap smear and 78% scheduled a mammogram. Women without insurance, who did not know where to get screening and had not been screened in the last year were more likely to schedule appointments (p < 0.05). Among women who scheduled appointments, 65% attended their Pap smear and 79% attended the mammogram. We did not identify significant differences in sociodemographic characteristics associated with appointment attendance. Conclusions Using a theoretical approach to outreach design and implementation, it is possible to reach a substantial number of Latina immigrants and connect them to cancer screening services. PMID:22870569

  14. Integrated management of thesis using clustering method

    NASA Astrophysics Data System (ADS)

    Astuti, Indah Fitri; Cahyadi, Dedy

    2017-02-01

    A thesis is one of the major requirements for students pursuing their bachelor degree. Finishing the thesis involves a long process that includes consultation, writing the manuscript, carrying out the chosen method, seminar scheduling, searching for references, and an appraisal process by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so they can sit together in a seminar room to examine the thesis. The seminar scheduling process should therefore be a top priority to solve. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all stakeholders can interact with each other and manage the thesis process without conflicting timetables. A branch of computer science named Management Information System (MIS) could be a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing main facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable thesis database. The database plays an important role for present and future purposes.
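
    As a rough illustration of how a clustering step could group lecturers by timetable before seminar scheduling, the sketch below applies k-means from scikit-learn to hypothetical availability vectors; the feature encoding and the library choice are assumptions, not the system described in the abstract.

      # Sketch: cluster lecturers by weekly availability so seminars can be scheduled
      # within a cluster of similar timetables (hypothetical data and encoding).
      import numpy as np
      from sklearn.cluster import KMeans

      # Each row encodes one lecturer's availability over 10 half-day slots (1 = free).
      availability = np.array([
          [1, 1, 0, 0, 1, 1, 0, 0, 1, 0],
          [1, 1, 0, 0, 1, 0, 0, 0, 1, 0],
          [0, 0, 1, 1, 0, 0, 1, 1, 0, 1],
          [0, 0, 1, 1, 0, 1, 1, 1, 0, 1],
      ])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(availability)
      for lecturer, cluster in enumerate(labels):
          print(f"lecturer {lecturer} -> cluster {cluster}")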

  15. Three list scheduling temporal partitioning algorithm of time space characteristic analysis and compare for dynamic reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Chen, Naijin

    2013-03-01

    The Level Based Partitioning (LBP), Cluster Based Partitioning (CBP) and Enhanced Static List (ESL) temporal partitioning algorithms, based on adjacency matrix and adjacency table representations, are designed and implemented in this paper. The partitioning time and memory occupation of the three algorithms are also compared. Experimental results show that the LBP algorithm has the shortest partitioning time and better parallelism; as far as memory occupation and partitioning time are concerned, the algorithms based on the adjacency table require less partitioning time and less memory.
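
    A level-based partition of a task graph can be sketched by computing each node's topological level from an adjacency list and grouping nodes by level; the fragment below is an assumed minimal rendering of that idea, not the paper's LBP implementation.

      # Sketch: level-based temporal partitioning of a DAG stored as an adjacency list.
      from collections import defaultdict

      edges = {  # hypothetical task graph: node -> successors
          "a": ["c", "d"],
          "b": ["d"],
          "c": ["e"],
          "d": ["e"],
          "e": [],
      }

      def levels(adj):
          """ASAP level of each node: 0 for sources, 1 + max(level of predecessors) otherwise."""
          indeg = defaultdict(int)
          for u in adj:
              for v in adj[u]:
                  indeg[v] += 1
          level = {u: 0 for u in adj if indeg[u] == 0}
          frontier = list(level)
          while frontier:
              u = frontier.pop()
              for v in adj[u]:
                  level[v] = max(level.get(v, 0), level[u] + 1)
                  indeg[v] -= 1
                  if indeg[v] == 0:
                      frontier.append(v)
          return level

      partitions = defaultdict(list)
      for node, lvl in levels(edges).items():
          partitions[lvl].append(node)   # one temporal partition per level
      print(dict(partitions))            # {0: ['a', 'b'], 1: ['c', 'd'], 2: ['e']}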

  16. Data analysis of P sub T/P sub S noseboom probe testing on F100 engine P680072 at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Foote, C. H.

    1980-01-01

    Results from the altitude testing of a P sub T/P sub S noseboom probe on the F100 engine are discussed. The results are consistent with sea level test results. The F100 engine altitude test verified automatic downmatch with the engine pressure ratio control, and the backup control inlet case static pressure demonstrated sufficient accuracy for backup control fuel flow scheduling. The production P6 probe measured Station 6 pressures accurately for both undistorted and distorted inlet airflows.

  17. Integration of Openstack cloud resources in BES III computing cluster

    NASA Astrophysics Data System (ADS)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and their usage is static. To make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  18. Mitigating Upsets in SRAM-Based FPGAs from the Xilinx Virtex 2 Family

    NASA Technical Reports Server (NTRS)

    Swift, G. M.; Yui, C. C.; Carmichael, C.; Koga, R.; George, J. S.

    2003-01-01

    Static random access memory (SRAM) upset rates in field programmable gate arrays (FPGAs) from the Xilinx Virtex 2 family have been tested for radiation effects on configuration memory, block RAM and the power-on-reset (POR) and SelectMAP single event functional interrupts (SEFIs). Dynamic testing has shown the effectiveness and value of Triple Module Redundancy (TMR) and partial reconfiguration when used in conjunction. Continuing dynamic testing for more complex designs and other Virtex 2 capabilities (i.e., I/O standards, digital clock managers (DCM), etc.) is scheduled.

  19. Shared-resource computing for small research labs.

    PubMed

    Ackerman, M J

    1982-04-01

    A real time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off the shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11-M multi-user real-time operating system. The cost effectiveness of the shared resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  20. Molecular Foundry Workshop draws overflow crowd to BerkeleyLab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Art

    2002-11-27

    Nanoscale science and technology is now one of the top research priorities in the United States. With this background, it is no surprise that an overflow crowd of more than 350 registrants filled two auditoriums to hear about and contribute ideas for the new Molecular Foundry during a two-day workshop at the Lawrence Berkeley National Laboratory (Berkeley Lab). Scheduled to open for business at Berkeley Lab in early 2006, the Molecular Foundry is one of three Nanoscale Science Research Centers (NSRCs) put forward for funding by the DOE's Office of Basic Energy Sciences (BES).

  1. Ketamine - A Multifaceted Drug.

    PubMed

    Meng, Lingzhong; Li, Jian; Lu, Yi; Sun, Dajin; Tao, Yuan-Xiang; Liu, Renyu; Luo, Jin Jun

    There is a petition for tight control of ketamine from the Chinese government to classify ketamine as a Schedule I drug, which is defined as a drug with no currently accepted medical use but a high potential for abuse. However, ketamine has unique properties that can benefit different patient populations. Scholars from the Translational Perioperative and Pain Medicine and the International Chinese Academy of Anesthesiology WeChat groups had an interactive discussion on ketamine, including its current medical applications, future research priorities, and benefits versus risks. The discussion is summarized in this manuscript with some minor edits.

  2. A Comparison of Six Repair Scheduling Policies for the P3 Aircraft.

    DTIC Science & Technology

    1988-03-01

    For each component type i: RHO(i) = LAMBDA(i) / SRATE(i); LINEUP(i) = RHO(i) x COUNT(i). Step 14c: Sort components by LINEUP(i), reordering positions in line in favor of the largest LINEUP(i). Return to step 7. Dynamic 3 Model Modifications: Step 14a: Count the number of operating parts of each component i (STOCK(i)). Step 14b: Assign a priority to each component type based on the count of current stock in step 14a: LINEUP(i) < LINEUP(j) iff STOCK(i) ...
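
    Rendering the priority index from the excerpt above (RHO(i) = LAMBDA(i) / SRATE(i), LINEUP(i) = RHO(i) x COUNT(i), followed by the sort of step 14c) as a short Python sketch, with purely hypothetical component data:

      # Sketch of the LINEUP priority computation from the excerpt above
      # (hypothetical failure rates LAMBDA, repair rates SRATE and counts COUNT).
      components = {
          "radar":    {"LAMBDA": 0.04, "SRATE": 0.10, "COUNT": 3},
          "avionics": {"LAMBDA": 0.02, "SRATE": 0.08, "COUNT": 5},
          "engine":   {"LAMBDA": 0.01, "SRATE": 0.05, "COUNT": 4},
      }

      for name, c in components.items():
          c["RHO"] = c["LAMBDA"] / c["SRATE"]       # utilization of the repair channel
          c["LINEUP"] = c["RHO"] * c["COUNT"]       # priority index for repair ordering

      # Step 14c: sort the repair line so the largest LINEUP(i) is served first.
      repair_order = sorted(components, key=lambda n: components[n]["LINEUP"], reverse=True)
      print(repair_order)   # ['avionics', 'radar', 'engine'] for these example values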

  3. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the flexibility, easy control and varied system environments offered by virtual machines, more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. Firstly, the resources belong to different experiment-groups, and the mapping of user-groups to resource-groups (the same as experiment-groups) is one-to-one or many-to-one. To keep this group structure simple to manage, we designed a permission controlling component to ensure that the different resource-groups can get suitable jobs. Secondly, for the purpose of elastically allocating resources to a suitable resource-group, it is necessary to schedule resources in the same way as jobs are scheduled. This paper therefore designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource-group. Thirdly, in some situations resources that have been occupied for a long time need to be preempted. This paper adds a preemption function to the resource scheduling that implements resource preemption based on group priority. Additionally, preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor by storing held job information in the scheduler, releasing the job to idle status and performing a second match. At IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), we have built a batch system based on HTCondor with a virtual resource pool based on Openstack, and this paper shows some cases from the JUNO and LHAASO experiments. The results indicate that the multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission controlling component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and the scale will be expanded to more experiments, including DYW and BES, in the first half of the year. This is evidence that the permission controlling is efficient.
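
    The group-priority soft preemption described above can be sketched in a few lines; the code is an assumed simplification using in-memory lists rather than HTCondor's actual hold/release mechanism, and the groups and jobs are hypothetical.

      # Sketch: soft preemption of virtual resources by group priority.
      # Preempted jobs are held (not killed) and rematched on a later scheduling pass.
      from dataclasses import dataclass, field

      @dataclass
      class ResourceGroup:
          name: str
          priority: int                  # higher value may preempt lower value
          running: list = field(default_factory=list)
          held: list = field(default_factory=list)

      def preempt(groups, requester, slots_needed):
          """Free slots for `requester` by holding jobs of lower-priority groups."""
          victims = sorted((g for g in groups if g.priority < requester.priority),
                           key=lambda g: g.priority)
          for g in victims:
              while g.running and slots_needed > 0:
                  job = g.running.pop()
                  g.held.append(job)     # soft preemption: hold, then release and rematch later
                  slots_needed -= 1
          return slots_needed == 0

      juno = ResourceGroup("JUNO", priority=10, running=["j1", "j2", "j3"])
      lhaaso = ResourceGroup("LHAASO", priority=20)
      print(preempt([juno, lhaaso], requester=lhaaso, slots_needed=2))   # True
      print(juno.held)                                                   # ['j3', 'j2']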

  4. Computational Investigation of the Aerodynamic Effects on Fluidic Thrust Vectoring

    NASA Technical Reports Server (NTRS)

    Deere, K. A.

    2000-01-01

    A computational investigation of the aerodynamic effects on fluidic thrust vectoring has been conducted. Three-dimensional simulations of a two-dimensional, convergent-divergent (2DCD) nozzle with fluidic injection for pitch vector control were run with the computational fluid dynamics code PAB using turbulence closure and linear Reynolds stress modeling. Simulations were computed with static freestream conditions (M=0.05) and at Mach numbers from M=0.3 to 1.2, with scheduled nozzle pressure ratios (from 3.6 to 7.2) and secondary to primary total pressure ratios of p(sub t,s)/p(sub t,p)=0.6 and 1.0. Results indicate that the freestream flow decreases vectoring performance and thrust efficiency compared with static (wind-off) conditions. The aerodynamic penalty to thrust vector angle ranged from 1.5 degrees at a nozzle pressure ratio of 6 with M=0.9 freestream conditions to 2.9 degrees at a nozzle pressure ratio of 5.2 with M=0.7 freestream conditions, compared to the same nozzle pressure ratios with static freestream conditions. The aerodynamic penalty to thrust ratio decreased from 4 percent to 0.8 percent as nozzle pressure ratio increased from 3.6 to 7.2. As expected, the freestream flow had little influence on discharge coefficient.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caliandro, Rocco; Sibillano, Teresa; Belviso, B. Danilo

    In this study, we have developed a general X-ray powder diffraction (XPD) methodology for the simultaneous structural and compositional characterization of inorganic nanomaterials. The approach is validated on colloidal tungsten oxide nanocrystals (WO 3-x NCs), as a model polymorphic nanoscale material system. Rod-shaped WO 3-x NCs with different crystal structure and stoichiometry are comparatively investigated under an inert atmosphere and after prolonged air exposure. An initial structural model for the as-synthesized NCs is preliminarily identified by means of Rietveld analysis against several reference crystal phases, followed by atomic pair distribution function (PDF) refinement of the best-matching candidates (static analysis). Subtle stoichiometry deviations from the corresponding bulk standards are revealed. NCs exposed to air at room temperature are monitored by XPD measurements at scheduled time intervals. The static PDF analysis is complemented with an investigation into the evolution of the WO 3-x NC structure, performed by applying the modulation enhanced diffraction technique to the whole time series of XPD profiles (dynamical analysis). Prolonged contact with ambient air is found to cause an appreciable increase in the static disorder of the O atoms in the WO 3-x NC lattice, rather than a variation in stoichiometry. Finally, the time behavior of such structural change is identified on the basis of multivariate analysis.

  6. Five-Hole Flow Angle Probe Calibration for the NASA Glenn Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Gonsalez, Jose C.; Arrington, E. Allen

    1999-01-01

    A spring 1997 test section calibration program is scheduled for the NASA Glenn Research Center Icing Research Tunnel following the installation of new water injecting spray bars. A set of new five-hole flow angle pressure probes was fabricated to properly calibrate the test section for total pressure, static pressure, and flow angle. The probes have nine pressure ports: five total pressure ports on a hemispherical head and four static pressure ports located 14.7 diameters downstream of the head. The probes were calibrated in the NASA Glenn 3.5-in.-diameter free-jet calibration facility. After completing calibration data acquisition for two probes, two data prediction models were evaluated. Prediction errors from a linear discrete model proved to be no worse than those from a full third-order multiple regression model. The linear discrete model only required calibration data acquisition according to an abridged test matrix, thus saving considerable time and financial resources over the multiple regression model that required calibration data acquisition according to a more extensive test matrix. Uncertainties in calibration coefficients and predicted values of flow angle, total pressure, static pressure, Mach number, and velocity were examined. These uncertainties consider the instrumentation that will be available in the Icing Research Tunnel for future test section calibration testing.

  7. [Immunisation schedule of the Spanish Association of Paediatrics: 2013 recommendations].

    PubMed

    Moreno-Pérez, D; Álvarez García, F J; Arístegui Fernández, J; Barrio Corrales, F; Cilleruelo Ortega, M J; Corretger Rauet, J M; González-Hachero, J; Hernández-Sampelayo Matos, T; Merino Moína, M; Ortigosa Del Castillo, L; Ruiz-Contreras, J

    2013-01-01

    The Advisory Committee on Vaccines of the Spanish Association of Paediatrics (CAV-AEP) updates the immunisation schedule every year, taking into account epidemiological data as well as evidence on the safety, effectiveness and efficiency of vaccines. The present schedule includes levels of recommendation. We have graded as routine vaccinations those that the CAV-AEP consider all children should receive; as recommended those that fit the profile for universal childhood immunisation and would ideally be given to all children, but that can be prioritised according to the resources available for their public funding; and as risk group vaccinations those that specifically target individuals in situations of risk. Immunisation schedules tend to be dynamic and adaptable to ongoing epidemiological changes. Nevertheless, the achievement of a unified immunisation schedule in all regions of Spain is a top priority for the CAV-AEP. Based on the latest epidemiological trends, CAV-AEP follows the innovations proposed in the last year's schedule, such as the administration of the first dose of the MMR and the varicella vaccines at age 12 months and the second dose at age 2-3 years, as well as the administration of the Tdap vaccine at age 4-6 years, always followed by another dose at 11-14 years of age, preferably at 11-12 years. The CAV-AEP believes that the coverage of vaccination against human papillomavirus in girls aged 11-14 years, preferably at 11-12 years, must increase. It reasserts its recommendation to include vaccination against pneumococcal disease in the routine immunisation schedule. Universal vaccination against varicella in the second year of life is an effective strategy and therefore a desirable objective. Vaccination against rotavirus is recommended in all infants due to the morbidity and elevated healthcare burden of the virus. The Committee stresses the need to vaccinate population groups considered at risk against influenza and hepatitis A. Finally, it emphasizes the need to bring incomplete vaccinations up to date following the catch-up immunisation schedule. Copyright © 2012 Asociación Española de Pediatría. Published by Elsevier España, S.L. All rights reserved.

  8. A Survey on Urban Traffic Management System Using Wireless Sensor Networks.

    PubMed

    Nellore, Kapileswar; Hancke, Gerhard P

    2016-01-27

    Nowadays, the number of vehicles has increased exponentially, but the bedrock capacities of roads and transportation systems have not developed in an equivalent way to efficiently cope with the number of vehicles traveling on them. Due to this, road jamming and traffic correlated pollution have increased with the associated adverse societal and financial effect on different markets worldwide. A static control system may block emergency vehicles due to traffic jams. Wireless Sensor networks (WSNs) have gained increasing attention in traffic detection and avoiding road congestion. WSNs are very trendy due to their faster transfer of information, easy installation, less maintenance, compactness and for being less expensive compared to other network options. There has been significant research on Traffic Management Systems using WSNs to avoid congestion, ensure priority for emergency vehicles and cut the Average Waiting Time (AWT) of vehicles at intersections. In recent decades, researchers have started to monitor real-time traffic using WSNs, RFIDs, ZigBee, VANETs, Bluetooth devices, cameras and infrared signals. This paper presents a survey of current urban traffic management schemes for priority-based signalling, and reducing congestion and the AWT of vehicles. The main objective of this survey is to provide a taxonomy of different traffic management schemes used for avoiding congestion. Existing urban traffic management schemes for the avoidance of congestion and providing priority to emergency vehicles are considered and set the foundation for further research.

  9. A Survey on Urban Traffic Management System Using Wireless Sensor Networks

    PubMed Central

    Nellore, Kapileswar; Hancke, Gerhard P.

    2016-01-01

    Nowadays, the number of vehicles has increased exponentially, but the bedrock capacities of roads and transportation systems have not developed in an equivalent way to efficiently cope with the number of vehicles traveling on them. Due to this, road jamming and traffic correlated pollution have increased with the associated adverse societal and financial effect on different markets worldwide. A static control system may block emergency vehicles due to traffic jams. Wireless Sensor networks (WSNs) have gained increasing attention in traffic detection and avoiding road congestion. WSNs are very trendy due to their faster transfer of information, easy installation, less maintenance, compactness and for being less expensive compared to other network options. There has been significant research on Traffic Management Systems using WSNs to avoid congestion, ensure priority for emergency vehicles and cut the Average Waiting Time (AWT) of vehicles at intersections. In recent decades, researchers have started to monitor real-time traffic using WSNs, RFIDs, ZigBee, VANETs, Bluetooth devices, cameras and infrared signals. This paper presents a survey of current urban traffic management schemes for priority-based signalling, and reducing congestion and the AWT of vehicles. The main objective of this survey is to provide a taxonomy of different traffic management schemes used for avoiding congestion. Existing urban traffic management schemes for the avoidance of congestion and providing priority to emergency vehicles are considered and set the foundation for further research. PMID:26828489

  10. Otolaryngology residents' objectives in entering the workforce.

    PubMed

    Kay, David J; Lucente, Frank E

    2002-10-01

    To determine the priorities of current otolaryngologists-in-training in considering their first employment opportunities. Twenty-one-item survey measuring the importance of various first job issues, with all items scored on a five-point Likert-type ordinal scale. The resident membership of the American Academy of Otolaryngology-Head and Neck Surgery was anonymously surveyed by means of mail-in questionnaires. Results were stratified by years of training. Responses from 242 of 1174 mail-in surveys (21% response rate) exhibited a wide distribution of responses for all 21 questions. The availability of free time to spend with one's family was regarded by more than half of the respondents to have the highest overall importance. As years of training increased, priorities shifted toward geographic location, away from issues such as the on-call schedules. The availability of research time and resources received the overall lowest priority, with more than half of the respondents ranking it as only somewhat important or lower. Otolaryngologists-in-training feel strongest about the availability of free time to spend with their families as they finish formal training and consider employment opportunities. By acknowledging the concerns of graduating residents, including the ability to pursue their primary interests when they start working, we can better adapt conditions to create a more comfortable and stable entry into the workforce.

  11. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    NASA Technical Reports Server (NTRS)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "Bitflip". In its mildest form a soft-event upset can cause software exceptions or unexpected events, start spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets to 25% of the experiment detectors. Post flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.

  12. Analysis of the workload imposed on the workers of the imprint and cutting/welding sectors of a flexible packaging manufacturer.

    PubMed

    de M Guimarães, L B; Pessa, S L R; Biguelini, C

    2012-01-01

    This article presents a study evaluating the workload imposed on workers in two sectors of a flexible packaging manufacturer that operates in three shifts. The workers are allocated to one of the shifts (morning, evening or night) without evaluation of their chronotype and/or social needs. The Imprint sector involves more dynamic work, which is done only by men due to the effort demanded by load handling. The work in the Cutting/Welding sector is static and done mainly by women. The results showed that the overall workload was the same in the Imprint and Cutting/Welding sectors, because the physical effort of load handling is higher in the former but the latter involves a high static load. The levels of urinary catecholamines and salivary cortisol were consistent with the workers' biological clocks, showing that none of the workers changed his/her biological cycle to accommodate the timing of the shift schedule.

  13. Evaluation of High-Precision Sensors in Structural Monitoring

    PubMed Central

    Erol, Bihter

    2010-01-01

    One of the most intricate branches of metrology involves the monitoring of displacements and deformations of natural and anthropogenic structures under environmental forces, such as tidal or tectonic phenomena, or ground water level changes. Technological progress has changed the measurement process, and steadily increasing accuracy requirements have led to the continued development of new measuring instruments. The adoption of an appropriate measurement strategy, with proper instruments suited for the characteristics of the observed structure and its environmental conditions, is of high priority in the planning of deformation monitoring processes. This paper describes the use of precise digital inclination sensors in continuous monitoring of structural deformations. The topic is treated from two viewpoints: (i) evaluation of the performance of inclination sensors by comparing them to static and continuous GPS observations in deformation monitoring and (ii) providing a strategy for analyzing the structural deformations. The movements of two case study objects, a tall building and a geodetic monument in Istanbul, were separately monitored using dual-axes micro-radian precision inclination sensors (inclinometers) and GPS. The time series of continuous deformation observations were analyzed using the Least Squares Spectral Analysis Technique (LSSA). Overall, the inclinometers showed good performance for continuous monitoring of structural displacements, even at the sub-millimeter level. Static GPS observations remained insufficient for resolving the deformations to the sub-centimeter level due to the errors that affect GPS signals. With the accuracy advantage of inclination sensors, their use with GPS provides more detailed investigation of deformation phenomena. Using inclinometers and GPS is helpful to be able to identify the components of structural responses to the natural forces as static, quasi-static, or resonant. PMID:22163499

  14. RACOON: a multiuser QoS design for mobile wireless body area networks.

    PubMed

    Cheng, Shihheng; Huang, Chingyao; Tu, Chun Chen

    2011-10-01

    In this study, the Random Contention-based Resource Allocation (RACOON) medium access control (MAC) protocol is proposed to support quality of service (QoS) for multi-user mobile wireless body area networks (WBANs). Different from existing QoS designs that focus on a single WBAN, a multiuser WBAN QoS should further consider both inter-WBAN interference and inter-WBAN priorities. Similar problems have been studied in both overlapped wireless local area networks (WLANs) and Bluetooth piconets that need QoS support. However, these solutions are designed for non-medical transmissions that do not consider any priority scheme for medical applications. Most importantly, these studies focus only on static or low mobility networks. Network mobility of WBANs will introduce unnecessary inter-network collisions and energy waste, which are not considered by these solutions. The proposed multiuser-QoS protocol, RACOON, simultaneously satisfies the inter-WBAN QoS requirements and overcomes the performance degradation caused by WBAN mobility. Simulation results verify that RACOON provides better latency and energy control, compared with WBAN QoS protocols that do not consider inter-WBAN requirements.

  15. A new Self-Adaptive disPatching System for local clusters

    NASA Astrophysics Data System (ADS)

    Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng

    2015-12-01

    The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It promotes cluster resource utilization and improves the overall speed of tasks. It provides some extra functions for administrators and users. First of all, in order to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queueing jobs and the idle job slots, and then tunes the priority of users’ jobs dynamically. This means more jobs run and fewer job slots are idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor, and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 cpu cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management work for both administrator and users has been reduced greatly.
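
    The priority-tuning idea in SAPS (compare the number of queued jobs against the idle job slots and raise or lower user priorities accordingly) can be sketched roughly as follows; the thresholds and the update rule are assumptions for illustration only, not the actual Torque/Maui configuration.

      # Rough sketch: tune a user's scheduling priority from the ratio of the user's
      # queued jobs to the currently idle job slots (hypothetical update rule).
      def tune_priority(base_priority, queued_jobs, idle_slots, step=10):
          if idle_slots == 0:
              return base_priority
          pressure = queued_jobs / idle_slots
          if pressure > 2.0:        # many waiting jobs, few free slots: boost priority
              return base_priority + step
          if pressure < 0.5:        # few waiting jobs relative to free slots: lower it
              return max(0, base_priority - step)
          return base_priority

      print(tune_priority(100, queued_jobs=500, idle_slots=100))  # 110
      print(tune_priority(100, queued_jobs=20, idle_slots=100))   # 90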

  16. A Photo Album of Earth Scheduling Landsat 7 Mission Daily Activities

    NASA Technical Reports Server (NTRS)

    Potter, William; Gasch, John; Bauer, Cynthia

    1998-01-01

    Landsat7 is a member of a new generation of Earth observation satellites. Landsat7 will carry on the mission of the aging Landsat 5 spacecraft by acquiring high resolution, multi-spectral images of the Earth surface for strategic, environmental, commercial, agricultural and civil analysis and research. One of the primary mission goals of Landsat7 is to accumulate and seasonally refresh an archive of global images with full coverage of Earth's landmass, less the central portion of Antarctica. This archive will enable further research into seasonal, annual and long-range trending analysis in such diverse research areas as crop yields, deforestation, population growth, and pollution control, to name just a few. A secondary goal of Landsat7 is to fulfill imaging requests from our international partners in the mission. Landsat7 will transmit raw image data from the spacecraft to 25 ground stations in 20 subscribing countries. Whereas earlier Landsat missions were scheduled manually (as are the majority of current low-orbit satellite missions), the task of manually planning and scheduling Landsat7 mission activities would be overwhelmingly complex when considering the large volume of image requests, the limited resources available, spacecraft instrument limitations, and the limited ground image processing capacity, not to mention avoidance of foul weather systems. The Landsat7 Mission Operation Center (MOC) includes an image scheduler subsystem that is designed to automate the majority of mission planning and scheduling, including selection of the images to be acquired, managing the recording and playback of the images by the spacecraft, scheduling ground station contacts for downlink of images, and generating the spacecraft commands for controlling the imager, recorder, transmitters and antennas. The image scheduler subsystem autonomously generates 90% of the spacecraft commanding with minimal manual intervention. The image scheduler produces a conflict-free schedule for acquiring images of the "best" 250 scenes daily for refreshing the global archive. It then equitably distributes the remaining resources for acquiring up to 430 scenes to satisfy requests by international subscribers. The image scheduler selects candidate scenes based on priority and age of the requests, and predicted cloud cover and sun angle at each scene. It also selects these scenes to avoid instrument constraint violations and maximizes efficiency of resource usage by encouraging acquisition of scenes in clusters. Of particular interest to the mission planners, it produces the resulting schedule in a reasonable time, typically within 15 minutes.
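
    A greatly simplified, greedy version of the scene-selection step (rank candidate scenes by priority, request age, predicted cloud cover and sun angle, then fill a fixed daily quota) might look like the sketch below; the weights, fields and quota are assumptions, not the actual MOC image scheduler.

      # Sketch: greedy daily scene selection with a fixed quota (hypothetical scoring).
      def scene_score(s):
          return (2.0 * s["priority"]          # request priority (0..1)
                  + 1.0 * s["age_days"] / 30   # older requests rise in rank
                  - 1.5 * s["cloud_cover"]     # predicted cloud fraction (0..1)
                  + 0.5 * s["sun_elev"] / 90)  # higher sun elevation preferred

      candidates = [
          {"scene": "WRS 123/045", "priority": 0.9, "age_days": 4,  "cloud_cover": 0.2, "sun_elev": 55},
          {"scene": "WRS 200/030", "priority": 0.5, "age_days": 25, "cloud_cover": 0.7, "sun_elev": 35},
          {"scene": "WRS 014/032", "priority": 0.7, "age_days": 10, "cloud_cover": 0.1, "sun_elev": 60},
      ]

      DAILY_QUOTA = 2  # stand-in for the daily archive-refresh quota
      selected = sorted(candidates, key=scene_score, reverse=True)[:DAILY_QUOTA]
      print([s["scene"] for s in selected])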

  17. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
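
    To picture what a single dispatching rule does, the sketch below hand-codes three classic rules and applies them to a toy job set; this is only meant to illustrate rule-based dispatching, not the evolved tree-structured rules or the NELLI-GP ensemble itself.

      # Sketch: applying simple dispatching rules to choose the next job on a machine.
      jobs = [
          {"id": 1, "proc_time": 5, "due": 20},
          {"id": 2, "proc_time": 3, "due": 12},
          {"id": 3, "proc_time": 8, "due": 15},
      ]

      dispatching_rules = {
          "SPT": lambda j, now: j["proc_time"],                       # shortest processing time
          "EDD": lambda j, now: j["due"],                             # earliest due date
          "MDD": lambda j, now: max(j["due"], now + j["proc_time"]),  # modified due date
      }

      now = 4
      for name, rule in dispatching_rules.items():
          chosen = min(jobs, key=lambda j: rule(j, now))
          print(name, "-> job", chosen["id"])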

  18. Innovation and The Welfare Effects of Public Drug Insurance*

    PubMed Central

    Lakdawalla, Darius; Sood, Neeraj

    2010-01-01

    Rewarding inventors with inefficient monopoly power has long been regarded as the price of encouraging innovation. Prescription drug insurance escapes that trade-off and achieves an elusive goal: lowering static deadweight loss, without reducing incentives for innovation. As a result of this feature, the public provision of drug insurance can be welfare-improving, even for risk-neutral and purely self-interested consumers. The design of insurers’ cost-sharing schedules can either reinforce or mitigate this result. Schedules that impose higher consumer cost-sharing requirements on more expensive drugs help ensure that insurance subsidies translate into higher utilization, rather than pure increases in manufacturer profits. Moreover, some degree of price-negotiation with manufacturers is likely to be welfare-improving, but the optimal degree depends on the size of such transactions costs, as well as the social cost of weakening innovation incentives by lowering innovator profits. These results have practical implications for the evaluation of public drug insurance programs like the US Medicaid and Medicare Part D programs, along with European insurance schemes. PMID:20454467

  19. A Demonstrator Intelligent Scheduler For Sensor-Based Robots

    NASA Astrophysics Data System (ADS)

    Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.

    1987-10-01

    The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language to allow the assembly task to be defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, hence allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts presentation environment, making full use of the sensory detail available on the robot.

  20. Facilitating preemptive hardware system design using partial reconfiguration techniques.

    PubMed

    Dondo Gazzano, Julio; Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and then force the launch of a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration.

  1. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implementing preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and then force the launch of a reconfiguration process to implement a high-priority task. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, such as in dynamically scheduled systems, an implicit activation of the reconfiguration process is demanded. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the necessary tasks to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process and thus the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  2. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
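
    A toy genetic-algorithm loop over orderings of imaging requests illustrates the kind of objective described above (rewarding user priority while penalizing resource consumption and acquisition time); the permutation encoding, weights and operators below are assumptions, not the authors' formulation.

      # Toy GA sketch: evolve an ordering of imaging requests; earlier positions are
      # more likely to be scheduled, so high-priority, cheap requests should move up.
      import random
      random.seed(0)

      requests = [  # hypothetical imaging requests
          {"priority": 0.9, "resource_cost": 0.3, "duration": 0.2},
          {"priority": 0.4, "resource_cost": 0.6, "duration": 0.5},
          {"priority": 0.7, "resource_cost": 0.2, "duration": 0.3},
          {"priority": 0.2, "resource_cost": 0.8, "duration": 0.7},
      ]

      def fitness(order):
          # Discount each request's value by its slot position; reward priority,
          # penalize resource consumption and acquisition time.
          return sum((requests[r]["priority"] - 0.5 * requests[r]["resource_cost"]
                      - 0.3 * requests[r]["duration"]) / (pos + 1)
                     for pos, r in enumerate(order))

      def mutate(order):
          child = order[:]
          i, j = random.sample(range(len(child)), 2)
          child[i], child[j] = child[j], child[i]   # swap mutation keeps a valid permutation
          return child

      population = [random.sample(range(len(requests)), len(requests)) for _ in range(20)]
      for _ in range(50):                            # generations: keep the best half, mutate it
          population.sort(key=fitness, reverse=True)
          parents = population[:10]
          population = parents + [mutate(random.choice(parents)) for _ in range(10)]

      print(max(population, key=fitness))            # best request ordering found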

  3. Mothers' Perspectives on the Development of Their Preschoolers' Dietary and Physical Activity Behaviors and Parent-Child Relationship: Implications for Pediatric Primary Care Physicians.

    PubMed

    Pratt, Keeley J; Van Fossen, Catherine; Cotto-Maisonet, Jennifer; Palmer, Elizabeth N; Eneli, Ihuoma

    2017-07-01

    The study explores female caregivers' reflections on their relationship with their child (2-5 years old) and the development of their child's dietary and physical activity behaviors. Five, 90-minute semistructured focus groups were conducted to inquire about children's growth, eating behaviors and routines, physical activity, personality, and the parent-child relationship. Nineteen female caregivers diverse in race/ethnicity, age, and educational attainment participated. Participants reported that they maintained a schedule, but needed to be flexible to accommodate daily responsibilities. Family, social factors, and day care routines were influences on their children's behaviors. The main physical activity barriers were safety and time constraints. Guidance from pediatric primary care providers aimed at supporting female caregivers to build a positive foundation in their parent-child relationship, and to adopt and model healthy diet and physical activity behaviors that are respectful of schedules and barriers should be a priority for childhood obesity prevention.

  4. Efficient Access to Massive Amounts of Tape-Resident Data

    NASA Astrophysics Data System (ADS)

    Yu, David; Lauret, Jérôme

    2017-10-01

    Randomly restoring files from tapes degrades the read performance primarily due to frequent tape mounts. The high latency and time-consuming tape mount and dismount is a major issue when accessing massive amounts of data from tape storage. BNL’s mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of a scheduler software, called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT is also the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
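
    A core part of reducing tape mounts is batching restore requests by tape and ordering them by position before dispatching; the following is a minimal, assumed sketch of that batching step, not ERADAT's actual logic.

      # Sketch: batch pending restore requests by tape so each tape is mounted once,
      # then read files in position order to avoid back-and-forth seeking.
      from collections import defaultdict

      requests = [  # hypothetical (file, tape, position-on-tape) restore requests
          ("run42.dat", "T0312", 9050), ("run07.dat", "T0101", 120),
          ("run11.dat", "T0312", 400),  ("run99.dat", "T0101", 88000),
      ]

      by_tape = defaultdict(list)
      for fname, tape, pos in requests:
          by_tape[tape].append((pos, fname))

      # Mount the busiest tapes first; within a tape, stage files in position order.
      for tape in sorted(by_tape, key=lambda t: len(by_tape[t]), reverse=True):
          ordered = [fname for pos, fname in sorted(by_tape[tape])]
          print(f"mount {tape}: stage {ordered}")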

  5. Workforce deployment--a critical organizational competency.

    PubMed

    Harms, Roxanne

    2009-01-01

    Staff scheduling has historically been embedded within hospital operations, often defined by each new manager of a unit or program, and notably absent from the organization's practice and standards infrastructure and accountabilities of the executive team. Silvestro and Silvestro contend that "there is a need to recognize that hospital performance relies critically on the competence and effectiveness of roster planning activities, and that these activities are therefore of strategic importance." This article highlights the importance of including staff scheduling--or workforce deployment--in health care organizations' long-term strategic solutions to cope with the deepening workforce shortage (which is likely to hit harder than ever as the economy begins to recover). Viewing workforce deployment as a key organizational competency is a critical success factor for health care in the next decade, and the Workforce Deployment Maturity Model is discussed as a framework to enable organizations to measure their current capabilities, identify priorities and set goals for increasing organizational competency using a methodical and deliberate approach.

  6. Optimal pre-scheduling of problem remappings

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Saltz, Joel H.

    1987-01-01

    A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it may be possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic on one of the model problems is studied, and it is shown to be effective and nearly optimal.
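
    The dynamic-programming idea (pick, for each step, a mapping parameter that trades per-step execution cost against the cost of switching parameters) can be sketched as below; the cost model and the two-parameter example are hypothetical stand-ins for the paper's formulation.

      # Sketch: DP over steps x parameter choices; cost = execution cost of the step
      # under that parameter plus a switching cost when the parameter changes.
      def optimal_schedule(exec_cost, switch_cost):
          """exec_cost[t][p]: cost of step t with parameter p; returns (total, schedule)."""
          n_steps, n_params = len(exec_cost), len(exec_cost[0])
          best = [exec_cost[0][:]]                     # best[t][p]: min cost ending in p
          choice = [[None] * n_params]
          for t in range(1, n_steps):
              best.append([0.0] * n_params)
              choice.append([0] * n_params)
              for p in range(n_params):
                  prev = min(range(n_params),
                             key=lambda q: best[t - 1][q] + (switch_cost if q != p else 0.0))
                  best[t][p] = (exec_cost[t][p] + best[t - 1][prev]
                                + (switch_cost if prev != p else 0.0))
                  choice[t][p] = prev
          # Backtrack the optimal parameter sequence.
          p = min(range(n_params), key=lambda q: best[-1][q])
          schedule = [p]
          for t in range(n_steps - 1, 0, -1):
              p = choice[t][p]
              schedule.append(p)
          return min(best[-1]), schedule[::-1]

      costs = [[3.0, 5.0], [6.0, 2.0], [6.0, 2.0], [3.0, 5.0]]  # hypothetical per-step costs
      print(optimal_schedule(costs, switch_cost=1.5))           # (13.0, [0, 1, 1, 0])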

  7. Experience with dynamic reinforcement rates decreases resistance to extinction.

    PubMed

    Craig, Andrew R; Shahan, Timothy A

    2016-03-01

    The ability of organisms to detect reinforcer-rate changes in choice preparations is positively related to two factors: the magnitude of the change in rate and the frequency with which rates change. Gallistel (2012) suggested similar rate-detection processes are responsible for decreases in responding during operant extinction. Although effects of magnitude of change in reinforcer rate on resistance to extinction are well known (e.g., the partial-reinforcement-extinction effect), effects of frequency of changes in rate prior to extinction are unknown. Thus, the present experiments examined whether frequency of changes in baseline reinforcer rates impacts resistance to extinction. Pigeons pecked keys for variable-interval food under conditions where reinforcer rates were stable and where they changed within and between sessions. Overall reinforcer rates between conditions were controlled. In Experiment 1, resistance to extinction was lower following exposure to dynamic reinforcement schedules than to static schedules. Experiment 2 showed that resistance to presession feeding, a disruptor that should not involve change-detection processes, was unaffected by baseline-schedule dynamics. These findings are consistent with the suggestion that change detection contributes to extinction. We discuss implications of change-detection processes for extinction of simple and discriminated operant behavior and relate these processes to the behavioral-momentum based approach to understanding extinction. © 2016 Society for the Experimental Analysis of Behavior.

  8. Nano-JASMINE: Simulation of Data Outputs

    NASA Astrophysics Data System (ADS)

    Kobayashi, Y.; Yano, T.; Hatsutori, Y.; Gouda, N.; Murooka, J.; Niwa, Y.; Yamada, Y.

    We simulated the data outputs of the first Japanese astrometry satellite Nano-JASMINE, which is scheduled to be launched by the Cyclone-4 rocket in August 2011. The simulations were carried out using existing stellar catalogues such as the Hipparcos catalogue, the Tycho catalogue, and the Guide Star catalogue version 2.3. Several statistics are shown, such as the number of stars whose distances will be measured using annual aberration observations. The method for determining the initial direction of the satellite's spin axis is also discussed. In this case, the frequency of bright stars observed by the satellite is an important factor.

  9. ATAMM enhancement and multiprocessing performance evaluation

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.

    1994-01-01

    The algorithm to architecture mapping model (ATAMM) is a Petri net based model which provides a strategy for periodic execution of a class of real-time algorithms on a multicomputer dataflow architecture. The execution of large-grained, decision-free algorithms on homogeneous processing elements is studied. The ATAMM provides an analytical basis for calculating performance bounds on throughput characteristics. Extension of the ATAMM as a strategy for cyclo-static scheduling provides for a truly distributed ATAMM multicomputer operating system. An ATAMM testbed consisting of a centralized graph manager and three processors is described using embedded firmware on 68HC11 microcontrollers.

  10. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with length of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and is activated when the first input of the convolution becomes available. Thus, new threads get spawned at exactly the rate of N/M, where N is the total number of taps, and M is the decimation factor. Existing threads retire at the same rate of N/M. The implementation of an MRFIR is thus transformed into a problem of statically scheduling the minimum number of multipliers such that all threads can be completed on time. Solving the static scheduling problem is rather straightforward if one examines the Thread Decomposition Diagram, which is a table-like diagram that has rows representing computation threads and columns representing time. The control logic of the MRFIR can be implemented using simple counters. Instead of decomposing MRFIRs into subfilters as suggested by polyphase decomposition, the thread decomposition diagrams transform the problem into a familiar one of static scheduling, which can be easily solved as the input rate is constant.
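
    As a hedged illustration of the contrast the abstract draws between the naive full-rate approach and the per-output (thread) view, the short Python sketch below computes a decimated FIR both ways; the function names and the use of NumPy are illustrative assumptions, not the authors' FPGA implementation.

```python
import numpy as np

def naive_decimate(x, h, M):
    """Full-rate FIR followed by downsampling: (M-1)/M of the work is discarded."""
    full = np.convolve(x, h)[:len(x)]
    return full[::M]

def thread_decimate(x, h, M):
    """Per-output view: each decimated output is one finite convolution
    ("thread"), so only the needed samples are ever computed."""
    y = []
    for n in range(0, len(x), M):          # only the decimated output instants
        acc = 0.0
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                acc += h[k] * x[n - k]
        y.append(acc)
    return np.array(y)
```

    Scheduling a fixed pool of multipliers against these per-output threads is then exactly the static scheduling problem the innovation addresses.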

  11. Logic Model Checking of Time-Periodic Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Florian, Mihai; Gamble, Ed; Holzmann, Gerard

    2012-01-01

    In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.

  12. Design for mosquito abundance, diversity, and phenology sampling within the National Ecological Observatory Network

    USGS Publications Warehouse

    Hoekman, D.; Springer, Yuri P.; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.

    2016-01-01

    The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, sensitivity to abiotic factors and relevance for human health. We describe the design of mosquito population sampling in the context of NEON’s long term continental scale monitoring program, emphasizing the sampling design schedule, priorities and collection methods. Freely available NEON data and associated field and laboratory samples will increase our understanding of how mosquito abundance, demography, diversity and phenology are responding to land use and climate change.

  13. Assuring quality in high-consequence engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoover, Marcey L.; Kolb, Rachel R.

    2014-03-01

    In high-consequence engineering organizations, such as Sandia, quality assurance may be heavily dependent on staff competency. Competency-dependent quality assurance models are at risk when the environment changes, as it has with increasing attrition rates, budget and schedule cuts, and competing program priorities. Risks in Sandia's competency-dependent culture can be mitigated through changes to hiring, training, and customer engagement approaches to manage people, partners, and products. Sandia's technical quality engineering organization has been able to mitigate corporate-level risks by driving changes that benefit all departments, and in doing so has assured Sandia's commitment to excellence in high-consequence engineering and national service.

  14. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    NASA Astrophysics Data System (ADS)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
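
    To make the NEH insertion framework the heuristic builds on concrete, here is a hedged Python sketch; the makespan recursion is the standard one, while `priority_key` stands in for the article's four-moment priority rule, whose exact formula is not reproduced here.

```python
import numpy as np

def makespan(seq, p):
    """Makespan of a permutation flow shop; p[j, m] is the processing
    time of job j on machine m."""
    c = np.zeros(p.shape[1])
    for j in seq:
        for m in range(p.shape[1]):
            c[m] = max(c[m], c[m - 1] if m > 0 else 0.0) + p[j, m]
    return c[-1]

def neh(p, priority_key):
    """NEH framework: order jobs by a priority rule, then insert each job at
    the position in the partial sequence that minimizes makespan."""
    order = sorted(range(p.shape[0]), key=priority_key, reverse=True)
    seq = []
    for j in order:
        _, pos = min((makespan(seq[:i] + [j] + seq[i:], p), i)
                     for i in range(len(seq) + 1))
        seq.insert(pos, j)
    return seq, makespan(seq, p)

# Classic NEH sorts by total processing time; the article's rule additionally
# folds in mean absolute deviation, skewness, and kurtosis of each job's times.
p = np.random.default_rng(0).uniform(1, 10, size=(6, 4))
print(neh(p, priority_key=lambda j: p[j].sum()))
```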

  15. A Delphi approach to reach consensus on primary care guidelines regarding youth violence prevention.

    PubMed

    De Vos, Edward; Spivak, Howard; Hatmaker-Flanigan, Elizabeth; Sege, Robert D

    2006-10-01

    Anticipatory guidance is a cornerstone of modern pediatric practice. In recognition of its importance for child well being, injury prevention counseling is a standard element of that guidance. Over the last 20 years, there has been growing recognition that intentional injury or violence is one of the leading causes of morbidity and mortality among youth. The US Surgeon General identified youth violence as a major public health issue and a top priority. Yet, only recently has the scope of injury prevention counseling been expanded to include violence. Pediatric health care providers agree that youth violence-prevention counseling should be provided, yet the number of topics available, the already lengthy list of other anticipatory guidance topics to be covered, developmental considerations, and the evidence base make the selection of an agreed-on set a considerable challenge. The purpose of this study was to systematically identify and prioritize specific counseling topics in violence prevention that could be integrated into anticipatory guidance best practice. A modified electronic Delphi process was used to gain consensus among 50 national multidisciplinary violence-prevention experts. Participants were unaware of other participants' identities. The process consisted of 4 serial rounds of inquiry beginning with a broad open-ended format for the generation of anticipatory guidance and screening topics across 5 age groups (infant, toddler, school age, adolescent, and all ages). Each subsequent round narrowed the list of topics toward the development of a manageable set of essential topics for screening and counseling about positive youth development and violence prevention. Forty-seven unique topics were identified, spanning birth to age 21 years. Topics cover 4 broad categories (building blocks): physical safety, parent centered, child centered, and community connection. Participants placed topics into their developmentally appropriate visit-based schedule and made suggestions for an appropriate topic reinforcement schedule. The resulting schedule provides topics for introduction and reinforcement at each visit. The Delphi technique proved a useful approach for accessing expert opinion, for analyzing and synthesizing results, for achieving consensus, and for setting priorities among the numerous anticipatory guidance and assessment topics relevant for raising resilient, violence-free youth.

  16. Applicability of Online Education to Large Undergraduate Engineering Courses

    NASA Astrophysics Data System (ADS)

    Bir, Devayan Debashis

    With the increase in undergraduate engineering enrollment, many universities have chosen to teach introductory engineering courses such as Statics of Engineering and Mechanics of Materials in large classes due to budget limitations. With the overwhelming literature against traditionally taught large classes, this study aims to see the effects of the trending online pedagogy. Online courses are the latest trend in education due to the flexibility they provide to students in terms of schedule and pace of learning, with the added advantage of being less expensive for the university over time. In this research, the effects of online lectures on engineering students' course performance and students' attitudes towards online learning were examined. Specifically, the academic performance of students enrolled in a traditionally taught, lecture-format Mechanics of Materials course was compared with the performance of students in an online Mechanics of Materials course in summer 2016. To see the effect of the two different teaching approaches across student types, students were categorized by gender, enrollment status, nationality, and by the grades students obtained for Statics, one of the prerequisite courses for Mechanics of Materials. Student attitudes towards the online course help inform the continuous improvement of the course, specifically, providing quality education through the online medium in terms of course content and delivery. The findings of the study show that the online pedagogy negatively affects student academic performance when compared to the traditional face-to-face pedagogy across all categories, except for the high-scoring students. Student attitudes reveal that while students enjoyed the schedule flexibility and control over their pace of studying, they faced issues with self-regulation and face-to-face interaction.

  17. Leptin-sensitive neurons in the arcuate nucleus integrate activity and temperature circadian rhythms and anticipatory responses to food restriction

    PubMed Central

    Li, Ai-Jun; Dinh, Thu T.; Jansen, Heiko T.; Ritter, Sue

    2013-01-01

    Previously, we investigated the role of neuropeptide Y and leptin-sensitive networks in the mediobasal hypothalamus in sleep and feeding and found profound homeostatic and circadian deficits with an intact suprachiasmatic nucleus. We propose that the arcuate nuclei (Arc) are required for the integration of homeostatic circadian systems, including temperature and activity. We tested this hypothesis using saporin toxin conjugated to leptin (Lep-SAP) injected into Arc in rats. Lep-SAP rats became obese and hyperphagic and progressed through a dynamic phase to a static phase of growth. Circadian rhythms were examined over 49 days during the static phase. Rats were maintained on a 12:12-h light-dark (LD) schedule for 13 days and, thereafter, maintained in continuous dark (DD). After the first 13 days of DD, food was restricted to 4 h/day for 10 days. We found that the activity of Lep-SAP rats was arrhythmic in DD, but that food anticipatory activity was, nevertheless, entrainable to the restricted feeding schedule, and the entrained rhythm persisted during the subsequent 3-day fast in DD. Thus, for activity, the circuitry for the light-entrainable oscillator, but not for the food-entrainable oscillator, was disabled by the Arc lesion. In contrast, temperature remained rhythmic in DD in the Lep-SAP rats and did not entrain to restricted feeding. We conclude that the leptin-sensitive network that includes the Arc is required for entrainment of activity by photic cues and entrainment of temperature by food, but is not required for entrainment of activity by food or temperature by photic cues. PMID:23986359

  18. Patients' views on priority setting in neurosurgery: A qualitative study.

    PubMed

    Gunaratnam, Caroline; Bernstein, Mark

    2016-01-01

    Accountability for Reasonableness is an ethical framework which has been implemented in various health care systems to improve and evaluate the fairness of priority setting. This framework is grounded on four mandatory conditions: relevance, publicity, appeals, and enforcement. There have been few studies which have evaluated the patient stakeholders' acceptance of this framework; certainly no studies have been done on patients' views on the prioritization system for allocating patients for operating time in a system with pressure on the resource of inpatient beds. The aim of this study is to examine neurosurgical patients' views on the prioritization of patients for operating theater (OT) time on a daily basis at a tertiary and quaternary referral neurosurgery center. Semi-structured face-to-face interviews were conducted with thirty-seven patients, recruited from the neurosurgery clinic at Toronto Western Hospital. Family members and friends who accompanied the patient to their clinic visit were encouraged to contribute to the discussion. Interviews were audio recorded, transcribed verbatim, and subjected to thematic analysis using open and axial coding. Overall, patients are supportive of the concept of a priority-setting system based on fairness, but felt that a few changes would help to improve the fairness of the current system. These changes include lowering the level of priority given to volume-funded cases and giving a higher level of priority to previously canceled surgeries when they are rescheduled. Good communication, early notification, and rescheduling canceled surgeries as soon as possible were important factors that directly reflected the patients' confidence level in their doctor, the hospital, and the health care system. This study is the first clinical qualitative study of patients' perspective on a prioritization system used for allocating neurosurgical patients for OT time on a daily basis in a socialized not-for-profit health care system with fixed resources.

  19. [Health promotion and prevention in the economic crisis: the role of the health sector. SESPAS report 2014].

    PubMed

    Márquez-Calderón, Soledad; Villegas-Portero, Román; Gosalbes Soler, Victoria; Martínez-Pecino, Flora

    2014-06-01

    This article reviews trends in lifestyle factors and identifies priorities in the fields of prevention and health promotion in the current economic recession. Several information sources were used, including a survey of 30 public health and primary care experts. Between 2006 and 2012, no significant changes in lifestyle factors were detected except for a decrease in habitual alcohol drinking. There was a slight decrease in the use of illegal drugs and a significant increase in the use of psychoactive drugs. Most experts believe that decision-making about new mass screening programs and changes in vaccination schedules needs to be improved by including opportunity cost analysis and increasing the transparency and independence of the professionals involved. Preventive health services are contributing to medicalization, but experts' opinions are divided on the need for some preventive activities. Priorities in preventive services are mental health and HIV infection in vulnerable populations. Most experts trust in the potential of health promotion to mitigate the health effects of the economic crisis. Priority groups are children, unemployed people and other vulnerable groups. Priority interventions are community health activities (working in partnership with local governments and other sectors), advocacy, and mental health promotion. Effective tools for health promotion that are currently underused are legislation and mass media. There is a need to clarify the role of the healthcare sector in intersectorial activities, as well as to acknowledge that social determinants of health depend on other sectors. Experts also warn of the consequences of austerity and of policies that negatively impact on living conditions. Copyright © 2013 SESPAS. Published by Elsevier Espana. All rights reserved.

  20. Closed-loop control of a core free rolled EAP actuator

    NASA Astrophysics Data System (ADS)

    Sarban, Rahimullah; Oubaek, Jakob; Jones, Richard W.

    2009-03-01

    Tubular dielectric electro-active polymer actuators, also referred to as tubular InLastors, have many possible applications. One of the most obvious is as a positioning push-type device. This work examines the feedback closed-loop control of a core-free tubular InLastor fabricated from sheets of PolyPowerTM, an EAP material developed by Danfoss PolyPower A/S, which uses a silicone elastomer in conjunction with smart compliant electrode technology. This is part of an ongoing study to develop a precision positioning feedback control system for this device. Initially, proportional and integral (PI) control is considered to provide position control of the tubular InLastor. Control of tubular InLastors requires more than the conventional control used for linear actuators, because the InLastors display highly nonlinear static voltage-strain and voltage-force characteristics as well as dynamic hysteresis and time-dependent strain behavior. In an attempt to overcome the nonlinear static voltage-strain characteristics of the InLastors and to improve the dynamic performance of the controlled device, a gain scheduling algorithm is then integrated into the PI controlled system.
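
    A minimal sketch of the gain-scheduling idea layered on a PI loop is shown below, assuming gains interpolated from a hypothetical lookup table indexed by the commanded operating point; it illustrates the general technique rather than the authors' controller or the PolyPower actuator model.

```python
import numpy as np

class GainScheduledPI:
    """PI controller whose gains are interpolated from tables indexed by
    the operating point, to counter a nonlinear voltage-strain curve."""
    def __init__(self, operating_points, kp_table, ki_table, dt):
        self.pts = np.asarray(operating_points)
        self.kp = np.asarray(kp_table)
        self.ki = np.asarray(ki_table)
        self.dt = dt
        self.integral = 0.0

    def update(self, reference, measurement):
        # Schedule the gains on the commanded operating point.
        kp = np.interp(reference, self.pts, self.kp)
        ki = np.interp(reference, self.pts, self.ki)
        error = reference - measurement
        self.integral += error * self.dt
        return kp * error + ki * self.integral

# Hypothetical tables: stiffer gains at larger commanded strains.
ctrl = GainScheduledPI([0.0, 0.5, 1.0], [2.0, 3.5, 5.0], [0.5, 0.8, 1.2], dt=0.001)
u = ctrl.update(reference=0.4, measurement=0.32)
```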

  1. Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles

    NASA Technical Reports Server (NTRS)

    Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.

    2009-01-01

    Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right half plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. The gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.

  2. Hanford Site Asbestos Abatement Plan. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mewes, B.S.

    The Hanford Site Asbestos Abatement Plan (Plan) lists priorities for asbestos abatement activities to be conducted in Hanford Site facilities. The Plan is based on asbestos assessment information gathered in fiscal year 1989 that evaluated all Hanford Site facilities for the presence and condition of asbestos. Of those facilities evaluated, 414 contain asbestos-containing materials and are classified according to the potential risk of asbestos exposure to building personnel. The Plan requires that asbestos condition update reports be prepared for all affected facilities. The reporting is completed by the asbestos coordinator for each of the 414 affected facilities and transmitted to the Plan manager annually. The Plan manager uses this information to reprioritize future project lists. Currently, five facilities are determined to be Class A1, indicating a high potential for asbestos exposure. Class A1 and B1 facilities are the highest priority for asbestos abatement. Abatement of the Class A1 and B1 facilities is scheduled through fiscal year 1997. Removal of asbestos in B1 facilities will reduce the risk for further Class A conditions to arise.

  3. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A)

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is the twentieth monthly report for the Earth Observing System/Advanced Microwave Sounding Unit-A (EOS/AMSU-A), Contract NAS5-32314, and covers the period from 1 August 1994 through 31 August 1994. This period is the eighth month of the Implementation Phase which provides for the design, fabrication, assembly, and test of the first EOS/AMSU-A, the Protoflight Model. During this period the number one priority for the program continued to be the issuance of Requests for Quotations (RFQ) to suppliers and the procurement of the long-lead receiver components. Significant effort was also dedicated to preparation and conduct of internal design reviews and preparation for the PDR scheduled in September. An overview of the program status, including key events, action items, and documentation submittals, is provided in Section 2 of this report. The Program Manager's 'Priority Issues' are defined in Section 3. Sections 4 through 7 provide detailed progress reports for the system engineering effort, each subsystem, performance assurance, and configuration/data management. Contractual matters are discussed in Section 8.

  4. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
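
    As a loose, hedged sketch of the contracting idea (not Kale's exact CWN algorithm), a newly created task can be forwarded hop by hop toward a markedly less loaded neighbor and kept wherever no neighbor is sufficiently better; the load table, neighbor map, and threshold below are illustrative assumptions.

```python
def place_task(origin, load, neighbors, threshold=2):
    """Forward a new task toward lighter-loaded neighbors until no neighbor
    is better by at least `threshold`; then keep it on the current node."""
    node = origin
    while True:
        candidates = neighbors.get(node, [])
        best = min(candidates, key=lambda n: load[n]) if candidates else node
        if best != node and load[best] + threshold <= load[node]:
            node = best                      # contract the task one hop onward
        else:
            load[node] += 1                  # accept the task here
            return node

# Tiny example: a 4-node ring with uneven load.
load = {0: 7, 1: 5, 2: 1, 3: 4}
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(place_task(0, load, ring))             # the task migrates toward node 2
```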

  5. Optimization of an Aeroservoelastic Wing with Distributed Multiple Control Surfaces

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.

    2015-01-01

    This paper considers the aeroelastic optimization of a subsonic transport wingbox under a variety of static and dynamic aeroelastic constraints. Three types of design variables are utilized: structural variables (skin thickness, stiffener details), the quasi-steady deflection scheduling of a series of control surfaces distributed along the trailing edge for maneuver load alleviation and trim attainment, and the design details of an LQR controller, which commands oscillatory hinge moments into those same control surfaces. Optimization problems are solved where a closed loop flutter constraint is forced to satisfy the required flight margin, and mass reduction benefits are realized by relaxing the open loop flutter requirements.

  6. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we try to describe synthetically the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems through parallel, distributed and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  7. KSC-02pd0736

    NASA Image and Video Library

    2002-05-16

    KENNEDY SPACE CENTER, FLA. - Suspended from the overhead crane, the SHI Research Double Module (SHI/RDM) travels across the Space Station Processing Facility to the payload canister waiting at right. The module will be placed in the canister for transport to the Orbiter Processing Facility where it will be installed in Columbia's payload bay for mission STS-107. SHI/RDM is the primary payload of the research mission, with experiments ranging from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments. STS-107 is scheduled to launch July 19, 2002

  8. KSC01pd1765

    NASA Image and Video Library

    2001-12-04

    KENNEDY SPACE CENTER, Fla. - STS-108 Mission Specialist Daniel M. Tani is happy to be suiting up for launch before heading to Launch Pad 39B and Space Shuttle Endeavour. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition 3 and Expedition 4 crews; bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello; and the crew's completion of robotics tasks and a spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Launch is scheduled for 5:45 p.m. EST Dec. 4, 2001, from Launch Pad 39B

  9. Interleaved Observation Execution and Rescheduling on Earth Observing Systems

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Frank, Jeremy; Smith, David; Morris, Robert; Dungan, Jennifer

    2003-01-01

    Observation scheduling for Earth orbiting satellites solves the following problem: given a set of requests for images of the Earth, a set of instruments for acquiring those images distributed on a collection of orbiting satellites, and a set of temporal and resource constraints, generate a set of assignments of instruments and viewing times to those requests that satisfy those constraints. Observation scheduling is often construed as a constrained optimization problem with the objective of maximizing the overall utility of the science data acquired. The utility of an image is typically based on the intrinsic importance of acquiring it (for example, its importance in meeting a mission or science campaign objective) as well as the expected value of the data given current viewing conditions (for example, if the image is occluded by clouds, its value is usually diminished). Currently, science observation scheduling for Earth Observing Systems is done on the ground, for periods covering a day or more. Schedules are uplinked to the satellites and are executed rigorously. An alternative to this scenario is to do some of the decision-making about what images are to be acquired on board. The principal argument for this capability is that the desirability of making an observation can change dynamically, because of changes in meteorological conditions (e.g. cloud cover), unforeseen events such as fires, floods, or volcanic eruptions, or unexpected changes in satellite or ground station capability. Furthermore, since satellites can only communicate with the ground between 5% to 10% of the time, it may be infeasible to make the desired changes to the schedule on the ground and uplink the revisions in time for the on-board system to execute them. Examples of scenarios that motivate an on-board capability for revising schedules include the following. First, if a desired visual scene is completely obscured by clouds, then there is little point in taking it. In this case, satellite resources, such as power and storage space, can be better utilized taking another image that is of higher quality. Second, if an unexpected but important event occurs (such as a fire, flood, or volcanic eruption), there may be good reason to take images of it, instead of expending satellite resources on some of the lower priority scheduled observations. Finally, if there is unexpected loss of capability, it may be impossible to carry out the schedule of planned observations. For example, if a ground station goes down temporarily, a satellite may not be able to free up enough storage space to continue with the remaining schedule of observations. This paper describes an approach for interleaving execution of observation schedules with dynamic schedule revision based on changes to the expected utility of the acquired images. We describe the problem in detail, formulate an algorithm for interleaving schedule revision and execution, and discuss refinements to the algorithm based on the need for search efficiency. We summarize with a brief discussion of the tests performed on the system.
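
    The loop below is a deliberately simplified, hedged sketch of interleaving execution with utility-driven revision: before each acquisition slot the pending requests are re-scored against the latest conditions (for example, a cloud forecast) and the best feasible one is taken. The request fields, the utility function, and the single storage resource are illustrative assumptions, not the paper's formulation.

```python
def run_interleaved(requests, utility, storage, horizon):
    """Greedy execute/revise loop: re-score pending requests at every slot
    and acquire the highest-utility request that still fits in storage."""
    executed = []
    for t in range(horizon):
        feasible = [r for r in requests if r["cost"] <= storage]
        if not feasible:
            continue
        best = max(feasible, key=lambda r: utility(r, t))
        if utility(best, t) <= 0.0:
            continue                      # e.g., every candidate scene is clouded out
        storage -= best["cost"]
        executed.append((t, best["name"]))
        requests.remove(best)
    return executed

# Utility collapses when the scene is forecast to be cloudy at slot t.
def utility(req, t):
    return req["priority"] * (1.0 - req["clouds"][t])

requests = [
    {"name": "fire_scene", "priority": 9, "cost": 2, "clouds": [0.1, 0.1, 0.9]},
    {"name": "survey_A",   "priority": 4, "cost": 1, "clouds": [0.8, 0.2, 0.1]},
]
print(run_interleaved(requests, utility, storage=3, horizon=3))
```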

  10. SAFARI-1: Achieving conversion to LEU - A local challenge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piani, C.S.B.

    2008-07-15

    Two years have passed since the South African Department of Minerals and Energy authorised the conversion from High Enriched Uranium (HEU) to Low Enriched Uranium (LEU) of the South African Research Reactor (SAFARI-1) and the associated fuel manufacturing at Pelindaba. The scheduling, as originally proposed, allowed approximately three years for the full conversion of the reactor, anticipating simultaneous manufacturing ability from the fuel production plant. Due to technical difficulties experienced in the conversion of the local manufacturing plant from HEU (UAl alloy) to LEU (U silicide) and the uncertainty as to costing and scheduling of such an achievement, the conversion of SAFARI-1 based on local supply has been allocated a lower priority. The acquisition in mid-2006 of 2 LEU silicide elements of SA design, manufactured by AREVA-CERCA and irradiated as test elements in SAFARI-1 to burn-ups of approximately 65% each, was successfully accomplished within 9 cycles of irradiation each. Furthermore, four 'Hybrid' elements (AREVA-CERCA plates assembled locally at Pelindaba) are ready for irradiation and have received regulatory authorisation to load. This will enable the SAFARI-1 conversion program to continue systematically according to an agreed schedule. This paper will trace the developments of the above and reflect the current status and the rescheduled conversion phases of the reactor according to latest expectations. (author)

  11. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle from the Thomas algorithm. The proposed 3-D directionally split solver is based on the static scheduling of processors where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty about two times over the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
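
    For reference, the serial Thomas algorithm whose forward and backward sweeps the paper pipelines is sketched below in Python; the pipelined, multi-processor reformulation itself is not reproduced, and the array layout is an assumption.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] ignored),
    b = main diagonal, c = super-diagonal, d = right-hand side."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back-substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```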

  12. Time: a vital resource.

    PubMed

    Collins, Sandra K; Collins, Kevin S

    2004-01-01

    Resolving problems with time management requires an understanding of the concept of working smarter rather than harder. Therefore, managing time effectively is a vital responsibility of department managers. When developing a plan for more effectively managing time, it is important to carefully analyze where time is currently being used/lost. Keeping a daily log can be a time consuming effort. However, the log can provide information about ways that time may be saved and how to organize personal schedules to maximize time efficiency. The next step is to develop a strategy to decrease wasted time and create a more cohesive radiology department. The following list of time management strategies provides some suggestions for developing a plan. Get focused. Set goals and priorities. Get organized. Monitor individual motivation factors. Develop memory techniques. In healthcare, success means delivering the highest quality of care by getting organized, meeting deadlines, creating efficient schedules and appropriately budgeting resources. Effective time management focuses on knowing what needs to be done when. The managerial challenge is to shift the emphasis from doing everything all at once to orchestrating the departmental activities in order to maximize the time given in a normal workday.

  13. Stakeholders’ Views on Barriers to Research on Controversial Controlled Substances

    PubMed Central

    Rhodes; Andreae; Bourgiose; Indyk; Rhodes; Sacks

    2017-01-01

    Many diseases and disease symptoms still lack effective treatment. At the same time, certain controversial Schedule I drugs, such as heroin and cannabis, have been reputed to have considerable therapeutic potential for addressing significant medical problems. Yet, there is a paucity of U.S. clinical studies on the therapeutic uses of controlled drugs. For example, people living with HIV/AIDS experience a variety of disease- and medication-related symptoms. Their chronic pain is intense, frequent, and difficult to treat. Nevertheless, clinical research on compassionate management of their chronic symptoms, which should be a research priority, is stymied. We employed qualitative methods to develop an understanding of the barriers to research on potential therapeutic uses of Schedule I drugs so that they might be addressed. We elicited the perspectives of key stakeholder groups that would be involved in such studies: people living with HIV/AIDS, clinicians, and Institutional Review Board members. As we identified obstacles to research, we found that all stakeholder groups arrived at the same conclusion: that clinical research on the therapeutic potential of these drugs is ethically required.

  14. Enabling Remote Health-Caring Utilizing IoT Concept over LTE-Femtocell Networks.

    PubMed

    Hindia, M N; Rahman, T A; Ojukwu, H; Hanafi, E B; Fattouh, A

    2016-01-01

    As the enterprise of the "Internet of Things" is rapidly gaining widespread acceptance, sensors are being deployed in an unrestrained manner around the world to make efficient use of this new technological evolution. A recent survey has shown that sensor deployments over the past decade have increased significantly and has predicted an upsurge in the future growth rate. In health-care services, for instance, sensors are used as a key technology to enable Internet of Things oriented health-care monitoring systems. In this paper, we have proposed a two-stage fundamental approach to facilitate the implementation of such a system. In the first stage, sensors promptly gather together the particle measurements of an android application. Then, in the second stage, the collected data are sent over a Femto-LTE network following a new scheduling technique. The proposed scheduling strategy is used to send the data according to the application's priority. The efficiency of the proposed technique is demonstrated by comparing it with that of well-known algorithms, namely, proportional fairness and exponential proportional fairness.

  15. Cell transmission model of dynamic assignment for urban rail transit networks.

    PubMed

    Xu, Guangming; Zhao, Shuo; Shi, Feng; Zhang, Feilian

    2017-01-01

    For an urban rail transit network, the space-time flow distribution can play an important role in evaluating and optimizing the space-time resource allocation. For obtaining the space-time flow distribution without the restriction of schedules, a dynamic assignment problem is proposed based on the concept of continuous transmission. To solve the dynamic assignment problem, the cell transmission model is built for urban rail transit networks. The priority principle, queuing process, capacity constraints and congestion effects are considered in the cell transmission mechanism. Then an efficient method is designed to solve the shortest path for an urban rail network, which decreases the computing cost for solving the cell transmission model. The instantaneous dynamic user optimal state can be reached with the method of successive averages. Many evaluation indexes of passenger flow can be generated to provide effective support for the optimization of train schedules and the capacity evaluation of an urban rail transit network. Finally, the model and its potential application are demonstrated via two numerical experiments using a small-scale network and the Beijing Metro network.
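
    The method of successive averages used to reach the instantaneous dynamic user-optimal state can be sketched in a few lines; the network loading step below is left as a hypothetical callable (for example, an all-or-nothing shortest-path loading), so this is an illustration of the averaging scheme rather than the paper's cell transmission model.

```python
def msa(initial_flow, load_network, n_iter=50):
    """Method of successive averages: blend the current link flows with an
    auxiliary loading using a diminishing step size 1/k."""
    flow = dict(initial_flow)
    for k in range(1, n_iter + 1):
        aux = load_network(flow)            # auxiliary flows under current costs
        step = 1.0 / k
        flow = {link: (1.0 - step) * flow[link] + step * aux.get(link, 0.0)
                for link in flow}
    return flow
```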

  16. Enabling Remote Health-Caring Utilizing IoT Concept over LTE-Femtocell Networks

    PubMed Central

    Hindia, M. N.; Rahman, T. A.; Ojukwu, H.; Hanafi, E. B.; Fattouh, A.

    2016-01-01

    As the enterprise of the “Internet of Things” is rapidly gaining widespread acceptance, sensors are being deployed in an unrestrained manner around the world to make efficient use of this new technological evolution. A recent survey has shown that sensor deployments over the past decade have increased significantly and has predicted an upsurge in the future growth rate. In health-care services, for instance, sensors are used as a key technology to enable Internet of Things oriented health-care monitoring systems. In this paper, we have proposed a two-stage fundamental approach to facilitate the implementation of such a system. In the first stage, sensors promptly gather together the particle measurements of an android application. Then, in the second stage, the collected data are sent over a Femto-LTE network following a new scheduling technique. The proposed scheduling strategy is used to send the data according to the application’s priority. The efficiency of the proposed technique is demonstrated by comparing it with that of well-known algorithms, namely, proportional fairness and exponential proportional fairness. PMID:27152423

  17. Common meanings of good and bad sleep in a healthy population sample.

    PubMed

    Dickerson, Suzanne S; Klingman, Karen J; Jungquist, Carla R

    2016-09-01

    The study's purpose was to understand the common meanings and shared practices related to good and bad sleep from narratives of a sample of healthy participants. Interpretive phenomenology was the approach used to analyze narratives of the participants' everyday experiences with sleep. Participants were interviewed and asked to describe typical good and bad nights' sleep, what contributes to their sleep experience, and the importance of sleep in their lives. Team interpretations of narratives identified common themes by consensus. The setting was a medium-sized city in New York State (upper west region). A sample of 30 healthy participants was drawn from a parent study (n=300) testing the sleep questions from the Behavioral Risk Factor Surveillance System from the Centers for Disease Control and Prevention. Interpretations of good and bad sleep were examined. Participants described similar experiences of good and bad sleep, often directly related to their ability to schedule time to sleep, fall asleep, and maintain sleep. Worrying about life stresses and interruptions prevented participants from falling asleep and staying asleep. Yet, based on current life priorities (socializers, family work focused, and optimum health seekers), they had differing values related to seeking sleep opportunities and strategizing to overcome challenges. The participants' priorities reflected the context of their main concerns and stresses in life that influenced the importance given to promoting sleep opportunities. Public health messages tailored to life priorities could be developed to promote healthy sleep practices. Copyright © 2016 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  18. High-precision control of LSRM based X-Y table for industrial applications.

    PubMed

    Pan, J F; Cheung, Norbert C; Zou, Yu

    2013-01-01

    The design of an X-Y table applying the direct-drive linear switched reluctance motor (LSRM) principle is proposed in this paper. The proposed X-Y table has the characteristics of low cost and a simple and stable mechanical structure. After the design procedure is introduced, an adaptive position control method based on online parameter identification and a pole-placement regulation scheme is developed for the X-Y table. Experimental results demonstrate its feasibility and its superiority over a traditional PID controller, with better dynamic response, static performance and robustness to disturbances. It is expected that the novel two-dimensional direct-drive system will find applications in the high-precision manufacturing area. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Hepatitis A vaccine should receive priority in National Immunization Schedule in India.

    PubMed

    Verma, Ramesh; Khanna, Pardeep

    2012-08-01

    Hepatitis A is an acute, usually self-limiting infection of the liver caused by a virus known as hepatitis A virus (HAV). Humans are the only reservoir of the virus; transmission occurs primarily through the fecal-oral route and is closely associated with poor sanitary conditions. The virus has a worldwide distribution and causes about 1.5 million cases of clinical hepatitis each year. The risk of developing symptomatic illness following HAV infection is directly correlated with age. As many as 85% of children below 2 y and 50% of those between 2-5 y infected with HAV are anicteric, and among older children and adults, infection usually causes clinical disease, with jaundice occurring in more than 70% of cases. The infection is usually self-limiting with occasional fulminant hepatic failure and mortality. In most developing countries in Asia and Africa, hepatitis A is highly endemic such that a large proportion of the population acquires immunity through asymptomatic infection early in life. HAV is endemic in India; most of the population is infected asymptomatically in early childhood with life-long immunity. Several outbreaks of hepatitis A in various parts of India have been recorded in the past decade such that anti-HAV positivity varied from 26 to 85%. Almost 50% of children of ages 1-5 y were found to be susceptible to HAV. Any one of the licensed vaccines may be used since all have nearly similar efficacy and safety profiles (except for post-exposure prophylaxis / immunocompromised patients, where only inactivated vaccines may be used). Two doses 6 mo apart are recommended for all vaccines. All hepatitis A vaccines are licensed for use in children aged 1 y or older. However, in the Indian scenario, it is preferable to administer the vaccines at age 18 mo or more when maternal antibodies have completely declined. Vaccination at this age is preferable to later since it is easier to integrate with the existing schedule, protects those who have no antibodies, and protects children by the time they attend day care. In India the vaccine against hepatitis A is available for the people who can afford it, but the government of India should give this vaccine priority in the national immunization schedule.

  20. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
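
    The static dependence analysis mentioned for the triangular solve is commonly expressed as level scheduling: rows whose off-diagonal entries reference only earlier levels can be solved in parallel. The Python sketch below, assuming a SciPy CSR lower-triangular factor, illustrates that analysis; it is not the authors' Sequent Balance implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def level_schedule(L):
    """Group the rows of a sparse lower-triangular matrix into levels; all
    rows in one level depend only on rows from earlier levels and can be
    solved concurrently."""
    L = csr_matrix(L)
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        cols = L.indices[L.indptr[i]:L.indptr[i + 1]]
        deps = cols[cols < i]                 # strictly lower-triangular entries
        if deps.size:
            level[i] = level[deps].max() + 1
    return [np.flatnonzero(level == k) for k in range(level.max() + 1)]
```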

  1. STS-108 Mission Specialist Daniel M. Tani final suit checkout

    NASA Technical Reports Server (NTRS)

    2001-01-01

    STS-108 Mission Specialist Daniel M. Tani final suit checkout KSC-01PD-1717 KENNEDY SPACE CENTER, Fla. - STS-108 Mission Specialist Daniel M. Tani waves as he undergoes final suit check before launch on Nov. 29. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews; bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello; and completion of robotics tasks and a spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Liftoff is scheduled for 7:41 p.m. EST.

  2. STS-108 Mission Specialist Linda A. Godwin final suit checkout

    NASA Technical Reports Server (NTRS)

    2001-01-01

    STS-108 Mission Specialist Linda A. Godwin final suit checkout KSC-01PD-1720 KENNEDY SPACE CENTER, Fla. -- STS-108 Mission Specialist Linda A. Godwin undergoes final suit check before launch on mission STS-108 Nov. 29. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews; bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello; and completion of robotics tasks and a spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Liftoff is scheduled for 7:41 p.m. EST.

  3. Circadian rhythms and fractal fluctuations in forearm motion

    NASA Astrophysics Data System (ADS)

    Hu, Kun; Hilton, Michael F.

    2005-03-01

    Recent studies have shown that the circadian pacemaker --- an internal body clock located in the brain which is normally synchronized with the sleep/wake behavioral cycles --- influences key physiologic functions such as the body temperature, hormone secretion and heart rate. Surprisingly, no previous studies have investigated whether the circadian pacemaker impacts human motor activity --- a fundamental physiologic function. We investigate high-frequency actigraph recordings of forearm motion from a group of young and healthy subjects during a forced desynchrony protocol which makes it possible to decouple the sleep/wake cycles from the endogenous circadian cycle while controlling scheduled behaviors. We investigate static properties (mean value, standard deviation), dynamical characteristics (long-range correlations), and nonlinear features (magnitude and Fourier-phase correlations) in the fluctuations of forearm acceleration across different circadian phases. We demonstrate that while the static properties exhibit significant circadian rhythms with a broad peak in the afternoon, the dynamical and nonlinear characteristics remain invariant with circadian phase. This finding suggests an intrinsic multi-scale dynamic regulation of forearm motion whose mechanism is not influenced by the circadian pacemaker, and it further suggests that increased cardiac risk in the early morning hours is not related to circadian-mediated influences on motor activity.

  4. Global identification of stochastic dynamical systems under different pseudo-static operating conditions: The functionally pooled ARMAX case

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2017-01-01

    The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Germain, Shawn

    Nuclear Power Plant (NPP) refueling outages create some of the most challenging activities the utilities face in both tracking and coordinating thousands of activities in a short period of time. Other challenges, including nuclear safety concerns arising from atypical system configurations and resource allocation issues, can create delays and schedule overruns, driving up outage costs. Today the majority of the outage communication is done using processes that do not take advantage of advances in modern technologies that enable enhanced communication, collaboration and information sharing. Some of the common practices include: runners that deliver paper-based requests for approval, radios, telephones, desktop computers, daily schedule printouts, and static whiteboards that are used to display information. Many gains have been made to reduce the challenges facing outage coordinators; however, new opportunities can be realized by utilizing modern technological advancements in communication and information tools that can enhance the collective situational awareness of plant personnel, leading to improved decision-making. Ongoing research as part of the Light Water Reactor Sustainability Program (LWRS) has been targeting NPP outage improvement. As part of this research, various applications of collaborative software have been demonstrated through pilot project utility partnerships. Collaboration software can be utilized as part of the larger concept of Computer-Supported Cooperative Work (CSCW). Collaborative software can be used for emergent issue resolution, Outage Control Center (OCC) displays, and schedule monitoring. Use of collaboration software enables outage staff and subject matter experts (SMEs) to view and update critical outage information from any location on site or off.

  6. Employing Earned Value Management in Government Research and Design - Lessons Learned from the Trenches

    NASA Technical Reports Server (NTRS)

    Simon, Tom

    2009-01-01

    To effectively manage a project, the project manager must have a plan, understand the current conditions, and be able to take action to correct the course when challenges arise. Research and design projects face technical, schedule, and budget challenges that make it difficult to utilize project management tools developed for projects based on previously demonstrated technologies. Projects developing new technologies are by their inherent nature trying something new and thus have little to no data to support estimates for schedule and cost, let alone the technical outcome. Projects with a vision for the outcome but little confidence in the exact tasks to accomplish in order to achieve the vision incur cost and schedule penalties when conceptual solutions require unexpected iterations or even a reinvention of the plan. This presentation will share the project management methodology and tools developed through trial and error for a NASA research and design project combining industry, academia, and NASA in-house work, in which Earned Value Management principles were employed but adapted for the reality of the government financial system and the reality of challenging technology development. The priorities of the presented methodology are flexibility, accountability, and simplicity, giving the manager tools to help deliver to the customer while not using up valuable time and resources on extensive planning and analysis. This presentation will share the methodology and tools, and work through failed and successful examples from the three years of process evolution.

  7. A real-time architecture for time-aware agents.

    PubMed

    Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V

    2004-06-01

    This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n2) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
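
    The ART-layer scheduling algorithm itself is not given in this abstract. As a loose, generic illustration of online dynamic-priority scheduling of agent computations, the sketch below applies an earliest-deadline-first policy to hypothetical task tuples; it is not the authors' algorithm.

```python
import heapq

# Generic earliest-deadline-first sketch (not the ART-layer algorithm itself).
# The ready queue is kept ordered by absolute deadline, so priorities are
# effectively recomputed online as tasks are released and preempted.
def edf_schedule(tasks, quantum=1):
    """tasks: list of (release, deadline, exec_time, name). Returns run trace."""
    time, trace = 0, []
    pending = sorted(tasks)                       # ordered by release time
    ready = []                                    # heap keyed by deadline
    while pending or ready:
        while pending and pending[0][0] <= time:
            rel, dl, ex, name = pending.pop(0)
            heapq.heappush(ready, (dl, name, ex))
        if not ready:
            time = pending[0][0]                  # idle until next release
            continue
        dl, name, ex = heapq.heappop(ready)
        run = min(quantum, ex)
        trace.append((time, name))
        time += run
        if ex - run > 0:
            heapq.heappush(ready, (dl, name, ex - run))
    return trace

print(edf_schedule([(0, 10, 3, "agentA"), (1, 5, 2, "agentB")]))
```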

  8. Conceptual design of a lunar base solar power plant: Lunar Base Systems Study, Task 3.3

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The best available concepts for a 100 kW lunar base solar power plant based on static and dynamic conversion concepts have been examined. The two concepts which emerged for direct comparison yielded a difference in delivered mass of 35 MT, the mass equivalent of 1.4 lander payloads, in favor of the static concept. The technologies considered for the various elements are either state-of-the-art or near-term. Two photovoltaic cell concepts should receive high priority for development: amorphous silicon and indium phosphide cells. Amorphous silicon is attractive because it can be made lightweight and rugged; indium phosphide because it shows very high efficiency potential and is reportedly not degraded by radiation. Also, amorphous silicon cells may be mounted on flexible backing that may roll up much like a carpet for compact storage, delivery, and ease of deployment at the base. The fuel cell and electrolysis cell technology is quite well along for lunar base applications, and because both the Shuttle and the forthcoming Space Station incorporate these devices, the status quo will be maintained. Early development of emerging improvements should be implemented so that essential life verification test programs may commence.

  9. Heimdall System for MSSS Sensor Tasking

    NASA Astrophysics Data System (ADS)

    Herz, A.; Jones, B.; Herz, E.; George, D.; Axelrad, P.; Gehly, S.

    In Norse Mythology, Heimdall uses his foreknowledge and keen eyesight to keep watch for disaster from his home near the Rainbow Bridge. Orbit Logic and the Colorado Center for Astrodynamics Research (CCAR) at the University of Colorado (CU) have developed the Heimdall System to schedule observations of known and uncharacterized objects and search for new objects from the Maui Space Surveillance Site. Heimdall addresses the current need for automated and optimized SSA sensor tasking driven by factors associated with improved space object catalog maintenance. Orbit Logic and CU developed an initial baseline prototype SSA sensor tasking capability for select sensors at the Maui Space Surveillance Site (MSSS) using STK and STK Scheduler, and then added a new Track Prioritization Component for FiSST-inspired computations for predicted Information Gain and Probability of Detection, and a new SSA-specific Figure-of-Merit (FOM) for optimized SSA sensor tasking. While the baseline prototype addresses automation and some of the multi-sensor tasking optimization, the SSA-improved prototype addresses all of the key elements required for improved tasking leading to enhanced object catalog maintenance. The Heimdall proof-of-concept was demonstrated for MSSS SSA sensor tasking for a 24 hour period to attempt observations of all operational satellites in the unclassified NORAD catalog, observe a small set of high priority GEO targets every 30 minutes, make a sky survey of the GEO belt region accessible to MSSS sensors, and observe particular GEO regions that have a high probability of finding new objects with any excess sensor time. This Heimdall prototype software paves the way for further R&D that will integrate this technology into the MSSS systems for operational scheduling, improve the software's scalability, and further tune and enhance schedule optimization. The Heimdall software for SSA sensor tasking provides greatly improved performance over manual tasking, improved coordinated sensor usage, and tasking schedules driven by catalog improvement goals (reduced overall covariance, etc.). The improved performance also enables more responsive sensor tasking to address external events, newly detected objects, newly detected object activity, and sensor anomalies. Instead of having to wait until the next day's scheduling phase, events can be addressed with new tasking schedules immediately (within seconds or minutes). Perhaps the most important benefit is improved SSA based on an overall improvement to the quality of the space catalog. By driving sensor tasking and scheduling based on predicted Information Gain and other relevant factors, better decisions are made in the application of available sensor resources, leading to an improved catalog and better information about the objects of most interest. The Heimdall software solution provides a configurable, automated system to improve sensor tasking efficiency and responsiveness for SSA applications. The FISST algorithms for Track Prioritization, SSA specific task and resource attributes, Scheduler algorithms, and configurable SSA-specific Figure-of-Merit together provide optimized and tunable scheduling for the Maui Space Surveillance Site and possibly other sites and organizations across the U.S. military and for allies around the world.
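
    The FISST-inspired track-prioritization computations are not detailed in the abstract. Purely as an illustration of ranking candidate observations by a figure of merit that combines predicted information gain with probability of detection, the sketch below uses a scalar Gaussian update; the scoring form, variances, and weights are assumptions, not the Heimdall formulation.

```python
import math

# Illustrative figure-of-merit for ranking candidate observations.
# The combination of information gain and probability of detection shown here
# is an assumption for illustration, not the Heimdall/FiSST formulation.
def observation_fom(prior_var, meas_var, p_detect, priority_weight=1.0):
    # Expected information gain approximated as the entropy reduction of a
    # scalar Gaussian state updated with one measurement of variance meas_var.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    info_gain = 0.5 * math.log(prior_var / post_var)
    return priority_weight * p_detect * info_gain

candidates = {
    "GEO-123": observation_fom(prior_var=25.0, meas_var=1.0, p_detect=0.9),
    "LEO-456": observation_fom(prior_var=4.0,  meas_var=1.0, p_detect=0.6),
}
print(max(candidates, key=candidates.get))  # highest-value target first
```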

  10. Sequestration of priority pollutant PAHs from sediment pore water employing semipermeable membrane devices.

    PubMed

    Williamson, Kelly S; Petty, Jimmie D; Huckins, James N; Lebo, Jon A; Kaiser, Edwin M

    2002-11-01

    Semipermeable membrane devices (SPMDs) were employed to sample sediment pore water in static exposure studies under controlled laboratory conditions using (control pond and formulated) sediments fortified with 15 priority pollutant polycyclic aromatic hydrocarbons (PPPAHs). The sediment fortification level of 750 ng/g was selected on the basis of what might be detected in a sediment sample from a contaminated area. The sampling interval consisted of 0, 4, 7, 14, and 28 days for each study. The analytical methodologies, as well as the extraction and sample cleanup procedures used in the isolation, characterization, and quantitation of 15 PPPAHs at different fortification levels in SPMDs, water, and sediment were reported previously (Williamson, M.S. Thesis, University of Missouri-Columbia, USA; Williamson et al., Chemosphere (This issue--PII: S0045-6535(02)00394-6)) and used for this project. Average (mean) extraction recoveries for each PPPAH congener in each matrix are reported and discussed. No procedural blank extracts (controls) were found to contain any PPPAH residues above the method quantitation limit; therefore, no matrix interferences were detected. The focus of this publication is to demonstrate the ability to sequester environmental contaminants, specifically PPPAHs, from sediment pore water using SPMDs and two different types of fortified sediment.

  11. Sequestration of priority pollutant PAHs from sediment pore water employing semipermeable membrane devices

    USGS Publications Warehouse

    Williamson, K.S.; Petty, J.D.; Huckins, J.N.; Lebo, J.A.; Kaiser, E.M.

    2002-01-01

    Semipermeable membrane devices (SPMDs) were employed to sample sediment pore water in static exposure studies under controlled laboratory conditions using (control pond and formulated) sediments fortified with 15 priority pollutant polycyclic aromatic hydrocarbons (PPPAHs). The sediment fortification level of 750 ng/g was selected on the basis of what might be detected in a sediment sample from a contaminated area. The sampling interval consisted of 0, 4, 7, 14, and 28 days for each study. The analytical methodologies, as well as the extraction and sample cleanup procedures used in the isolation, characterization, and quantitation of 15 PPPAHs at different fortification levels in SPMDs, water, and sediment were reported previously (Williamson, M.S. Thesis, University of Missouri - Columbia, USA; Williamson et al., Chemosphere (This issue - PII: S0045-6535(02)00394-6)) and used for this project. Average (mean) extraction recoveries for each PPPAH congener in each matrix are reported and discussed. No procedural blank extracts (controls) were found to contain any PPPAH residues above the method quantitation limit; therefore, no matrix interferences were detected. The focus of this publication is to demonstrate the ability to sequester environmental contaminants, specifically PPPAHs, from sediment pore water using SPMDs and two different types of fortified sediment.

  12. Preparing for Operational Use of High Priority Products from the Joint Polar Satellite System (JPSS) in Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Nandi, S.; Layns, A. L.; Goldberg, M.; Gambacorta, A.; Ling, Y.; Collard, A.; Grumbine, R. W.; Sapper, J.; Ignatov, A.; Yoe, J. G.

    2017-12-01

    This work describes end-to-end operational implementation of high-priority products from the National Oceanic and Atmospheric Administration's (NOAA) operational polar-orbiting satellite constellation, including the Suomi National Polar-orbiting Partnership (S-NPP) and the Joint Polar Satellite System series initial satellite (JPSS-1), into numerical weather prediction and earth systems models. Development and evaluation needed for the initial implementations of VIIRS Environmental Data Records (EDR) for Sea Surface Temperature ingestion in the Real-Time Global Sea Surface Temperature Analysis (RTG) and Polar Winds assimilated in the National Weather Service (NWS) Global Forecast System (GFS) are presented. These implementations ensure continuity of data in these models in the event of loss of legacy sensor data. Also discussed is accelerated operational implementation of Advanced Technology Microwave Sounder (ATMS) Temperature Data Records (TDR) and Cross-track Infrared Sounder (CrIS) Sensor Data Records, identified as Key Performance Parameters (KPPs) by the National Weather Service. Operational use of S-NPP after its 28 October 2011 launch took more than one year due to the learning curve and development needed for full exploitation of new remote sensing capabilities. Today, ATMS and CrIS data positively impact weather forecast accuracy. For NOAA's JPSS initial satellite (JPSS-1), scheduled for launch in late 2017, we identify scope and timelines for pre-launch and post-launch activities needed to efficiently transition these capabilities into operations. As part of these alignment efforts, operational readiness for KPPs will be possible as soon as 90 days after launch. The schedule acceleration is possible because of the experience with S-NPP. NOAA's operational polar-orbiting satellite constellation provides continuity and enhancement of earth systems observations out to 2036. Program best practices and lessons learned will inform future implementation for follow-on JPSS-3 and -4 missions, ensuring benefits and enhancements during the system's design life.

  13. Automated control of hierarchical systems using value-driven methods

    NASA Technical Reports Server (NTRS)

    Pugh, George E.; Burke, Thomas E.

    1990-01-01

    An introduction is given to the value-driven methodology, which has been successfully applied to solve a variety of difficult decision, control, and optimization problems. Many real-world decision processes (e.g., those encountered in scheduling, allocation, and command and control) involve a hierarchy of complex planning considerations. For such problems it is virtually impossible to define a fixed set of rules that will operate satisfactorily over the full range of probable contingencies. Decision Science Applications' value-driven methodology offers a systematic way of automating the intuitive, common-sense approach used by human planners. The inherent responsiveness of value-driven systems to user-controlled priorities makes them particularly suitable for semi-automated applications in which the user must remain in command of the system's operation. Three examples of the practical application of the approach in the automation of hierarchical decision processes are discussed: the TAC Brawler air-to-air combat simulation is a four-level computerized hierarchy; the autonomous underwater vehicle mission planning system is a three-level control system; and the Space Station Freedom electrical power control and scheduling system is designed as a two-level hierarchy. The methodology is compared with rule-based systems and with other more widely known optimization techniques.

  14. Immunization coverage in India for areas served by the Integrated Child Development Services programme. The Integrated Child Development Services Consultants.

    PubMed

    Tandon, B N; Gandhi, N

    1992-01-01

    The Integrated Child Development Services (ICDS) programme was launched by the Indian government in October 1975 to provide a package of health, nutrition and informal educational services to mothers and children. In 1988 we studied the impact of ICDS on the immunization coverage of children aged 12-24 months and of mothers of infants in 19 rural, 8 tribal, and 9 urban ICDS projects that had been operational for more than 5 years. Complete coverage with BCG, diphtheria-pertussis-tetanus (DPT) and poliomyelitis vaccines was recorded for 65%, 63%, and 64% of children, respectively, in the ICDS population. By comparison, the coverage in the non-ICDS group was only 22% for BCG, 28% for DPT, and 27% for poliomyelitis. Complete immunization with tetanus toxoid was recorded for 68% of the mothers in the ICDS group and for 40% in the non-ICDS group. Coverage was greater in the urban and lower in the tribal projects. Scheduled castes, scheduled tribes, backward communities, and minorities (groups that have a high priority for social services) had immunization coverages in ICDS projects that were similar to those of higher castes.

  15. Gender-related factors in the recruitment of physicians to the rural Northwest.

    PubMed

    Ellsbury, Kathleen E; Baldwin, Laura-Mae; Johnson, Karin E; Runyan, Susan J; Hart, L Gary

    2002-01-01

    This study examines differences in the factors female and male physicians considered influential in their rural practice location choice and describes the practice arrangements that successfully recruited female physicians to rural areas. This cross-sectional study was based on a mailed survey of physicians successfully recruited between 1992 and 1999 to towns of 10,000 or less in six states in the Pacific Northwest. Responses from 77 men and 37 women (response rate 61%) indicated that women were more likely than men to have been influenced in making their practice choice by issues related to spouse or personal partner, flexible scheduling, family leave, availability of childcare, and the interpersonal aspects of recruitment. Commonly reported themes reflected the respondents' desire for flexibility regarding family issues and the value they placed on honesty during recruitment. It is very important in recruitment of both men and women to highlight the positive aspects of the community and to involve and assist the physician's spouse or partner. If they want to achieve a gender-balanced physician workforce, rural communities and practices recruiting physicians should place high priority on practice scheduling, spouse-partner, and interpersonal issues in the recruitment process.

  16. Challenges in modeling the X-29 flight test performance

    NASA Technical Reports Server (NTRS)

    Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen

    1987-01-01

    Presented are methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. However, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.

  17. Challenges in modeling the X-29A flight test performance

    NASA Technical Reports Server (NTRS)

    Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen

    1987-01-01

    The paper presents the methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. Despite these obstacles, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete the performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.

  18. Longitudinal flight control of a civil aircraft with handling qualities satisfaction (Contrôle du vol longitudinal d'un avion civil avec satisfaction de qualités de manoeuvrabilité)

    NASA Astrophysics Data System (ADS)

    Saussie, David Alexandre

    2010-03-01

    Fulfilling handling qualities still remains a challenging problem during flight control design. These criteria, of differing nature, are derived from a wide body of experience based upon flight tests and data analysis, and they have to be considered if one expects a good behaviour of the aircraft. The goal of this thesis is to develop synthesis methods able to satisfy these criteria with fixed classical architectures imposed by the manufacturer or with a new flight control architecture. This is applied to the longitudinal flight model of a Bombardier Inc. business jet aircraft, namely the Challenger 604. A first step of our work consists of compiling the most commonly used handling qualities in order to compare them. Special attention is devoted to the dropback criterion, for which theoretical analysis leads us to establish a practical formulation for synthesis purposes. Moreover, the comparison of the criteria through a reference model highlighted dominant criteria that, once satisfied, ensure that the others are satisfied too. Consequently, we are able to consider the fulfillment of these criteria in the fixed control architecture framework. Guardian maps (Saydy et al., 1990) are then considered to handle the problem. Originally developed for robustness analysis, they are integrated into various algorithms for controller synthesis. Incidentally, this fixed-architecture problem is similar to the static output feedback stabilization problem and reduced-order controller synthesis. Algorithms performing stabilization and pole assignment in a specific region of the complex plane are then proposed. Afterwards, they are extended to handle the gain-scheduling problem. The controller is then scheduled through the entire flight envelope with respect to scheduling parameters. Thereafter, the fixed architecture is set aside, conserving only the same output signals. The main idea is to use H-infinity synthesis to obtain an initial controller that satisfies the handling qualities through reference-model matching and is robust to mass and center-of-gravity variations. Using robust modal control (Magni, 2002), we are able to reduce the controller order substantially and to structure it so as to come close to a classical architecture. An auto-scheduling method finally allows us to schedule the controller with respect to the scheduling parameters. Two different paths are used to solve the same problem; each one exhibits its own advantages and disadvantages.
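
    As a generic illustration of gain scheduling (not the controller synthesized in the thesis), the following sketch linearly interpolates two feedback gains between design points of a single scheduling parameter; the parameter, design points, and gain values are hypothetical.

```python
import bisect

# Minimal gain-scheduling sketch: linearly interpolate controller gains
# between design points of a scheduling parameter (hypothetical values).
design_points = [150.0, 250.0, 350.0]               # e.g., airspeed in knots
gains = [(-0.8, 0.10), (-0.5, 0.06), (-0.3, 0.04)]  # (Kq, Knz) per design point

def scheduled_gains(v):
    v = min(max(v, design_points[0]), design_points[-1])   # clamp to envelope
    i = max(1, bisect.bisect_left(design_points, v))
    i = min(i, len(design_points) - 1)
    v0, v1 = design_points[i - 1], design_points[i]
    t = (v - v0) / (v1 - v0)
    return tuple(g0 + t * (g1 - g0) for g0, g1 in zip(gains[i - 1], gains[i]))

print(scheduled_gains(200.0))   # gains halfway between the first two points
```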

  19. Using container orchestration to improve service management at the RAL Tier-1

    NASA Astrophysics Data System (ADS)

    Lahiff, Andrew; Collier, Ian

    2017-10-01

    In recent years container orchestration has been emerging as a means of gaining many potential benefits compared to a traditional static infrastructure, such as increased utilisation through multi-tenancy, improved availability due to self-healing, and the ability to handle changing loads due to elasticity and auto-scaling. To this end we have been investigating migrating services at the RAL Tier-1 to an Apache Mesos cluster. In this model the concept of individual machines is abstracted away and services are run in containers on a cluster of machines, managed by schedulers, enabling a high degree of automation. Here we describe Mesos, the infrastructure deployed at RAL, and describe in detail the explicit example of running a batch farm on Mesos.

  20. Strategic Defense Initiative Organization adaptive structures program overview

    NASA Astrophysics Data System (ADS)

    Obal, Michael; Sater, Janet M.

    In the currently envisioned architecture none of the Strategic Defense System (SDS) elements to be deployed will receive scheduled maintenance. Assessments of performance capability due to changes caused by the uncertain effects of environments will be difficult, at best. In addition, the system will have limited ability to adjust in order to maintain its required performance levels. The Materials and Structures Office of the Strategic Defense Initiative Organization (SDIO) has begun to address solutions to these potential difficulties via an adaptive structures technology program that combines health and environment monitoring with static and dynamic structural control. Conceivable system benefits include improved target tracking and hit-to-kill performance, on-orbit system health monitoring and reporting, and threat attack warning and assessment.

  1. Estimation of effective wind speed

    NASA Astrophysics Data System (ADS)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g., by gain scheduling and feedforward. Unfortunately, no accurate measurement of the effective wind speed is available online from direct measurements, which means that it must be estimated in order to make such control methods applicable in practice. In this paper, a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
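
    The observer and aerodynamic tables of the paper are not reproduced here. As an illustrative sketch of the final inversion step only, the code below assumes a made-up torque-coefficient surface Cq(lambda, beta) and solves the static relation Q = 0.5*rho*pi*R^3*Cq(lambda, beta)*v^2 for the effective wind speed v by bisection, given an estimated torque, rotor speed, and measured pitch angle.

```python
import math

# Illustrative inversion of a static aerodynamic model: given estimated
# aerodynamic torque Q, rotor speed w, and measured pitch beta, solve
# Q = 0.5*rho*pi*R^3*Cq(lambda, beta)*v^2 for the effective wind speed v.
# The Cq surface below is a made-up placeholder, not a real turbine model.
RHO, R = 1.225, 40.0

def cq(tip_speed_ratio, beta):
    # hypothetical smooth torque-coefficient surface
    return max(0.0, 0.05 * math.exp(-((tip_speed_ratio - 8.0) / 4.0) ** 2)
               * math.cos(math.radians(beta)))

def torque(v, w, beta):
    lam = w * R / v
    return 0.5 * RHO * math.pi * R ** 3 * cq(lam, beta) * v ** 2

def effective_wind_speed(q_est, w, beta, lo=2.0, hi=30.0, iters=60):
    for _ in range(iters):                 # bisection on the monotone branch
        mid = 0.5 * (lo + hi)
        if torque(mid, w, beta) < q_est:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w, beta = 1.6, 2.0                         # rotor speed (rad/s), pitch (deg)
print(effective_wind_speed(torque(10.0, w, beta), w, beta))   # approx. 10 m/s
```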

  2. NPSS Multidisciplinary Integration and Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Rasche, Joseph; Simons, Todd A.; Hoyniak, Daniel

    2006-01-01

    The objective of this task was to enhance the capability of the Numerical Propulsion System Simulation (NPSS) by expanding its reach into the high-fidelity multidisciplinary analysis area. This task investigated numerical techniques to convert from cold static to hot running geometry of compressor blades. Blade deformations were calculated iteratively, coupling high-fidelity flow simulations with high-fidelity structural analysis of the compressor blade. The flow simulations were performed with the Advanced Ducted Propfan Analysis (ADPAC) code, while structural analyses were performed with the ANSYS code. High-fidelity analyses were used to evaluate the effects on performance of variations in tip clearance, uncertainty in manufacturing tolerances, variable inlet guide vane scheduling, and the effect of rotational speed on the hot running geometry of the compressor blades.

  3. Assessing Exposures to Magnetic Resonance Imaging's Complex Mixture of Magnetic Fields for In Vivo, In Vitro, and Epidemiologic Studies of Health Effects for Staff and Patients.

    PubMed

    Frankel, Jennifer; Wilén, Jonna; Hansson Mild, Kjell

    2018-01-01

    A complex mixture of electromagnetic fields is used in magnetic resonance imaging (MRI): static, low-frequency, and radio-frequency magnetic fields. Commonly, the static magnetic field ranges from one to three tesla. The low-frequency field can reach several millitesla, with a time derivative on the order of a few tesla per second. The radiofrequency (RF) field has a magnitude in the microtesla range, giving rise to specific absorption rate values of a few watts per kilogram. Very little attention has been paid to the case where there is a combined exposure to several different fields at the same time. Some studies have shown genotoxic effects in cells after exposure to an MRI scan, while others have not demonstrated any effects. A typical MRI exam includes multiple imaging sequences of varying length and intensity to produce different types of images. Each sequence is designed with a particular purpose in mind, so one sequence can, for example, be optimized for clearly showing fat-water contrast, while another is optimized for high-resolution detail. It is of the utmost importance that future experimental studies give a thorough description of the exposure they are using, and not just a statement such as "An ordinary MRI sequence was used." Even if the sequence is specified, it can differ substantially between manufacturers in, e.g., RF pulse height, width, and duty cycle. In the latest SCENIHR opinion, it is stated that there is very little information regarding the health effects of occupational exposure to MRI fields, and long-term prospective or retrospective cohort studies on workers are recommended as a high priority. They also state that MRI is increasingly used in pediatric diagnostic imaging, and a cohort study into the effects of MRI exposure on children is recommended as a high priority. For the exposure assessment in epidemiological studies, there is a clear difference between patients and staff, and further work is needed on this. Studies that explore the possible differences between MRI scan sequences and compare them in terms of exposure level are warranted.

  4. Estimating the Effects of Astronaut Career Ionizing Radiation Dose Limits on Manned Interplanetary Flight Programs

    NASA Technical Reports Server (NTRS)

    Koontz, Steven L.; Rojdev, Kristina; Valle, Gerard D.; Zipay, John J.; Atwell, William S.

    2013-01-01

    Space radiation effects mitigation has been identified as one of the highest priority technology development areas for human space flight in the NASA Strategic Space Technology Investment Plan (Dec. 2012). In this paper we review the special features of space radiation that lead to severe constraints on long-term (more than 180 days) human flight operations outside Earth's magnetosphere. We then quantify the impacts of human space radiation dose limits on spacecraft engineering design and development, flight program architecture, as well as flight program schedule and cost. A new Deep Space Habitat (DSH) concept, the hybrid inflatable habitat, is presented and shown to enable a flexible, affordable approach to long term manned interplanetary flight today.

  5. Healthcare Strategic Planning as Part of National and Regional Development in the Israeli Galilee: A Case Study of the Planning Process.

    PubMed

    Peled, Ronit; Schenirer, Jerry

    2009-10-01

    This article describes a systematic process of geographic and strategic planning for healthcare services as a part of a regional development plan in the Israeli Galilee. The planning process consisted of three stages: (a) assessment of needs, demand and existing resources; (b) prioritisation of initiatives; and (c) scheduling of theoretical priorities. For many years the region has suffered from inequities and inequalities regarding the availability and accessibility of a regional healthcare system, resulting in high mortality and morbidity rates and low quality of life. The aim of the healthcare strategic plan was to suggest initiatives and actions to be taken in order to improve healthcare provision and the health and wellbeing of local residents.

  6. Critical issues in assuring long lifetime and fail-safe operation of optical communications network

    NASA Astrophysics Data System (ADS)

    Paul, Dilip K.

    1993-09-01

    Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force to set the goals as well as priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in space environment is presented.
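
    As a simple numeric illustration of the availability gain from hardware duplication or diversity routing (the single-path availability below is hypothetical):

```python
# Hypothetical illustration: availability gain from duplicating a link.
# With two independent parallel paths, the system is down only when both are.
a_single = 0.995                       # availability of one path (assumed)
a_parallel = 1 - (1 - a_single) ** 2   # 0.999975
print(f"single path: {a_single:.6f}, duplicated: {a_parallel:.6f}")
```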

  7. Project Artemis

    NASA Technical Reports Server (NTRS)

    Birchenough, Shawn; Kato, Denise; Kennedy, Fred; Akin, David

    1990-01-01

    The goals of Project Artemis are designed to meet the challenge of President Bush to return to the Moon, this time to stay. The first goal of the project is to establish a permanent manned base on the Moon for the purposes of scientific research and technological development. The knowledge gained from the establishment and operations of the lunar base will then be used to achieve the second goal of Project Artemis, the establishment of a manned base on the Martian surface. Throughout both phases of the program, crew safety will be the number one priority. There are four main issues that have governed the entire program: crew safety and mission success, commonality, growth potential, and costing and scheduling. These issues are discussed in more detail.

  8. Pan Am gets big savings at no cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanz, D.

    Pan American World Airways' contract with an energy management control systems distributor enabled the company's terminal and maintenance facilities at JFK airport in New York to shift from housekeeping to major savings without additional cost. Energy savings from a pneumatic control system were split almost equally between Pan Am and Thomas S. Brown Associates (TSBA) Inc., and further savings are expected from a planned computer-controlled system. A full-time energy manager, able to give top priority to energy-consumption problems, was considered crucial to the program's success. Early efforts in light-level reduction and equipment scheduling required extensive persuasion and policing, but successful energy savings allowed the manager to progress to the more-extensive plants with TSBA.

  9. Web Time-Management Tool

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Oak Grove Reactor, developed by Oak Grove Systems, is a new software program that allows users to integrate workflow processes. It can be used with portable communication devices. The software can join e-mail, calendar/scheduling and legacy applications into one interactive system via the web. Priority tasks and due dates are organized and highlighted to keep the user up to date with developments. Reactor works with existing software and few new skills are needed to use it. Using a web browser, a user can work on something while other users work on the same procedure or view its status while it is being worked on at another site. The software was developed by the Jet Propulsion Lab and originally put to use at Johnson Space Center.

  10. Gore proposes doubling U.S. effort for global programs.

    PubMed

    1999-08-06

    To confront the widespread HIV crisis, Vice President Al Gore has proposed a plan for doubling the current U.S. allocations for global programs. Seventy percent of the projected $100 million is earmarked for sub-Saharan Africa, with smaller portions going to Asia and former Soviet republics. The global campaign will address issues including containment, prevention and education efforts, and medical and psychological treatment programs. Several meetings are being scheduled with political leaders, industry, AIDS activists, and foreign leaders to address worldwide problems related to HIV. Most advocacy groups are praising the initiative, stating the U.S. is now recognizing HIV as a foreign policy priority. However, other groups are critical, and state that more funds are needed to effectively address this issue.

  11. Objectively Optimized Observation Direction System Providing Situational Awareness for a Sensor Web

    NASA Astrophysics Data System (ADS)

    Aulov, O.; Lary, D. J.

    2010-12-01

    There is great utility in having a flexible and automated objective observation direction system for the decadal survey missions and beyond. Such a system allows us to optimize the observations made by a suite of sensors to address specific goals, from long-term monitoring to rapid response. We have developed such a prototype using a network of communicating software elements to control a heterogeneous network of sensor systems, which can have multiple modes and flexible viewing geometries. Our system makes sensor systems intelligent and situationally aware. Together they form a sensor web of multiple sensors working together and capable of automated target selection, i.e., the sensors “know” where they are, what they are able to observe, and which targets they should observe and with what priorities. This system is implemented in three components. The first component is a Sensor Web simulator. The Sensor Web simulator describes the capabilities and locations of each sensor as a function of time, whether they are orbital, sub-orbital, or ground based. The simulator has been implemented using AGI's Satellite Tool Kit (STK). STK makes it easy to analyze and visualize optimal solutions for complex space scenarios, to perform analysis of land, sea, air, and space assets, and to share results in one integrated solution. The second component is a target scheduler, implemented with STK Scheduler. STK Scheduler is powered by a scheduling engine that finds better solutions in a shorter amount of time than traditional heuristic algorithms. The global search algorithm within this engine is based on neural network technology that is capable of finding solutions to larger and more complex problems and maximizing the value of limited resources. The third component is a modeling and data assimilation system. It provides situational awareness by supplying the time evolution of uncertainty and information content metrics that are used to tell us what we need to observe and the priority we should give to the observations. A prototype of this component was implemented with AutoChem. AutoChem is NASA release software constituting an automatic code generation, symbolic differentiation, analysis, documentation, and web site creation tool for atmospheric chemical modeling and data assimilation. Its model is explicit and uses an adaptive time-step, error-monitoring time integration scheme for stiff systems of equations. AutoChem was the first model ever to have the facility to perform 4D-Var data assimilation and Kalman filtering. The project developed a control system with three main accomplishments. First, fully multivariate observational and theoretical information with associated uncertainties was combined using a full Kalman filter data assimilation system. Second, an optimal distribution of the computations and of data queries was achieved by utilizing high performance computers/load balancing and a set of automatically mirrored databases. Third, inter-instrument bias correction was performed using machine learning. The PI for this project was Dr. David Lary of the UMBC Joint Center for Earth Systems Technology at NASA/Goddard Space Flight Center.
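
    The assimilation system described above is far richer than a single update step, but a minimal scalar Kalman filter update conveys how model and observational information are weighted by their uncertainties; the numbers below are hypothetical.

```python
# Minimal scalar Kalman filter update: combine a model forecast and an
# observation, each with its own variance (hypothetical values).
def kalman_update(x_forecast, p_forecast, y_obs, r_obs):
    k = p_forecast / (p_forecast + r_obs)        # Kalman gain
    x_analysis = x_forecast + k * (y_obs - x_forecast)
    p_analysis = (1.0 - k) * p_forecast
    return x_analysis, p_analysis

print(kalman_update(x_forecast=300.0, p_forecast=4.0, y_obs=305.0, r_obs=1.0))
# analysis is pulled toward the more certain observation: (304.0, 0.8)
```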

  12. An ontology-based nurse call management system (oNCS) with probabilistic priority assessment

    PubMed Central

    2011-01-01

    Background: The current, place-oriented nurse call systems are very static. A patient can only make calls with a button which is fixed to a wall of a room. Moreover, the system does not take into account various factors specific to a situation. In the future, there will be an evolution to a mobile button for each patient so that they can walk around freely and still make calls. The system would become person-oriented and the available context information should be taken into account to assign the correct nurse to a call. The aim of this research is (1) the design of a software platform that supports the transition to mobile and wireless nurse call buttons in hospitals and residential care and (2) the design of a sophisticated nurse call algorithm. This algorithm dynamically adapts to the situation at hand by taking the profile information of staff members and patients into account. Additionally, the priority of a call probabilistically depends on the risk factors assigned to a patient. Methods: The ontology-based Nurse Call System (oNCS) was developed as an extension of a Context-Aware Service Platform. An ontology is used to manage the profile information. Rules implement the novel nurse call algorithm that takes all this information into account. Probabilistic reasoning algorithms are designed to determine the priority of a call based on the risk factors of the patient. Results: The oNCS system is evaluated through a prototype implementation and simulations, based on a detailed dataset obtained from Ghent University Hospital. The arrival times of nurses at the location of a call, the workload distribution of calls amongst nurses, and the assignment of priorities to calls are compared for the oNCS system and the current, place-oriented nurse call system. Additionally, the performance of the system is discussed. Conclusions: The execution time of the nurse call algorithm is on average 50.333 ms. Moreover, the oNCS system significantly improves the assignment of nurses to calls. Calls generally have a nurse present faster and the workload distribution amongst the nurses improves. PMID:21294860
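
    The oNCS ontology and its reasoning rules are not reproduced in this abstract. Purely to illustrate the flavor of probabilistic priority assessment driven by patient risk factors, the sketch below maps a set of made-up risk factors and weights to a distribution over call priorities; it is not the oNCS algorithm.

```python
# Illustrative probabilistic priority assessment (made-up risk factors and
# weights; not the oNCS ontology or its reasoning rules).
RISK_WEIGHTS = {"cardiac": 0.5, "fall_risk": 0.3, "post_op": 0.2}

def call_priority_distribution(risk_factors):
    """Map a patient's risk factors to P(priority) over three classes."""
    score = sum(RISK_WEIGHTS.get(f, 0.0) for f in risk_factors)
    p_urgent = min(0.9, 0.2 + score)            # cap to keep a distribution
    p_normal = min(1.0 - p_urgent, 0.6)
    p_low = 1.0 - p_urgent - p_normal
    return {"urgent": p_urgent, "normal": p_normal, "low": p_low}

print(call_priority_distribution({"cardiac", "fall_risk"}))
```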

  13. Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulett, David T.; Nosbisch, Michael R.

    This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way; scatter diagrams of time-cost pairs for developing joint targets of time and cost; and probabilistic cash flow, which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides both (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) an illustration of the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on:
    - A high-quality CPM schedule with logic tight enough so that it will provide the correct dates and critical paths during simulation automatically without manual intervention.
    - A contingency-free estimate of project costs that is loaded on the activities of the schedule.
    - Resolution of inconsistencies between the cost estimate and the schedule that often creep into those documents as project execution proceeds.
    - Good-quality risk data that are usually collected in risk interviews of the project team, management, and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk.
    - A Monte Carlo simulation software program that can simulate schedule risk, burn rate risk, and time-independent resource risk.
    The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates, including contingency reserves, are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives.
    Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed. Of course, project managers have the option of re-planning and re-scheduling in the face of new facts, in part by mitigating risk. This analysis identifies the high-priority risks to cost and to schedule, which assist the project manager in planning further risk mitigation. Some project managers reject the results and argue that they cannot possibly be so late or so overrun. Those project managers may be wasting an opportunity to mitigate risk and get a more favorable outcome. (authors)
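
    A toy version of the integrated cost-schedule simulation helps make the mechanism concrete: when activities are staffed by time-dependent (labor-type) resources, sampled duration risk flows directly into cost risk, and a P-80 value can be read from the resulting distributions. The two-activity network, triangular distributions, and burn rates below are illustrative only, not an example from the RP.

```python
import random

# Toy integrated cost-schedule Monte Carlo (illustrative numbers only).
# Two serial activities; each is staffed by a time-dependent (labor) resource,
# so cost = duration * burn rate, and schedule risk drives cost risk.
random.seed(1)
N = 10_000
ACTIVITIES = [  # (min, most likely, max) duration in days, burn rate in $/day
    ((20, 30, 50), 8_000),
    ((40, 60, 100), 12_000),
]

totals_time, totals_cost = [], []
for _ in range(N):
    t = c = 0.0
    for (lo, ml, hi), rate in ACTIVITIES:
        d = random.triangular(lo, hi, ml)   # sample an uncertain duration
        t += d
        c += d * rate
    totals_time.append(t)
    totals_cost.append(c)

def percentile(xs, p):
    xs = sorted(xs)
    return xs[int(p * (len(xs) - 1))]

print("P-80 duration (days):", round(percentile(totals_time, 0.80), 1))
print("P-80 cost ($):", round(percentile(totals_cost, 0.80)))
```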

  14. Survey of the Patterns of Using Stereotactic Ablative Radiotherapy for Early-Stage Non-small Cell Lung Cancer in Korea.

    PubMed

    Song, Sanghyuk; Chang, Ji Hyun; Kim, Hak Jae; Kim, Yeon Sil; Kim, Jin Hee; Ahn, Yong Chan; Kim, Jae-Sung; Song, Si Yeol; Moon, Sung Ho; Cho, Moon June; Youn, Seon Min

    2017-07-01

    Stereotactic ablative radiotherapy (SABR) is an effective emerging technique for early-stage non-small cell lung cancer (NSCLC). We investigated the current practice of SABR for early-stage NSCLC in Korea. We conducted a nationwide survey of SABR for NSCLC by sending e-mails to all board-certified members of the Korean Society for Radiation Oncology. The survey included 23 questions focusing on the technical aspects of SABR and 18 questions seeking the participants' opinions on specific clinical scenarios in the use of SABR for early-stage NSCLC. Overall, 79 radiation oncologists at 61/85 specialist hospitals in Korea (71.8%) responded to the survey. SABR was used at 33 institutions (54%) to treat NSCLC. Regarding technical aspects, the most common planning methods were the rotational intensity-modulated technique (59%) and the static intensity-modulated technique (49%). Respiratory motion was managed by gating (54%) or abdominal compression (51%), and 86% of the planning scans were obtained using 4-dimensional computed tomography. In the clinical scenarios, the most commonly chosen fractionation schedule for peripherally located T1 NSCLC was 60 Gy in four fractions. For centrally located tumors and T2 NSCLC, the oncologists tended to avoid SABR for radiotherapy, and extended the fractionation schedule. The results of our survey indicated that SABR is increasingly being used to treat NSCLC in Korea. However, there were wide variations in the technical protocols and fractionation schedules of SABR for early-stage NSCLC among institutions. Standardization of SABR is necessary before implementing nationwide, multicenter, randomized studies.

  15. Failure to produce taste-aversion learning in rats exposed to static electric fields and air ions.

    PubMed

    Creim, J A; Lovely, R H; Weigel, R J; Forsythe, W C; Anderson, L E

    1995-01-01

    Taste-aversion (TA) learning was measured to determine whether exposure to high-voltage direct current (HVdc) static electric fields can produce TA learning in male Long Evans rats. Fifty-six rats were randomly distributed into four groups of 14 rats each. All rats were placed on a 20 min/day drinking schedule for 12 consecutive days prior to receiving five conditioning trials. During the conditioning trials, access to 0.1% sodium saccharin-flavored water was given for 20 min, followed 30 min later by one of four treatments. Two groups of 14 rats each were individually exposed to static electric fields and air ions, one group to +75 kV/m (+2 x 10(5) air ions/cm3) and the other group to -75 kV/m (-2 x 10(5) air ions/cm3). Two other groups of 14 rats each served as sham-exposed controls, with the following variation in one of the sham-exposed groups: This group was subdivided into two subsets of seven rats each, so that a positive control group could be included to validate the experimental design. The positive control group (n = 7) was injected with cyclophosphamide 25 mg/kg, i.p., 30 min after access to saccharin-flavored water on conditioning days, whereas the other subset of seven rats was similarly injected with an equivalent volume of saline. Access to saccharin-flavored water on conditioning days was followed by the treatments described above and was alternated daily with water "recovery" sessions in which the rats received access to water for 20 min in the home cage without further treatment. Following the last water-recovery session, a 20 min, two-bottle preference test (between water and saccharin-flavored water) was administered to each group. The positive control group did show TA learning, thus validating the experimental protocol. No saccharin-flavored water was consumed in the two-bottle preference test by the cyclophosphamide-injected, sham-exposed group compared to 74% consumed by the saline-injected sham-exposed controls (P < .0001). Saccharin-preference data for the static field-exposed groups showed no TA learning compared to data for sham-exposed controls. In summary, exposure to intense static electric fields and air ions did not produce TA learning as assessed by this particular design.

  16. Steps Toward Optimal Competitive Scheduling

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Crawford, James; Khatib, Lina; Brafman, Ronen

    2006-01-01

    This paper is concerned with the problem of allocating a unit capacity resource to multiple users within a pre-defined time period. The resource is indivisible, so that at most one user can use it at each time instant. However, different users may use it at different times. The users have independent, selfish preferences for when and for how long they are allocated this resource. Thus, they value different resource access durations differently, and they value different time slots differently. We seek an optimal allocation schedule for this resource. This problem arises in many institutional settings where, e.g., different departments, agencies, or personnel compete for a single resource. We are particularly motivated by the problem of scheduling NASA's Deep Space Satellite Network (DSN) among different users within NASA. Access to DSN is needed for transmitting data from various space missions to Earth. Each mission has different needs for DSN time, depending on satellite and planetary orbits. Typically, the DSN is over-subscribed, in that not all missions will be allocated as much time as they want. This leads to various inefficiencies - missions spend much time and resources lobbying for their time, often exaggerating their needs. NASA, on the other hand, would like to make optimal use of this resource, ensuring that the good for NASA is maximized. This raises the thorny problem of how to measure the utility to NASA of each allocation. In the typical case, it is difficult for the central agency, NASA in our case, to assess the value of each interval to each user - this is really only known to the users who understand their needs. Thus, our problem is more precisely formulated as follows: find an allocation schedule for the resource that maximizes the sum of users' preferences, when the preference values are private information of the users. We bypass this problem by making the assumption that one can assign money to customers. This assumption is reasonable; a committee is usually in charge of deciding the priority of each mission competing for access to the DSN within a time period while scheduling. Instead, we can assume that the committee assigns a budget to each mission.
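
    The mechanism analyzed in the paper is not reproduced here. As a toy illustration of budget-mediated competitive scheduling, the sketch below simply awards each contested time slot to the mission offering the highest bid it can still afford out of a committee-assigned budget; the missions, slots, bids, and budgets are made up.

```python
# Toy budget-mediated allocation of contested time slots (illustrative only;
# not the mechanism analyzed in the paper). Each mission bids per slot out of
# a committee-assigned budget; each slot goes to the highest affordable bid.
budgets = {"missionA": 100.0, "missionB": 60.0}
bids = {                     # bids[mission][slot]
    "missionA": {"slot1": 40.0, "slot2": 70.0},
    "missionB": {"slot1": 50.0, "slot2": 30.0},
}

allocation = {}
for slot in ["slot1", "slot2"]:
    offers = sorted(((bids[m].get(slot, 0.0), m) for m in budgets), reverse=True)
    for bid, mission in offers:
        if bid > 0 and budgets[mission] >= bid:
            allocation[slot] = mission
            budgets[mission] -= bid          # spend part of the budget
            break

print(allocation)   # {'slot1': 'missionB', 'slot2': 'missionA'}
```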

  17. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms for scheduling the robot inverse dynamics computation consisting of m computational modules with precedence relationship to be executed on a multiprocessor system consisting of p identical homogeneous processors with processor and communication costs to achieve minimum computation time are presented. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and the scheduling problems; both have been known to be NP-complete. Thus, to speed up the searching for a solution, two heuristic algorithms were proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
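
    Neither of the paper's two heuristics is reconstructed here. The sketch below is a plain list-scheduling pass over a made-up module graph: modules are visited in a precedence-respecting order and each is assigned to the earliest-free processor, which conveys the structure of the mapping problem without the weighted bipartite matching or simulated annealing refinements.

```python
import heapq

# Plain list-scheduling sketch: walk modules in a precedence-respecting order
# and assign each to the earliest-free processor. Made-up task graph and costs;
# not the paper's weighted-bipartite-matching or simulated-annealing heuristics.
COST = {"A": 3, "B": 2, "C": 4, "D": 1}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
ORDER = ["A", "B", "C", "D"]                      # topological order by level

def list_schedule(p=2):
    finish = {}                                   # module -> finish time
    procs = [(0.0, i) for i in range(p)]          # heap of (free time, proc id)
    heapq.heapify(procs)
    schedule = []
    for m in ORDER:
        free, pid = heapq.heappop(procs)
        start = max([free] + [finish[q] for q in PRED[m]])
        finish[m] = start + COST[m]
        schedule.append((m, pid, start, finish[m]))
        heapq.heappush(procs, (finish[m], pid))
    return schedule

print(list_schedule())
# [('A', 0, 0.0, 3.0), ('B', 1, 3.0, 5.0), ('C', 0, 3.0, 7.0), ('D', 1, 7.0, 8.0)]
```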

  18. Non-traditional Sensor Tasking for SSA: A Case Study

    NASA Astrophysics Data System (ADS)

    Herz, A.; Herz, E.; Center, K.; Martinez, I.; Favero, N.; Clark, C.; Therien, W.; Jeffries, M.

    Industry has recognized that maintaining SSA of the orbital environment going forward is too challenging for the government alone. Consequently, there are a significant number of commercial activities in various stages of development standing up novel sensors and sensor networks to assist in SSA gathering and dissemination. Use of these systems will allow government and military operators to focus on the most sensitive space control issues while allocating routine or lower priority data gathering responsibility to the commercial side. The fact that there will be multiple (perhaps many) commercial sensor capabilities available in this new operational model calls for a common access solution. Absent a central access point to assert data needs, optimized use of all commercial sensor resources is not possible and the opportunity for coordinated collections satisfying overarching SSA-elevating objectives is lost. Orbit Logic is maturing its Heimdall Web system - an architecture facilitating “data requestor” perspectives (allowing government operations centers to assert SSA data gathering objectives) and “sensor operator” perspectives (through which multiple sensors of varying phenomenology and capability are integrated via machine-to-machine interfaces). When requestors submit their needs, Heimdall’s planning engine determines tasking schedules across all sensors, optimizing their use via an SSA-specific figure-of-merit. ExoAnalytic was a key partner in refining the sensor operator interfaces, working with Orbit Logic through specific details of sensor tasking schedule delivery and the return of observation data. Scant preparation on both sides preceded several integration exercises (walk-then-run style), which culminated in successful demonstration of the ability to supply optimized schedules for routine public catalog data collection – then adapt sensor tasking schedules in real-time upon receipt of urgent data collection requests. This paper will provide a narrative of the joint integration process - detailing decision points, compromises, and results obtained on the road toward a set of interoperability standards for commercial sensor accommodation.

  19. FX-87 performance measurements: data-flow implementation. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammel, R.T.; Gifford, D.K.

    1988-11-01

    This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation) and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter we used was based on unrealistic assumptions, and thus our results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.
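    The core idea - using statically inferred effects to decide which expressions may execute in parallel - can be illustrated with a small sketch; the (region, kind) effect representation and the level-based grouping below are invented simplifications, not FX-87 syntax or its actual scheduling rule.

      # Illustrative sketch (invented representation, not FX-87): an expression's
      # effect is a set of (region, kind) pairs; two expressions interfere when one
      # writes a region the other reads or writes, and only non-interfering work
      # may share a parallel step.
      def interferes(effects_a, effects_b):
          for region_a, kind_a in effects_a:
              for region_b, kind_b in effects_b:
                  if region_a == region_b and "write" in (kind_a, kind_b):
                      return True
          return False

      def schedule_levels(exprs):
          """Assign each expression a parallel step: one more than the latest
          step of any earlier expression it interferes with."""
          levels = []
          for i, (name, eff) in enumerate(exprs):
              dep_levels = [levels[j] for j in range(i) if interferes(eff, exprs[j][1])]
              levels.append(max(dep_levels, default=0) + 1)
          steps = {}
          for (name, _), lvl in zip(exprs, levels):
              steps.setdefault(lvl, []).append(name)
          return [steps[lvl] for lvl in sorted(steps)]

      exprs = [
          ("e1", {("r1", "read")}),
          ("e2", {("r1", "write")}),                 # conflicts with e1 and e3
          ("e3", {("r1", "read"), ("r2", "read")}),
          ("e4", {("r3", "write")}),                 # independent of the rest
      ]
      print(schedule_levels(exprs))                  # -> [['e1', 'e4'], ['e2'], ['e3']]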

  20. MROrchestrator: A Fine-Grained Resource Orchestration Framework for MapReduce Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Bikash; Prabhakar, Ramya; Kandemir, Mahmut

    2012-01-01

    Efficient resource management in data centers and clouds running large distributed data processing frameworks like MapReduce is crucial for enhancing the performance of hosted applications and boosting resource utilization. However, existing resource scheduling schemes in Hadoop MapReduce allocate resources at the granularity of fixed-size, static portions of nodes, called slots. In this work, we show that MapReduce jobs have widely varying demands for multiple resources, making the static and fixed-size slot-level resource allocation a poor choice both from the performance and resource utilization standpoints. Furthermore, lack of co-ordination in the management of multiple resources across nodes prevents dynamic slot reconfiguration, and leads to resource contention. Motivated by this, we propose MROrchestrator, a MapReduce resource Orchestrator framework, which can dynamically identify resource bottlenecks, and resolve them through fine-grained, co-ordinated, and on-demand resource allocations. We have implemented MROrchestrator on two 24-node native and virtualized Hadoop clusters. Experimental results with a suite of representative MapReduce benchmarks demonstrate up to 38% reduction in job completion times, and up to 25% increase in resource utilization. We further show how popular resource managers like NGM and Mesos when augmented with MROrchestrator can hike up their performance.
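    A rough sketch of the kind of fine-grained, on-demand reallocation loop the framework describes is shown below; all names, thresholds, and the reclaim-half-the-slack rule are hypothetical and do not reflect MROrchestrator's actual interface.

      # Hypothetical monitor/arbitrate loop illustrating fine-grained, on-demand
      # resource reallocation for co-located tasks (not MROrchestrator's real API).
      def rebalance(tasks, capacity, threshold=0.9):
          """tasks: {task_id: {"usage": x, "allocation": y}} for one resource on one node."""
          # Tasks using nearly all of their allocation are treated as bottlenecked...
          starved = [t for t, s in tasks.items() if s["usage"] >= threshold * s["allocation"]]
          # ...and tasks with ample slack donate part of their allocation.
          donors = [t for t, s in tasks.items() if s["usage"] < 0.5 * s["allocation"]]

          free = capacity - sum(s["allocation"] for s in tasks.values())
          for t in donors:
              slack = tasks[t]["allocation"] - tasks[t]["usage"]
              tasks[t]["allocation"] -= 0.5 * slack     # reclaim half the unused share
              free += 0.5 * slack
          for t in starved:                             # grant reclaimed capacity on demand
              tasks[t]["allocation"] += free / len(starved)
          return tasks

      tasks = {"map_1": {"usage": 3.8, "allocation": 4.0},
               "map_2": {"usage": 1.0, "allocation": 4.0}}
      print(rebalance(tasks, capacity=8.0))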

  1. Prospective computerized analyses of sensibility in breast reconstruction with non-reinnervated DIEP flap.

    PubMed

    Santanelli, Fabio; Longo, Benedetto; Angelini, Matteo; Laporta, Rosaria; Paolini, Guido

    2011-05-01

    The deep inferior epigastric perforator (DIEP) flap is considered the definitive standard for autologous breast reconstruction because of its ability to restore shape, its consistency, and its static and dynamic symmetry, but the degree of spontaneous sensory recovery is still widely discussed. To clarify the real need for sensitive nerve coaptation, return of sensibility in DIEP flaps was investigated using a pressure-specifying sensory device. Thirty consecutive patients with breast cancer scheduled for modified radical mastectomy, axillary node dissection, and immediate reconstruction with cutaneous-adipose DIEP flaps without nerve repair were enrolled in the study. Sensibility for one and two points, static and moving, was tested preoperatively on the breasts and abdomen, and postoperatively at 6 and 12 months on the DIEP flaps. A t test was used for comparison of paired data and to investigate which factors affected sensory recovery. Preoperative healthy breast and abdomen pressure thresholds were lower for two-point than one-point discrimination and for moving discriminations compared with static ones at 6 and 12 months. Although they were significantly higher than those for contralateral healthy breasts (p < 0.05), pressure thresholds in DIEP flaps at 12 months were lower than at 6 months, showing a significant progressive sensory recovery (p < 0.05). At 12 months postoperatively, the best sensibility recovery was found at the inferior lateral quadrant, the worst at the superior medial quadrant. Age and flap weight were factors related to the performance of sensory recovery. DIEP flap transfer for immediate breast reconstruction undergoes satisfactory progressive spontaneous sensitive recovery at 6 and 12 months after surgery, and operative time spent dissecting sensitive perforator branches and their coaptation in recipient site could be spared.

  2. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain a search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
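    To illustrate the priority-based encoding (in the spirit of Gen and Cheng's representation; the decoding rule below is a generic sketch and not necessarily the exact procedure of the paper), a chromosome assigns a priority to every node, and a path is grown from the source by always stepping to the unvisited neighbor with the highest priority.

      # Sketch of priority-based chromosome decoding: node priorities -> a path.
      def decode_path(priorities, adjacency, source, sink):
          path, node, visited = [source], source, {source}
          while node != sink:
              candidates = [n for n in adjacency[node] if n not in visited]
              if not candidates:            # dead end: this chromosome decodes to no path
                  return None
              node = max(candidates, key=lambda n: priorities[n])
              visited.add(node)
              path.append(node)
          return path

      adjacency = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
      priorities = {1: 4, 2: 1, 3: 3, 4: 2}   # one chromosome (node -> priority)
      print(decode_path(priorities, adjacency, source=1, sink=4))   # -> [1, 3, 4]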

  3. Analysis of wind-resistant and stability for cable tower in cable-stayed bridge with four towers

    NASA Astrophysics Data System (ADS)

    Meng, Yangjun; Li, Can

    2017-06-01

    Wind speed time history simulation methods are introduced first, with the harmonic synthesis method described in detail. Second, taking the Chishi bridge as an example and choosing particular sections, three-component coefficient simulation analyses between -4° and 4° were carried out with the Fluent software, combined with the design wind speed. The results show that the drag coefficient reaches its maximum when the angle of attack is 1°. According to measured wind speed samples, time history curves of wind speed at the bridge deck and tower roof were obtained, and a wind-resistant time history analysis for tower No. 5 was carried out. The results show that the dynamic coefficients differ with different calculation standards, especially for the transverse bending moment, and that the pulsating crosswind load does not show a dynamic amplification effect. Under pulsating wind loads at the bridge deck or tower roof, the maximum displacement at the top of the tower and the maximum stress at the bottom of the tower are within the allowable range. The transverse stiffness of the tower is greater than the longitudinal stiffness, so wind-resistant analysis should give priority to the longitudinal direction. Since dynamic coefficients differ with different standards, the maximum dynamic coefficient should be used for the pseudo-static analysis. Finally, the static stability of the tower is analyzed with different load combinations, and the galloping stability of the cable tower is verified.

  4. Phase Transition in Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under Static Compression: An Application of the First-Principles Method Specialized for CHNO Solid Explosives.

    PubMed

    Zhang, Lei; Jiang, Sheng-Li; Yu, Yi; Long, Yao; Zhao, Han-Yue; Peng, Li-Juan; Chen, Jun

    2016-11-10

    The first-principles method is challenged by accurate prediction of van der Waals interactions, which are ubiquitous in nature and crucial for determining the structure of molecules and condensed matter. We have contributed to this by constructing a set of pseudopotentials and pseudoatomic orbital basis specialized for molecular systems consisting of C/H/N/O elements. The reliability of the present method is verified from the interaction energies of 45 kinds of complexes (comparing with CCSD(T)) and the crystalline structures of 23 kinds of typical explosive solids (comparing with experiments). Using this method, we have studied the phase transition of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under static compression up to 50 GPa. Kinetically, intramolecular deformation has priority in the competition with intermolecular packing deformation by ∼87%. A possible γ → β phase transition is found at around 2.10 GPa, and the migration of H2O has an effect of kinetically pushing this process. We make it clear that no β → δ/ε → δ phase transition occurs at 27 GPa, which has long been a hot debate in experiments. In addition, the P-V relation, bulk modulus, and acoustic velocity are also predicted for α-, δ-, and γ-HMX, which are experimentally unavailable.

  5. Utilization of Ancillary Data Sets for Conceptual SMAP Mission Algorithm Development and Product Generation

    NASA Technical Reports Server (NTRS)

    O'Neill, P.; Podest, E.

    2011-01-01

    The planned Soil Moisture Active Passive (SMAP) mission is one of the first Earth observation satellites being developed by NASA in response to the National Research Council's Decadal Survey, Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond [1]. Scheduled to launch late in 2014, the proposed SMAP mission would provide high resolution and frequent revisit global mapping of soil moisture and freeze/thaw state, utilizing enhanced Radio Frequency Interference (RFI) mitigation approaches to collect new measurements of the hydrological condition of the Earth's surface. The SMAP instrument design incorporates an L-band radar (3 km) and an L band radiometer (40 km) sharing a single 6-meter rotating mesh antenna to provide measurements of soil moisture and landscape freeze/thaw state [2]. These observations would (1) improve our understanding of linkages between the Earth's water, energy, and carbon cycles, (2) benefit many application areas including numerical weather and climate prediction, flood and drought monitoring, agricultural productivity, human health, and national security, (3) help to address priority questions on climate change, and (4) potentially provide continuity with brightness temperature and soil moisture measurements from ESA's SMOS (Soil Moisture Ocean Salinity) and NASA's Aquarius missions. In the planned SMAP mission prelaunch time frame, baseline algorithms are being developed for generating (1) soil moisture products both from radiometer measurements on a 36 km grid and from combined radar/radiometer measurements on a 9 km grid, and (2) freeze/thaw products from radar measurements on a 3 km grid. These retrieval algorithms need a variety of global ancillary data, both static and dynamic, to run the retrieval models, constrain the retrievals, and provide flags for indicating retrieval quality. The choice of which ancillary dataset to use for a particular SMAP product would be based on a number of factors, including its availability and ease of use, its inherent error and resulting impact on the overall soil moisture or freeze/thaw retrieval accuracy, and its compatibility with similar choices made by the SMOS mission. All decisions regarding SMAP ancillary data sources would be fully documented by the SMAP Project and made available to the user community.

  6. Contact Graph Routing

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott C.

    2011-01-01

    Contact Graph Routing (CGR) is a dynamic routing system that computes routes through a time-varying topology of scheduled communication contacts in a network based on the DTN (Delay-Tolerant Networking) architecture. It is designed to enable dynamic selection of data transmission routes in a space network based on DTN. This dynamic responsiveness in route computation should be significantly more effective and less expensive than static routing, increasing total data return while at the same time reducing mission operations cost and risk. The basic strategy of CGR is to take advantage of the fact that, since flight mission communication operations are planned in detail, the communication routes between any pair of bundle agents in a population of nodes that have all been informed of one another's plans can be inferred from those plans rather than discovered via dialogue (which is impractical over long one-way-light-time space links). Messages that convey this planning information are used to construct contact graphs (time-varying models of network connectivity) from which CGR automatically computes efficient routes for bundles. Automatic route selection increases the flexibility and resilience of the space network, simplifying cross-support and reducing mission management costs. Note that there are no routing tables in Contact Graph Routing. The best route for a bundle destined for a given node may routinely be different from the best route for a different bundle destined for the same node, depending on bundle priority, bundle expiration time, and changes in the current lengths of transmission queues for neighboring nodes; routes must be computed individually for each bundle, from the Bundle Protocol agent's current network connectivity model for the bundle's destination node (the contact graph). Clearly this places a premium on optimizing the implementation of the route computation algorithm. The scalability of CGR to very large networks remains a research topic. The information carried by CGR contact plan messages is useful not only for dynamic route computation, but also for the implementation of rate control, congestion forecasting, transmission episode initiation and termination, timeout interval computation, and retransmission timer suspension and resumption.
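    As a rough illustration of route computation over a contact plan (a simplified sketch, not the flight implementation: contacts are (from, to, start, end) windows, transmission within an open window is treated as instantaneous, and one-way light time, data rates, and queue lengths are ignored), the function below finds the earliest arrival time at a destination by a Dijkstra-style search over contacts.

      import heapq

      # Simplified contact-graph search (illustrative only): each contact is
      # (from_node, to_node, start, end); a bundle can use a contact if it is
      # present at from_node before the contact's end time.
      def earliest_arrival(contacts, source, destination, t0=0.0):
          best = {source: t0}
          heap = [(t0, source)]
          while heap:
              t, node = heapq.heappop(heap)
              if node == destination:
                  return t
              if t > best.get(node, float("inf")):
                  continue
              for frm, to, start, end in contacts:
                  if frm != node or t > end:
                      continue
                  arrival = max(t, start)              # wait for the contact to open
                  if arrival < best.get(to, float("inf")):
                      best[to] = arrival
                      heapq.heappush(heap, (arrival, to))
          return None                                  # unreachable within the plan

      contacts = [("lander", "orbiter", 10, 20),
                  ("orbiter", "earth", 15, 30),
                  ("lander", "earth", 40, 50)]
      print(earliest_arrival(contacts, "lander", "earth"))   # relay arrives at 15, direct link only at 40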

  7. PS-CARA: Context-Aware Resource Allocation Scheme for Mobile Public Safety Networks.

    PubMed

    Kaleem, Zeeshan; Khaliq, Muhammad Zubair; Khan, Ajmal; Ahmad, Ishtiaq; Duong, Trung Q

    2018-05-08

    The fifth-generation (5G) communications systems are expected to support users with diverse quality-of-service (QoS) requirements. Besides these requirements, the task of utmost importance is to support emergency communication services during natural or man-made disasters. Most of the conventional base stations are not properly functional during a disaster situation, so deployment of emergency base stations such as the mobile personal cell (mPC) is crucial. An mPC having moving capability can move in the disaster area to provide emergency communication services. However, mPC deployment causes severe co-channel interference to the users in its vicinity. The problem with existing resource allocation schemes is their support only for static environments, which does not fit well for an mPC. So, a resource allocation scheme for mPC users is desired that can dynamically allocate resources based on users' location and their connection establishment priority. In this paper, we propose a public safety users priority-based context-aware resource allocation (PS-CARA) scheme for users' sum-rate maximization in a disaster environment. Simulation results demonstrate that the proposed PS-CARA scheme can increase the user average and edge rate around 10.3% and 32.8%, respectively, because of context information availability and by prioritizing the public safety users. The simulation results ensure that call blocking probability is also reduced considerably under the PS-CARA scheme.
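    As a loose sketch of priority- and context-aware allocation in this spirit (purely illustrative; the two-tier rule and all numbers below are assumptions, not the PS-CARA algorithm), resource blocks can be handed out by serving public-safety users before commercial users and, within each class, the user with the best channel quality first.

      # Illustrative two-tier allocation sketch (assumed rule, not PS-CARA):
      # public-safety users are served before commercial ones; within a class,
      # blocks go to the user with the highest channel quality.
      def allocate_blocks(users, num_blocks, blocks_per_user=1):
          order = sorted(users, key=lambda u: (not u["public_safety"], -u["channel_quality"]))
          allocation, remaining = {}, num_blocks
          for user in order:
              grant = min(blocks_per_user, remaining)
              allocation[user["id"]] = grant
              remaining -= grant
          return allocation

      users = [
          {"id": "ps_1", "public_safety": True,  "channel_quality": 0.4},
          {"id": "ue_1", "public_safety": False, "channel_quality": 0.9},
          {"id": "ps_2", "public_safety": True,  "channel_quality": 0.7},
      ]
      print(allocate_blocks(users, num_blocks=2))   # both blocks go to the public-safety users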

  8. PS-CARA: Context-Aware Resource Allocation Scheme for Mobile Public Safety Networks

    PubMed Central

    Khaliq, Muhammad Zubair; Khan, Ajmal; Ahmad, Ishtiaq

    2018-01-01

    The fifth-generation (5G) communications systems are expected to support users with diverse quality-of-service (QoS) requirements. Besides these requirements, the task of utmost importance is to support emergency communication services during natural or man-made disasters. Most of the conventional base stations are not properly functional during a disaster situation, so deployment of emergency base stations such as the mobile personal cell (mPC) is crucial. An mPC having moving capability can move in the disaster area to provide emergency communication services. However, mPC deployment causes severe co-channel interference to the users in its vicinity. The problem with existing resource allocation schemes is their support only for static environments, which does not fit well for an mPC. So, a resource allocation scheme for mPC users is desired that can dynamically allocate resources based on users' location and their connection establishment priority. In this paper, we propose a public safety users priority-based context-aware resource allocation (PS-CARA) scheme for users' sum-rate maximization in a disaster environment. Simulation results demonstrate that the proposed PS-CARA scheme can increase the user average and edge rate around 10.3% and 32.8%, respectively, because of context information availability and by prioritizing the public safety users. The simulation results ensure that call blocking probability is also reduced considerably under the PS-CARA scheme. PMID:29738499

  9. Design and test of a tip-tilt driver for an image stabilization system

    NASA Astrophysics Data System (ADS)

    Casas, Albert; Gómez, José María.; Roma, David; Carmona, Manuel; López, Manel; Bosch, José; Herms, Atilù; Sabater, Josep; Volkmer, Reiner; Heidecke, Frank; Maue, Thorsten; Nakai, Eiji; Baumgartner, Jörg; Schmidt, Wolfgang

    2016-08-01

    The tip/tilt driver is part of the Polarimetric and Helioseismic Imager (PHI) instrument for the ESA Solar Orbiter (SO), which is scheduled to launch in 2017. PHI captures polarimetric images of the Sun to better understand our nearest star. The paper covers an analog amplifier design to drive a capacitive solid-state actuator such as a piezoelectric actuator. Due to its static and continuous operation, the actuator needs to be supplied by high-quality, low-frequency, high-voltage sinusoidal signals. The described circuit is an efficiency-improved Class-AB amplifier capable of recovering up to 60% of the charge stored in the actuator. The results obtained after the qualification model test demonstrate the feasibility of the circuit and its compliance with the requirements set by the scientific team.

  10. Early Development of the First Earth Venture Mission: How CYGNSS Is Using Engineering Models to Validate the Design

    NASA Technical Reports Server (NTRS)

    Wells, James; Scherrer, John; Van Noord, Jonathan; Law, Richard

    2015-01-01

    In response to the recommendations made in the National Research Council's Earth Science and Applications 2007 Decadal Survey, NASA has initiated the Earth Venture line of mission opportunities. The first orbital mission chosen for this competitively selected, cost and schedule constrained, Principal Investigator-led opportunity is the CYclone Global Navigation Satellite System (CYGNSS). The goal of CYGNSS is to understand the coupling between ocean surface properties, moist atmospheric thermodynamics, radiation, and convective dynamics in the inner core of a tropical cyclone. The CYGNSS mission is comprised of eight Low Earth Observing (LEO) microsatellites that use GPS bi-static scatterometry to measure ocean surface winds.

  11. Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuver. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.

  12. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
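    For reference, the Thomas algorithm mentioned above is the standard forward-elimination/back-substitution solver for tridiagonal systems; a minimal serial version (not the pipelined parallel variant discussed in the paper) looks like this.

      # Serial Thomas algorithm for a tridiagonal system: a (sub-), b (main),
      # c (super-diagonal), d (right-hand side). Returns the solution vector x.
      def thomas(a, b, c, d):
          n = len(d)
          cp, dp = [0.0] * n, [0.0] * n
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                         # forward elimination (recurrence)
              denom = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / denom if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
          x = [0.0] * n
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):                # back substitution (recurrence)
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # Example: a small system arising from a 1-D second-derivative stencil.
      print(thomas(a=[0, 1, 1, 1], b=[-2, -2, -2, -2], c=[1, 1, 1, 0], d=[1, 0, 0, 0]))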

  13. Camera memory study for large space telescope. [charge coupled devices

    NASA Technical Reports Server (NTRS)

    Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.

    1975-01-01

    Specifications were developed for a memory system to be used as the storage media for camera detectors on the large space telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics of different approaches to the memory system are reported, with comparisons made within the guidelines set forth for the LST application. Priority ordering of comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space qualified units in 1980.

  14. Replacing the Engine In Your Car While You Are Still Driving It

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjorklund, Eric A.

    Replacing your accelerator’s timing system with a completely different architecture is not something that happens very often. Perhaps even rarer is the requirement that the replacement not interfere with the accelerator’s normal operational cycle. In 2011, the Los Alamos Neutron Science Center (LANSCE) began the purchasing and installation phase of a nine-year rolling upgrade project which will eventually result in the complete replacement of the low-level RF system, the timing system, the industrial I/O system, the beam-synchronized data acquisition system, the fast-protect reporting system, and much of the diagnostic equipment. These projects are mostly independent of each other, with their own installation schedules, priorities, and time-lines. All of them, however, must interface with the timing system.

  15. Recommendations for safe vaccination in children at the risk of taking allergic reactions to vaccine components

    PubMed

    2018-04-01

    Vaccines are one of the most important advances in medicine as a public health tool for the control of immunopreventable diseases. Occasionally, adverse reactions may occur. If a child has a reaction to a vaccine, it is likely to disrupt his immunization schedule with risks to himself and the community. This establishes the importance of correctly diagnosing a possible allergy and defining appropriate behavior. Allergic reactions to vaccines may be due to the immunogenic component, to the residual proteins in the manufacturing process and to antimicrobial agents, stabilizers, preservatives and any other element used in the manufacturing process. Vaccination should be a priority in the entire child population, so this document describes particular situations of allergic children to minimize the risk of immunizations and achieve safe vaccination.

  16. GLAST Large Area Telescope Multiwavelength Planning

    NASA Technical Reports Server (NTRS)

    Thompson, D. J.; Cameron, R. A.; Digel, S. W.; Wood, K. S.

    2006-01-01

    Because gamma-ray astrophysics depends in many ways on multiwavelength studies, the GLAST Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority needs include: (1) radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-spectrum blazar flare measurements; (5) characterization of gamma-ray transients, including gamma ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for unidentified gamma-ray sources. Work on the first three of these activities is needed before launch. The GLAST Large Area Telescope is an international effort, with U.S. funding provided by the Department of Energy and NASA.

  17. Causes and consequences of sleepiness among college students.

    PubMed

    Hershner, Shelley D; Chervin, Ronald D

    2014-01-01

    Daytime sleepiness, sleep deprivation, and irregular sleep schedules are highly prevalent among college students, as 50% report daytime sleepiness and 70% attain insufficient sleep. The consequences of sleep deprivation and daytime sleepiness are especially problematic to college students and can result in lower grade point averages, increased risk of academic failure, compromised learning, impaired mood, and increased risk of motor vehicle accidents. This article reviews the current prevalence of sleepiness and sleep deprivation among college students, contributing factors for sleep deprivation, and the role of sleep in learning and memory. The impact of sleep and sleep disorders on academics, grade point average, driving, and mood will be examined. Most importantly, effective and viable interventions to decrease sleepiness and sleep deprivation through sleep education classes, online programs, encouragement of naps, and adjustment of class time will be reviewed. This paper highlights that addressing sleep issues, which are not often considered as a risk factor for depression and academic failure, should be encouraged. Promotion of university and college policies and class schedules that encourage healthy and adequate sleep could have a significant impact on the sleep, learning, and health of college students. Future research to investigate effective and feasible interventions, which disseminate both sleep knowledge and encouragement of healthy sleep habits to college students in a time and cost effective manner, is a priority.

  18. Cost-effectiveness analysis of AS04-adjuvanted human papillomavirus 16/18 vaccine compared with human papillomavirus 6/11/16/18 vaccine in the Philippines, with the new 2-dose schedule.

    PubMed

    Germar, Maria Julieta; Purugganan, Carrie; Bernardino, Ma Socorro; Cuenca, Benjamin; Chen, Y-Chen; Li, Xiao; Van Kriekinge, Georges; Lee, I-Heng

    2017-05-04

    Cervical cancer (CC) is the second leading cause of cancer death among Filipino women. Human papillomavirus (HPV) vaccination protects against CC. Two vaccines (AS04-HPV-16/18 and 4vHPV) are approved in the Philippines; they were originally developed for a 3-dose (3D) administration and have recently been approved in a 2-dose schedule (2D). This study aims to evaluate the cost-effectiveness of HPV vaccination of 13-year-old Filipino girls, in addition to current screening, in the new 2D schedule. An existing static lifetime, one-year cycle Markov cohort model was adapted to the Philippine settings to simulate the natural history of low-risk and oncogenic HPV infection, the effects of screening and vaccination of a 13-year-old girls cohort vaccinated with either the 2D-AS04-HPV-16/18 or 2D-4vHPV assuming a 100% vaccination coverage. Incremental cost, quality-adjusted life year (QALY) and cost-effectiveness were derived from these estimates. Input data were obtained from published sources and a Delphi panel, using country-specific data where possible. Sensitivity analyses were performed to assess the robustness of the model. The model estimated that 2D-AS04-HPV-16/18 prevented 986 additional CC cases and 399 CC deaths (undiscounted), gained 555 additional QALYs (discounted), and saved 228.1 million Philippine pesos (PHP) compared with the 2D-4vHPV. In conclusion, AS04-HPV-16/18 is shown to be dominant over 4vHPV in the Philippines, with greater estimated health benefits and lower costs.
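    As a generic illustration of a one-year-cycle static Markov cohort model of this kind (the three-state structure, transition probabilities, utilities, and costs below are invented for illustration and are not the published model's parameters), a cohort vector is pushed through a transition matrix once per cycle while discounted QALYs and costs are accumulated.

      # Minimal Markov cohort sketch (invented three-state example, not the
      # published model): states are healthy, cervical cancer (CC), and dead.
      import numpy as np

      transition = np.array([[0.995, 0.004, 0.001],   # healthy ->
                             [0.000, 0.850, 0.150],   # CC ->
                             [0.000, 0.000, 1.000]])  # dead (absorbing)
      utility = np.array([1.00, 0.60, 0.00])          # QALY weight per state per year
      cost = np.array([0.0, 2000.0, 0.0])             # annual cost per state

      def run_cohort(cohort, cycles, discount=0.03):
          total_qaly = total_cost = 0.0
          for year in range(cycles):
              factor = 1.0 / (1.0 + discount) ** year
              total_qaly += factor * float(cohort @ utility)
              total_cost += factor * float(cohort @ cost)
              cohort = cohort @ transition            # advance one yearly cycle
          return total_qaly, total_cost

      qaly, total_cost = run_cohort(np.array([1000.0, 0.0, 0.0]), cycles=50)
      print(round(qaly), round(total_cost))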

  19. Cost-effectiveness analysis of AS04-adjuvanted human papillomavirus 16/18 vaccine compared with human papillomavirus 6/11/16/18 vaccine in the Philippines, with the new 2-dose schedule

    PubMed Central

    Germar, Maria Julieta; Purugganan, Carrie; Bernardino, Ma. Socorro; Cuenca, Benjamin; Chen, Y-Chen; Li, Xiao; Van Kriekinge, Georges; Lee, I-Heng

    2017-01-01

    Cervical cancer (CC) is the second leading cause of cancer death among Filipino women. Human papillomavirus (HPV) vaccination protects against CC. Two vaccines (AS04-HPV-16/18 and 4vHPV) are approved in the Philippines; they were originally developed for a 3-dose (3D) administration and have recently been approved in a 2-dose schedule (2D). This study aims to evaluate the cost-effectiveness of HPV vaccination of 13-year-old Filipino girls, in addition to current screening, in the new 2D schedule. An existing static lifetime, one-year cycle Markov cohort model was adapted to the Philippine settings to simulate the natural history of low-risk and oncogenic HPV infection, the effects of screening and vaccination of a 13-year-old girls cohort vaccinated with either the 2D-AS04-HPV-16/18 or 2D-4vHPV assuming a 100% vaccination coverage. Incremental cost, quality-adjusted life year (QALY) and cost-effectiveness were derived from these estimates. Input data were obtained from published sources and a Delphi panel, using country-specific data where possible. Sensitivity analyses were performed to assess the robustness of the model. The model estimated that 2D-AS04-HPV-16/18 prevented 986 additional CC cases and 399 CC deaths (undiscounted), gained 555 additional QALYs (discounted), and saved 228.1 million Philippine pesos (PHP) compared with the 2D-4vHPV. In conclusion, AS04-HPV-16/18 is shown to be dominant over 4vHPV in the Philippines, with greater estimated health benefits and lower costs. PMID:28075249

  20. Marital attitude trajectories across adolescence.

    PubMed

    Willoughby, Brian J

    2010-11-01

    The current study seeks to address the implicit assumption in the developmental literature that marital attitudes are static by investigating how various marital attitudes might change across adolescence. Longitudinal change for three marital attitudes in relation to family structure, educational aspirations, race and gender is examined. Utilizing a sample of 1,010 high school students (53% male; 76% white) recruited from a Midwestern metropolitan area, latent growth models were used to model marital attitude trajectories across adolescence. The sample was followed for 4 years from ages 14 until 18. Results revealed that adolescents placed a higher priority on marriage as they prepared to transition into young adulthood but that gender, race and educational aspirations all altered the degree to which marital attitudes changed across the time period of the study. Results highlight the importance of considering multiple constructs of marital attitudes and the need for more longitudinal work in this area of study.

  1. Shifting Gears: Triage and Traffic in Urban India.

    PubMed

    Solomon, Harris

    2017-09-01

    While studies of triage in clinical medical literature tend to focus on the knowledge required to carry out sorting, this article details the spatial features of triage. It is based on participant observation of traffic-related injuries in a Mumbai hospital casualty ward. It pays close attention to movement, specifically to adjustments, which include moving bodies, changes in treatment priority, and interruptions in care. The article draws on several ethnographic cases of injury and its aftermath that gather and separate patients, kin, and bystanders, all while a triage medical authority is charged with sorting them out. I argue that attention must be paid to differences in movement, which can be overlooked if medical decision-making is taken to be a static verdict. The explanatory significance of this distinction between adjustment and adjudication is a more nuanced understanding of triage as an iterative, spatial process. © 2017 by the American Anthropological Association.

  2. Application of Hybrid Optimization-Expert System for Optimal Power Management on Board Space Power Station

    NASA Technical Reports Server (NTRS)

    Momoh, James; Chattopadhyay, Deb; Basheer, Omar Ali AL

    1996-01-01

    The space power system has two sources of energy: photo-voltaic blankets and batteries. The optimal power management problem on-board has two broad operations: off-line power scheduling to determine the load allocation schedule of the next several hours based on the forecast of load and solar power availability. The nature of this study puts less emphasis on speed requirement for computation and more importance on the optimality of the solution. The second category problem, on-line power rescheduling, is needed in the event of occurrence of a contingency to optimally reschedule the loads to minimize the 'unused' or 'wasted' energy while keeping the priority on certain type of load and minimum disturbance of the original optimal schedule determined in the first-stage off-line study. The computational performance of the on-line 'rescheduler' is an important criterion and plays a critical role in the selection of the appropriate tool. The Howard University Center for Energy Systems and Control has developed a hybrid optimization-expert systems based power management program. The pre-scheduler has been developed using a non-linear multi-objective optimization technique called the Outer Approximation method and implemented using the General Algebraic Modeling System (GAMS). The optimization model has the capability of dealing with multiple conflicting objectives viz. maximizing energy utilization, minimizing the variation of load over a day, etc. and incorporates several complex interaction between the loads in a space system. The rescheduling is performed using an expert system developed in PROLOG which utilizes a rule-base for reallocation of the loads in an emergency condition viz. shortage of power due to solar array failure, increase of base load, addition of new activity, repetition of old activity etc. Both the modules handle decision making on battery charging and discharging and allocation of loads over a time-horizon of a day divided into intervals of 10 minutes. The models have been extensively tested using a case study for the Space Station Freedom and the results for the case study will be presented. Several future enhancements of the pre-scheduler and the 'rescheduler' have been outlined which include graphic analyzer for the on-line module, incorporating probabilistic considerations, including spatial location of the loads and the connectivity using a direct current (DC) load flow model.
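    The on-line rescheduling stage lends itself to a small illustration; the rule below (greedily keep loads in descending priority order as long as they fit the reduced supply, deferring the rest) is a simplified stand-in for the PROLOG rule base described above, and all load names and numbers are hypothetical.

      # Hypothetical contingency rescheduler sketch: when available power drops,
      # keep the highest-priority loads that fit and defer the rest.
      def reschedule(loads, available_power):
          """loads: list of dicts with 'name', 'power', 'priority' (higher = more critical)."""
          ranked = sorted(loads, key=lambda l: l["priority"], reverse=True)
          scheduled, deferred, used = [], [], 0.0
          for load in ranked:
              if used + load["power"] <= available_power:
                  scheduled.append(load["name"])
                  used += load["power"]
              else:
                  deferred.append(load["name"])       # shifted to a later interval
          return scheduled, deferred

      loads = [{"name": "life_support", "power": 4.0, "priority": 3},
               {"name": "experiment_A", "power": 3.0, "priority": 2},
               {"name": "battery_charge", "power": 2.0, "priority": 1}]
      print(reschedule(loads, available_power=6.0))   # e.g. after a solar array failure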

  3. Dynamic histomorphometric evaluation of human fetal bone formation.

    PubMed

    Glorieux, F H; Salle, B L; Travers, R; Audra, P H

    1991-01-01

    We have evaluated dynamic and static parameters of bone formation in femoral metaphyses collected from two human fetuses at 19 weeks of gestation. Tetracycline was administered to the mother at set intervals (2-5-2 day schedule) before interruption of pregnancy. Labels were distinct and sharply linear, suggesting a well organized calcification front at this early stage of mineralization. Mineral apposition rate (MAR) was fastest (4.1 +/- 0.3 microns/d) in the periosteal (Ps) envelope, and about half that value in the endosteal envelopes (endocortical: 2.5 +/- 0.1, cancellous 2.1 +/- 0.1 microns/d). Because cellular activities may vary throughout the metaphyseal area, sections were arbitrarily separated in 0.75 mm layers starting from the growth plate. Three measured parameters decreased rapidly with increasing distance from the physis: Ps MAR: 4.9 to 2.3 microns/d, trabecular osteoid thickness: 5.9 to 1.2 microns, and cartilage volume (CgV/TV): 5.4% to 1.2%. Others did not vary significantly along the metaphysis. Comparison of several static parameters with those measured in five autopsy specimens from full-term infants showed that bone and cartilage volume, and trabecular thickness increased while osteoid thickness and parameters of resorption decreased in the second half of the gestation period. The study indicates that fetal bone matrix mineralization is already highly organized at mid-gestation, and validates the use of histomorphometry to assess bone maturation during early skeletal development.

  4. Climate change effects on water allocations with season dependent water rights.

    PubMed

    Null, Sarah E; Prudencio, Liana

    2016-11-15

    Appropriative water rights allocate surface water to competing users based on seniority. Often water rights vary seasonally with spring runoff, irrigation schedules, or other non-uniform supply and demand. Downscaled monthly Coupled Model Intercomparison Project multi-model, multi-emissions scenario hydroclimate data evaluate water allocation reliability and variability with anticipated hydroclimate change. California's Tuolumne watershed is a study basin, chosen because water rights are well-defined, simple, and include competing environmental, agricultural, and urban water uses representative of most basins. We assume that dedicated environmental flows receive first priority when mandated by federal law like the Endangered Species Act or hydropower relicensing, followed by senior agricultural water rights, and finally junior urban water rights. Environmental flows vary by water year and include April pulse flows, and senior agricultural water rights are 68% larger during historical spring runoff from April through June. Results show that senior water right holders receive the largest climate-driven reductions in allocated water when peak streamflow shifts from snowmelt-dominated spring runoff to mixed snowmelt- and rainfall-dominated winter runoff. Junior water right holders have higher uncertainty from inter-annual variability. These findings challenge conventional wisdom that water shortages are absorbed by junior water users and suggest that aquatic ecosystems may be disproportionally impaired by hydroclimate change, even when environmental flows receive priority. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. "Birth Control can Easily Take a Back Seat": Challenges Providing IUDs in Community Health Care Settings.

    PubMed

    Biggs, M Antonia; Kaller, Shelly; Harper, Cynthia C; Freedman, Lori; Mays, Aisha R

    2018-01-01

    To assess community health centers' (CHCs) capacity to offer streamlined intrauterine device (IUD) services. Prior to implementing a contraceptive training project, we surveyed health care staff (N=97) from 11 CHC sites that offer IUDs onsite. Twenty interviews with clinicians explored more deeply their challenges offering IUDs in the CHC setting. Most practices required multiple visits for IUD placement; most (66%) clinician survey respondents had placed an IUD, and 19% had placed an IUD as emergency contraception. Need for screening tests, scheduling challenges, pressures to meet patient quotas, and lack of priority given to women's health hindered streamlined IUD provision. Although access to IUDs has increased, significant barriers to provision in CHC settings persist. Clinic policies may need to address a variety of system and provider-level barriers to meet the needs of patients.

  6. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.
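    The linear-quadratic piece of such a design can be sketched in a few lines; the double-integrator model of along-path spacing error and the weighting matrices below are illustrative assumptions, not the model used in the report.

      # Illustrative LQR sketch (assumed double-integrator model of along-path
      # position/speed error, not the report's aircraft model).
      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0],      # position-error and speed-error dynamics
                    [0.0, 0.0]])
      B = np.array([[0.0],
                    [1.0]])          # commanded acceleration/deceleration
      Q = np.diag([1.0, 0.1])        # penalize spacing error more than speed error
      R = np.array([[1.0]])          # penalize control effort

      P = solve_continuous_are(A, B, Q, R)        # algebraic Riccati equation
      K = np.linalg.inv(R) @ B.T @ P              # optimal state-feedback gain
      print(K)                                    # u = -K x gives the speed-control law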

  7. STS-107 Payload Specialist Ilan Ramon at SPACEHAB during training

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - STS-107 Payload Specialist Ilan Ramon, from Israel, trains on equipment at SPACEHAB, Cape Canaveral, Fla. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  8. STS-107 Mission Specialist Kalpana Chawla at SPACEHAB during training

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - STS-107 Mission Specialist Kalpana Chawla looks over equipment at SPACEHAB, Cape Canaveral, Fla., during crew training. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  9. KSC01pd1881

    NASA Image and Video Library

    2001-12-19

    KENNEDY SPACE CENTER, FLA. -- STS-107 Commander Rick Husband and Mission Specialist Laurel Clark learn to work with mission-related equipment at SPACEHAB, Cape Canaveral, Fla. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  10. KSC-02pd0052

    NASA Image and Video Library

    2002-01-10

    KENNEDY SPACE CENTER, FLA. - STS-107 Payload Specialist Ilan Ramon, from Israel, trains on equipment at SPACEHAB, Cape Canaveral, Fla. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  11. KSC01pd1885

    NASA Image and Video Library

    2001-12-19

    KENNEDY SPACE CENTER, FLA. -- At SPACEHAB, Cape Canaveral, Fla., Commander Rick Husband works with an experiment that will be part of the mission. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  12. KSC-02pd0053

    NASA Image and Video Library

    2002-01-10

    KENNEDY SPACE CENTER, FLA. -- STS-107 Mission Specialist Kalpana Chawla scans paperwork for equipment at SPACEHAB, Cape Canaveral, Fla., during crew training. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  13. The planning cycle.

    PubMed

    Johnson, William

    2005-01-01

    Information technology planning can be described as a continuous cyclical process composed of three phases whose primary purpose is optimum allocation of scarce resources. In the assessment phase, planners assess user needs, environmental factors, business objectives, and IT infrastructure needs to develop IT projects that address needs in each of these areas. A major goal of this phase is to develop a broad IT inventory. The prioritization phase seeks to ensure optimum allocation of scarce resources by prioritizing IT projects based on: Costs--total life cycle costs. Benefits--both quantitative and non-quantitative, including support for the organization's strategic business objectives. Risks--subjective assessments of technological and non-technological risks. Implementation requirements--time and personnel requirements to implement the system. The scheduling phase incorporates sequencing considerations, personnel availability, and budgetary constraints to produce an IT plan in which project priorities are adjusted to meet organizational realities.

  14. Optimal SSN Tasking to Enhance Real-time Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Ferreira, J., III; Hussein, I.; Gerber, J.; Sivilli, R.

    2016-09-01

    Space Situational Awareness (SSA) is currently constrained by an overwhelming number of resident space objects (RSOs) that need to be tracked and the amount of data these observations produce. The Joint Centralized Autonomous Tasking System (JCATS) is an autonomous, net-centric tool that approaches these SSA concerns from an agile, information-based stance. Finite set statistics and stochastic optimization are used to maintain an RSO catalog and develop sensor tasking schedules based on operator configured, state information-gain metrics to determine observation priorities. This improves the efficiency of sensors to target objects as awareness changes and new information is needed, not at predefined frequencies solely. A net-centric, service-oriented architecture (SOA) allows for JCATS integration into existing SSA systems. Testing has shown operationally-relevant performance improvements and scalability across multiple types of scenarios and against current sensor tasking tools.

  15. Program on State Agency Remote Sensing Data Management (SARSDM). [missouri

    NASA Technical Reports Server (NTRS)

    Eastwood, L. F., Jr.; Gotway, E. O.

    1978-01-01

    A planning study for developing a Missouri natural resources information system (NRIS) that combines satellite-derived data and other information to assist in carrying out key state tasks was conducted. Four focal applications -- dam safety, ground water supply monitoring, municipal water supply monitoring, and Missouri River basin modeling were identified. Major contributions of the study are: (1) a systematic choice and analysis of a high priority application (water resources) for a Missouri, LANDSAT-based information system; (2) a system design and implementation plan, based on Missouri, but useful for many other states; (3) an analysis of system costs, component and personnel requirements, and scheduling; and (4) an assessment of deterrents to successful technological innovation of this type in state government, and a system management plan, based on this assessment, for overcoming these obstacles in Missouri.

  16. [The Japanese Health Care System: An Analysis of the Funding and Reimbursement System].

    PubMed

    Rump, Alexis; Schöffski, Oliver

    2017-08-10

    Objective The modern Japanese health care system was established during the Meiji period (1868-1912) following the example of Germany. In this paper, the funding and remuneration of health services and products in Japan are described. The focus lies on the mechanisms used to implement health policy goals and to control costs. Method Selective literature search. Results All permanent residents in Japan are enrolled in one of more than 3,000 compulsory health funds. Employees and public servants are covered through company or government-related health insurance schemes. Independent workers, the unemployed and the pensioners are usually assigned to health insurance plans managed by local city governments. The elderly over 75 years are insured through special health funds managed at the prefectural level. To correct the fiscal disparities among the health insurance programs, a risk adjustment is realized by compensatory financial transfers between the funds and substantial subsidies from the central and local governments. The statutory benefits package that is identical for all insurance plans is regulated in a single comprehensive schedule. All the covered health services and products are listed with the fees and compensations, and the conditions for the service providers to be remunerated are also stated. This fee and compensation schedule is regularly revised every 2 years under the leadership of the Ministry of Health, Labor and Welfare. The revisions are intended to contain health expenditures and to set incentives for the achievement of health policy goals. Conclusion The funding of the Japanese health care system and the risk adjustment mechanisms among health funds are well established and show a rather static character. The short- and mid-term development of the system is mainly controlled on the expenditure side through the unique and comprehensive fee and compensation schedule. The regular revisions of this schedule make it possible to react at relatively short notice to evolving situations and, through a policy of small improvements, to optimize the system as a whole. © Georg Thieme Verlag KG Stuttgart · New York.

  17. CARMENES instrument control system and operational scheduler

    NASA Astrophysics Data System (ADS)

    Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar

    2014-07-01

    The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope in the Calar Alto Observatory (Spain) and it will be equipped with two spectrographs covering the range from the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems, providing a tool to operate the instrument in an integrated manner from low to high user interaction level. The ICS interacts with the following subsystems: the near-IR and visible channels, composed of the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target visibility, sky background, required time sampling coverage) and the dynamic changes of the system conditions (e.g., weather). Off-line and on-line strategies are integrated into a single tool for a suitable transfer of the target prioritization made by the science team to the real-time schedule that will be used by the instrument operators. A suitable solution is expected to increase the efficiency of telescope operations, which will represent an important benefit in terms of scientific return and operational costs. We present the operational scheduling tool designed for CARMENES, which is based on two algorithms combining a global and a local search: Genetic Algorithms and Hill Climbing astronomy-based heuristics, respectively. The algorithm explores a large number of potential solutions from the vast search space and is able to identify the most efficient ones. A planning solution is considered efficient when it optimizes the objectives defined, which, in our case, are related to the reduction of the time that the telescope is not in use and the maximization of the scientific return, measured in terms of the time coverage of each target in the survey. We present the results obtained using different test cases.
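
    The two-stage search described above can be illustrated with a small sketch that pairs a genetic algorithm over target orderings with a swap-based hill climb for local refinement. This is not the CARMENES scheduler itself: the idle-time objective, target durations, and visibility windows below are invented for the example.

```python
import random

# Hypothetical targets: (name, duration_h, visibility_window_end_h)
TARGETS = [("T%d" % i, random.uniform(0.3, 1.0), random.uniform(2, 8)) for i in range(10)]

def idle_time(order):
    """Toy objective: observing time lost to targets that no longer fit their window."""
    t, lost = 0.0, 0.0
    for name, dur, win_end in order:
        if t + dur <= win_end:
            t += dur
        else:
            lost += dur          # target dropped; the telescope idles instead
    return lost

def hill_climb(order, iters=200):
    """Local search: try random pairwise swaps and keep any improvement."""
    best = list(order)
    for _ in range(iters):
        i, j = random.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        if idle_time(cand) < idle_time(best):
            best = cand
    return best

def genetic_schedule(pop_size=30, generations=40):
    """Global search over orderings, refined locally at the end."""
    pop = [random.sample(TARGETS, len(TARGETS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=idle_time)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGETS))
            child = a[:cut] + [t for t in b if t not in a[:cut]]  # order crossover
            children.append(child)
        pop = parents + children
    return hill_climb(min(pop, key=idle_time))

if __name__ == "__main__":
    random.seed(1)
    best = genetic_schedule()
    print([name for name, _, _ in best], "idle:", round(idle_time(best), 2))
```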

  18. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and their surrounding entities. However, none of the existing building damage simulation systems achieves the level of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, and a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
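
    The scheduling idea mentioned above, clusters of damage events that run in sequence while the events inside each cluster run in parallel, can be sketched in a few lines. The cluster structure and event names are purely illustrative and are not taken from the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_event(name, seconds):
    """Stand-in for one damage event (e.g. rubble flyout hitting a vehicle)."""
    time.sleep(seconds)
    return name

def run_clusters(clusters):
    """Run clusters in order; events inside a cluster run in parallel."""
    results = []
    for cluster in clusters:                      # sequential between clusters
        with ThreadPoolExecutor() as pool:        # parallel within a cluster
            futures = [pool.submit(run_event, n, s) for n, s in cluster]
            results.append([f.result() for f in futures])
    return results

if __name__ == "__main__":
    clusters = [
        [("blast-wave", 0.05)],                           # must finish first
        [("rubble-flyout", 0.05), ("dust-plume", 0.05)],  # then these together
        [("secondary-impacts", 0.05)],
    ]
    print(run_clusters(clusters))
```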

  19. Causes and consequences of sleepiness among college students

    PubMed Central

    Hershner, Shelley D; Chervin, Ronald D

    2014-01-01

    Daytime sleepiness, sleep deprivation, and irregular sleep schedules are highly prevalent among college students, as 50% report daytime sleepiness and 70% obtain insufficient sleep. The consequences of sleep deprivation and daytime sleepiness are especially problematic for college students and can result in lower grade point averages, increased risk of academic failure, compromised learning, impaired mood, and increased risk of motor vehicle accidents. This article reviews the current prevalence of sleepiness and sleep deprivation among college students, contributing factors for sleep deprivation, and the role of sleep in learning and memory. The impact of sleep and sleep disorders on academics, grade point average, driving, and mood will be examined. Most importantly, effective and viable interventions to decrease sleepiness and sleep deprivation through sleep education classes, online programs, encouragement of naps, and adjustment of class time will be reviewed. This paper highlights that addressing sleep issues, which are often not considered a risk factor for depression and academic failure, should be encouraged. Promotion of university and college policies and class schedules that encourage healthy and adequate sleep could have a significant impact on the sleep, learning, and health of college students. Future research to investigate effective and feasible interventions, which disseminate both sleep knowledge and encouragement of healthy sleep habits to college students in a time- and cost-effective manner, is a priority. PMID:25018659

  20. The DEEP-South: Preliminary Photometric Results from the KMTNet-CTIO

    NASA Astrophysics Data System (ADS)

    Kim, Myung-Jin; Moon, Hong-Kyu; Choi, Young-Jun; Yim, Hong-Suh; Bae, Youngho; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South) will not only conduct characterization of targeted asteroids and a blind survey at the sweet spots, but also utilize data mining of small Solar System bodies in the whole KMTNet archive. Round-the-clock observation with the KMTNet is optimized for spin characterization of tumbling and slow-rotating bodies, as it facilitates debiasing of previously reported lightcurve observations. It is also most suitable for detection and rapid follow-up of Atens and Atiras, the “difficult objects” that are being discovered at lower solar elongations. For the sake of efficiency, we implemented an observation scheduler, SMART (Scheduler for Measuring Asteroids RoTation), designed to conduct follow-up observations in a timely manner. It automatically updates catalogs, generates ephemerides, checks priorities, prepares target lists, and sends a suite of scripts to site operators. We also developed photometric analysis software called ASAP (Asteroid Spin Analysis Package) that helps find a set of appropriate comparison stars in an image and derive spin parameters and reconstruct lightcurves simultaneously in a semi-automatic manner. In this presentation, we will show our preliminary results of time series analyses of a number of km-sized Potentially Hazardous Asteroids (PHAs), 5189 (1990 UQ), 12923 (1999 GK4), 53426 (1999 SL5), 136614 (1993 VA6), 385186 (1994 AW1), and 2000 OH from test runs in February and March 2015 at the KMTNet-CTIO.
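
    A hypothetical sketch of the kind of nightly step SMART is described as automating (check priorities and emit a target list for the site operators) is given below; the target fields, the priority rule, and the example entries are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    priority: int             # 1 = highest (e.g. a PHA needing rapid follow-up)
    last_observed_days: float
    observable_tonight: bool

def build_target_list(catalog, max_targets=5):
    """Pick observable targets, highest priority first, stalest first within a tier."""
    candidates = [t for t in catalog if t.observable_tonight]
    candidates.sort(key=lambda t: (t.priority, -t.last_observed_days))
    return candidates[:max_targets]

if __name__ == "__main__":
    catalog = [
        Target("1999 GK4", 1, 12.0, True),
        Target("1990 UQ", 2, 3.0, True),
        Target("1994 AW1", 1, 1.0, True),
        Target("2000 OH", 3, 30.0, False),
    ]
    for t in build_target_list(catalog):
        print(t.name, "priority", t.priority)
```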

  1. A primer on the hormone-free interval for combined oral contraceptives.

    PubMed

    Hauck, Brian A; Brown, Vivien

    2015-01-01

    The dosing, schedules, and other aspects of combined oral contraceptive (COC) design have evolved in recent years to address a variety of issues including short- and long-term safety, bleeding profiles, and contraceptive efficacy. In particular, several newer formulations have altered the length of the hormone-free interval (HFI), in order to minimize two key undesired effects that occur during this time: hormone-withdrawal-associated symptoms (HWaS) and follicular development. This primer reviews our current understanding of the key biological processes that occur during the HFI and how this understanding has led to changes in the dosing and schedule of newer COC formulations. In brief, HWaS are common, underappreciated, and a likely contributor to COC discontinuation; because of this, shortening the HFI and/or supplementing with estrogen during the progestin-free interval may provide relief from these symptoms and improve adherence. A short HFI (with or without estrogen supplementation) may also help maintain effective follicular suppression and contraceptive efficacy, even when the overall dose of estrogen throughout the cycle is low. Taken together, the available data about HWaS and follicular activity during the HFI support the rationale for recent COC designs that use a low estrogen dose and a short HFI. The availability of a variety of COC regimens gives physicians a range of choices when selecting the most appropriate COC for each woman's particular priorities and needs.

  2. Failure to produce taste-aversion learning in rats exposed to static electric fields and air ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creim, J.A.; Lovely, R.H.; Weigel, R.J.

    1995-12-01

    Taste-aversion (TA) learning was measured to determine whether exposure to high-voltage direct current (HVdc) static electric fields can produce TA learning in male Long Evans rats. Fifty-six rats were randomly distributed into four groups of 14 rats each. All rats were placed on a 20 min/day drinking schedule for 12 consecutive days prior to receiving five conditioning trials. During the conditioning trials, access to 0.1% sodium saccharin-flavored water was given for 20 min, followed 30 min later by one of four treatments. Two groups of 14 rats each were individually exposed to static electric fields and air ions, one group to +75 kV/m (+2 × 10^5 air ions/cm^3) and the other group to -75 kV/m (-2 × 10^5 air ions/cm^3). Two other groups of 14 rats each served as sham-exposed controls, with the following variation in one of the sham-exposed groups: this group was subdivided into two subsets of seven rats each, so that a positive control group could be included to validate the experimental design. The positive control group (n = 7) was injected with cyclophosphamide 25 mg/kg, i.p., 30 min after access to saccharin-flavored water on conditioning days, whereas the other subset of seven rats was similarly injected with an equivalent volume of saline. Access to saccharin-flavored water on conditioning days was followed by the treatments described above and was alternated daily with water recovery sessions in which the rats received access to water for 20 min in the home cage without further treatment. Following the last water-recovery session, a 20 min, two-bottle preference test (between water and saccharin-flavored water) was administered to each group. The positive control group did show TA learning, thus validating the experimental protocol.

  3. Application of municipal biosolids to dry-land wheat fields - A monitoring program near Deer Trail, Colorado (USA). A presentation for an international conference: "The Future of Agriculture: Science, Stewardship, and Sustainability", August 7-9, 2006, Sacramento, CA

    USGS Publications Warehouse

    Crock, James G.; Smith, David B.; Yager, Tracy J.B.

    2006-01-01

    Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of non-irrigated farmland and rangeland near Deer Trail, Colorado. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site. In 1999, the USGS began a more comprehensive study of the entire site to address stakeholder concerns about the chemical effects of biosolids applications. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study included biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and stream bed sediment. Streams at the site are dry most of the year, so samples of stream bed sediment deposited after rain were used to indicate surface-water effects. This presentation will only address biosolids, soil, and crops. More information about these and the other monitoring components is presented in the literature (e.g., Yager and others, 2004a, b, c, d) and at the USGS Web site for the Deer Trail area studies at http://co.water.usgs.gov/projects/CO406/CO406.html. Priority parameters identified by the stakeholders for all monitoring components included the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity, regulated by Colorado for biosolids to be used as an agricultural soil amendment. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Where sufficient samples could be collected, statistical methods were used to evaluate effects. Rigorous quality assurance was included in all aspects of the study. The roles of hydrology and geology also were considered in the design, data collection, and interpretation phases of the study. Study results indicate that the chemistry of the biosolids from the Denver plant was consistent during 1999-2005, and total concentrations of regulated trace elements were consistently lower than the regulatory limits. Plutonium isotopes were not detected in the biosolids. Leach tests using deionized water to simulate natural precipitation indicate arsenic, molybdenum, and nickel were the most soluble priority parameters in the biosolids. Study results show no significant difference in concentrations of priority parameters between biosolids-applied soils and unamended soils where no biosolids were applied. However, biosolids were applied only twice during 1999-2003. The next soil sampling is not scheduled until 2010. To date, concentrations of most of the priority parameters were not much greater in the biosolids than in natural soil from the sites. Therefore, many more biosolids applications would need to occur before biosolids effects on the soil priority constituents can be quantified. Leach tests using deionized water to simulate precipitation indicate that molybdenum and selenium were the priority parameters that were most soluble in both biosolids-applied soil and natural or unamended soil. Study results do not indicate significant differences in concentrations of priority parameters between crops grown in biosolids-applied areas and crops grown where no biosolids were applied. However, crops were grown only twice during 1999-2003, so only two crop samples could be collected. The wheat-grain elemental data collected during 1999-2003 for both biosolids-applied areas and unamended areas are similar.

  4. The British Columbia Nephrologists' Access Study (BCNAS) - a prospective, health services interventional study to develop waiting time benchmarks and reduce wait times for out-patient nephrology consultations.

    PubMed

    Schachter, Michael E; Romann, Alexandra; Djurdev, Ognjenka; Levin, Adeera; Beaulieu, Monica

    2013-08-29

    Early referral and management of high-risk chronic kidney disease may prevent or delay the need for dialysis. Automatic eGFR reporting has increased demand for out-patient nephrology consultations and, in some cases, prolonged queues. In Canada, a national task force suggested the development of waiting time targets, which has not been done for nephrology. We sought to describe waiting time for outpatient nephrology consultations in British Columbia (BC). Data collection occurred in 2 phases: 1) Baseline Description (Jan 18-28, 2010) and 2) Post Waiting Time Benchmark-Introduction (Jan 16-27, 2012). Waiting time was defined as the interval from receipt of referral letters to assessment. Using a modified Delphi process, Nephrologists and Family Physicians (FP) developed waiting time targets for commonly referred conditions through meetings and surveys. Rules were developed to weigh in nephrologists', FPs', and patients' perspectives in order to generate waiting time benchmarks. Targets consider comorbidities, eGFR, BP and albuminuria. Referred conditions were assigned a priority score between 1 and 4. BC nephrologists were encouraged to centrally triage referrals to see the first available nephrologist. Waiting time benchmarks were simultaneously introduced to guide patient scheduling. A post-intervention waiting time evaluation was then repeated. In 2010 and 2012, 43/52 (83%) and 46/57 (81%) of BC nephrologists participated. Waiting time decreased from 98 (IQR 44-157) to 64 (IQR 21-120) days from 2010 to 2012 (p < 0.001), despite no change in referral eGFR, demographics, or number of office hours per week. Waiting time improved most for high-priority patients. An integrated, Provincial initiative to measure wait times, develop waiting benchmarks, and engage physicians in active waiting time management was associated with improved access to nephrologists in BC. Improvements in waiting time were most marked for the highest-priority patients, which suggests that benchmarks had an influence on triaging behavior. Further research is needed to determine whether this effect is sustainable.
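
    The waiting-time logic described above, the interval from receipt of the referral to assessment, compared against a benchmark that depends on the assigned priority score, can be sketched as follows; the benchmark values are invented placeholders, not the BC targets.

```python
from datetime import date

# Hypothetical benchmarks (days) per priority score 1-4; not the published BC targets.
BENCHMARK_DAYS = {1: 14, 2: 30, 3: 60, 4: 120}

def waiting_time(referral_received: date, assessed: date) -> int:
    """Waiting time as defined in the study: referral receipt to assessment."""
    return (assessed - referral_received).days

def within_benchmark(priority: int, wait_days: int) -> bool:
    return wait_days <= BENCHMARK_DAYS[priority]

if __name__ == "__main__":
    wait = waiting_time(date(2012, 1, 16), date(2012, 3, 20))
    print(wait, "days; meets priority-2 benchmark:", within_benchmark(2, wait))
```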

  5. Getting right to the point: identifying Australian outpatients' priorities and preferences for patient-centred quality improvement in chronic disease care.

    PubMed

    Fradgley, Elizabeth A; Paul, Christine L; Bryant, Jamie; Oldmeadow, Christopher

    2016-09-01

    To identify specific actions for patient-centred quality improvement in chronic disease outpatient settings, this study identified patients' general and specific preferences among a comprehensive suite of initiatives for change. A cross-sectional survey was conducted in three hospital-based clinics specializing in oncology, neurology and cardiology care located in New South Wales, Australia. Adult English-speaking outpatients completed the touch-screen Consumer Preferences Survey in waiting rooms or treatment areas. Participants selected up to 23 general initiatives that would improve their experience. Using adaptive branching, participants could select an additional 110 detailed initiatives and complete a relative prioritization exercise. A total of 541 individuals completed the survey (71.1% consent, 73.1% completion). Commonly selected general initiatives, presented in order of decreasing priority (along with sample proportion), included: improved parking (60.3%), up-to-date information provision (15.0%), ease of clinic contact (12.9%), access to information at home (12.8%), convenient appointment scheduling (14.2%), reduced wait-times (19.8%) and information on medical emergencies (11.1%). To address these general initiatives, 40 detailed initiatives were selected by respondents. Initiatives targeting service accessibility and information provision, such as parking and up-to-date information on patient prognoses and progress, were commonly selected and perceived to be of relatively greater priority. Specific preferences included the need for clinics to provide patient-designated parking in close proximity to the clinic, information on treatment progress and test results (potentially in the form of designated brief appointments or via telehealth) and comprehensive and trustworthy lists of information sources to access at home. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Science Planning for the Solar Probe Plus NASA Mission

    NASA Astrophysics Data System (ADS)

    Kusterer, M. B.; Fox, N. J.; Turner, F. S.; Vandegriff, J. D.

    2015-12-01

    With a planned launch in 2018, there are a number of challenges for the Science Planning Team (SPT) of the Solar Probe Plus mission. The geometry of the celestial bodies and the spacecraft during some of the Solar Probe Plus mission orbits causes limited uplink and downlink opportunities. The payload teams must manage the volume of data that they write to the spacecraft solid-state recorders (SSR) for their individual instruments for downlink to the ground. The aim is to write the instrument data to the spacecraft SSR before a set of downlink opportunities large enough to get the data to the ground, and before the start of another data collection cycle. The SPT also intends to coordinate observations with other spacecraft and ground-based systems. To add further complexity, two of the spacecraft payloads have the capability to write large volumes of data to their internal payload SSR while sending a smaller "survey" portion of the data to the spacecraft SSR for downlink. The instrument scientists would then view the survey data on the ground, determine the most interesting data on their payload SSR, and send commands to transfer that data from their payload SSR to the spacecraft SSR for downlink. The timing required for downlink and analysis of the survey data, identifying uplink opportunities for commanding data transfers, and downlink opportunities big enough for the selected data within the data collection period is critical. To solve these challenges, the Solar Probe Plus Science Working Group has designed an orbit-type-optimized data file priority downlink scheme to downlink high priority survey data quickly. This file priority scheme would maximize the reaction time that the payload teams have to perform the survey-and-selected-data method on orbits where the downlink and uplink availability will support using this method. An interactive display and analysis science planning tool is being designed for the SPT to use as an aid to planning. The tool will integrate the data file priority downlink scheme, payload data volume allocations, spacecraft ephemeris, attitude, downlink and uplink schedules, spacecraft and payload activities, and ephemerides of other spacecraft. A prototype of the tool is in development using notional inputs obtained from the spacecraft engineering teams.
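
    A hedged sketch of a file-priority downlink pass of the kind described, survey files sent first and files packed greedily into a limited contact volume, is shown below; the file names, sizes, priorities, and volume budget are illustrative assumptions rather than mission parameters.

```python
def plan_downlink(files, pass_volume_mb):
    """Order SSR files by priority (lower number = sent first), then pack
    greedily into the available downlink volume for this contact."""
    queued, used = [], 0.0
    for f in sorted(files, key=lambda f: (f["priority"], -f["age_hr"])):
        if used + f["size_mb"] <= pass_volume_mb:
            queued.append(f["name"])
            used += f["size_mb"]
    return queued, used

if __name__ == "__main__":
    files = [
        {"name": "survey_A", "priority": 0, "size_mb": 40, "age_hr": 6},
        {"name": "survey_B", "priority": 0, "size_mb": 35, "age_hr": 2},
        {"name": "selected_burst", "priority": 1, "size_mb": 300, "age_hr": 12},
        {"name": "housekeeping", "priority": 2, "size_mb": 10, "age_hr": 1},
    ]
    print(plan_downlink(files, pass_volume_mb=120.0))
```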

  7. Gravity Probe B: Testing Einstein with Gyroscopes

    NASA Technical Reports Server (NTRS)

    Geveden, Rex D.; May, Todd

    2003-01-01

    Some 40 years in the making, NASA's historic Gravity Probe B (GP-B) mission is scheduled to launch aboard a Delta II in 2003. GP-B will test two extraordinary predictions from Einstein's General Relativity: geodetic precession and the Lense-Thirring effect (frame-dragging). Employing tiny, ultra-precise gyroscopes, GP-B features a measurement accuracy of 0.5 milli-arc-seconds per year. The extraordinary measurement precision is made possible by a host of breakthrough technologies, including electro-statically suspended, super-conducting quartz gyroscopes; virtual elimination of magnetic flux; a solid quartz star tracking telescope; helium microthrusters for drag-free control of the spacecraft; and a 2400 liter superfluid helium dewar. This paper will provide an overview of the science, key technologies, flight hardware, integration and test, and flight operations of the GP-B space vehicle. It will also examine some of the technical management challenges of a large-scale, technology-driven, Principal Investigator-led mission.

  8. Gravity Probe B: Testing Einstein with Gyroscopes

    NASA Technical Reports Server (NTRS)

    Geveden, Rex D.; May, Todd

    2003-01-01

    Some 40 years in the making, NASA's historic Gravity Probe B (GP-B) mission is scheduled to launch aboard a Delta II in 2003. GP-B will test two extraordinary predictions from Einstein's General Relativity: geodetic precession and the Lense-Thirring effect (frame-dragging). Employing tiny, ultra-precise gyroscopes, GP-B features a measurement accuracy of 0.5 milli-arc-seconds per year. The extraordinary measurement precision is made possible by a host of breakthrough technologies, including electro-statically suspended, super-conducting quartz gyroscopes; virtual elimination of magnetic flux; a solid quartz star-tracking telescope; helium microthrusters for drag-free control of the spacecraft; and a 2400 liter superfluid helium dewar. This paper will provide an overview of the science, key technologies, flight hardware, integration and test, and flight operations of the GP-B space vehicle. It will also examine some of the technical management challenges of a large-scale, technology-driven, Principal Investigator-led mission.

  9. TOGA - A GNSS Reflections Instrument for Remote Sensing Using Beamforming

    NASA Technical Reports Server (NTRS)

    Esterhuizen, S.; Meehan, T. K.; Robison, D.

    2009-01-01

    Remotely sensing the Earth's surface using GNSS signals as bi-static radar sources is one of the most challenging applications for radiometric instrument design. As part of NASA's Instrument Incubator Program, our group at JPL has built a prototype instrument, TOGA (Time-shifted, Orthometric, GNSS Array), to address a variety of GNSS science needs. Observing GNSS reflections is a major focus of the design/development effort. The TOGA design features a steerable beam antenna array which can form a high-gain antenna pattern in multiple directions simultaneously. Multiple FPGAs provide flexible digital signal processing logic to process both GPS and Galileo reflections. A Linux OS based science processor serves as experiment scheduler and data post-processor. This paper outlines the TOGA design approach as well as preliminary results of reflection data collected from test flights over the Pacific Ocean. These reflection data demonstrate observation of the GPS L1/L2C/L5 signals.

  10. KSC-2014-1985

    NASA Image and Video Library

    2014-03-20

    VANDENBERG AIR FORCE BASE, Calif. – The Delta first-stage booster for NASA's Orbiting Carbon Observatory-2 mission, or OCO-2, passes a static display of a U.S. Air Force Minuteman III intercontinental ballistic missile, at left, on its move from the Building 836 hangar to the Horizontal Processing Facility at Space Launch Complex 2 on Vandenberg Air Force Base in California. OCO-2 is scheduled to launch aboard a United Launch Alliance Delta II rocket on July 1, 2014. The observatory will collect precise global measurements of carbon dioxide in the Earth's atmosphere and provide scientists with a better idea of the chemical compound's impacts on climate change. Scientists will analyze this data to improve our understanding of the natural processes and human activities that regulate the abundance and distribution of this important atmospheric gas. To learn more about OCO-2, visit http://oco.jpl.nasa.gov. Photo credit: NASA/D. Liberotti, 30th Space Wing, VAFB

  11. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
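
    The two-part model, stochastic demand plus a resource constraint, can be illustrated with a toy discrete event simulation of a shared server pool; the arrival rate, service-time distribution, and server counts below are placeholders, not values from the study.

```python
import heapq
import random

def simulate(n_requests=2000, servers=4, arrival_rate=2.0, mean_service=1.5):
    """Toy M/M/c-style queue: exponential inter-arrivals and service times served
    by a fixed pool of servers; returns the mean time a request waits in queue."""
    random.seed(42)
    free_at = [0.0] * servers          # min-heap: time at which each server frees up
    heapq.heapify(free_at)
    t, waits = 0.0, []
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)      # arrival of the next request
        start = max(t, free_at[0])                 # wait for the soonest-free server
        waits.append(start - t)
        heapq.heapreplace(free_at, start + random.expovariate(1.0 / mean_service))
    return sum(waits) / len(waits)

if __name__ == "__main__":
    for n_servers in (2, 4, 8):                    # crude capacity-planning sweep
        print(n_servers, "servers -> mean wait", round(simulate(servers=n_servers), 3))
```

    Sweeping the server count in this way is the kind of what-if question the full model would answer with enterprise-specific demand distributions and workflow definitions.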

  12. Design concepts and performance of NASA X-band transponder (DST) for deep space spacecraft applications

    NASA Technical Reports Server (NTRS)

    Mysoor, Narayan R.; Perret, Jonathan D.; Kermode, Arthur W.

    1991-01-01

    The design concepts and measured performance characteristics are described for an X band (7162 MHz/8415 MHz) breadboard deep space transponder (DST) for future spacecraft applications, with the first use scheduled for the Comet Rendezvous Asteroid Flyby (CRAF) and Cassini missions in 1995 and 1996, respectively. The DST consists of a double conversion, superheterodyne, automatic phase tracking receiver, and an X band (8415 MHz) exciter to drive redundant downlink power amplifiers. The receiver acquires and coherently phase tracks the modulated or unmodulated X band (7162 MHz) uplink carrier signal. The exciter phase modulates the X band (8415 MHz) downlink signal with composite telemetry and ranging signals. The measured receiver tracking threshold, automatic gain control, static phase error, and phase jitter characteristics of the breadboard DST are in good agreement with the expected performance. The measured results show a receiver tracking threshold of -158 dBm and a dynamic signal range of 88 dB.

  13. An X-band spacecraft transponder for deep space applications - Design concepts and breadboard performance

    NASA Technical Reports Server (NTRS)

    Mysoor, Narayan R.; Perret, Jonathan D.; Kermode, Arthur W.

    1992-01-01

    The design concepts and measured performance characteristics are summarized for an X band (7162 MHz/8415 MHz) breadboard deep space transponder (DST) for future spacecraft applications, with the first use scheduled for the Comet Rendezvous Asteroid Flyby (CRAF) and Cassini missions in 1995 and 1996, respectively. The DST consists of a double conversion, superheterodyne, automatic phase tracking receiver, and an X band (8415 MHz) exciter to drive redundant downlink power amplifiers. The receiver acquires and coherently phase tracks the modulated or unmodulated X band (7162 MHz) uplink carrier signal. The exciter phase modulates the X band (8415 MHz) downlink signal with composite telemetry and ranging signals. The measured receiver tracking threshold, automatic gain control, static phase error, and phase jitter characteristics of the breadboard DST are in good agreement with the expected performance. The measured results show a receiver tracking threshold of -158 dBm and a dynamic signal range of 88 dB.

  14. Tungsten wire-nickel base alloy composite development

    NASA Technical Reports Server (NTRS)

    Brentnall, W. D.; Moracz, D. J.

    1976-01-01

    Further development and evaluation of refractory wire reinforced nickel-base alloy composites is described. Emphasis was placed on evaluating thermal fatigue resistance as a function of matrix alloy composition, fabrication variables, and reinforcement level and distribution. Tests for up to 1,000 cycles were performed and the best system identified in this current work was 50 v/o W/NiCrAlY. Improved resistance to thermal fatigue damage would be anticipated for specimens fabricated via optimized processing schedules. Other properties investigated included 1,093 C (2,000 F) stress rupture strength, impact resistance, and static air oxidation. A composite consisting of 30 v/o W-Hf-C alloy fibers in a NiCrAlY alloy matrix was shown to have a 100-hour stress rupture strength at 1,093 C (2,000 F) of 365 MN/m² (53 ksi), or a specific strength advantage of about 3:1 over typical D.S. eutectics.

  15. The Rotational and Gravitational Effect of Earthquakes

    NASA Technical Reports Server (NTRS)

    Gross, Richard

    2000-01-01

    The static displacement field generated by an earthquake has the effect of rearranging the Earth's mass distribution and will consequently cause the Earth's rotation and gravitational field to change. Although the coseismic effect of earthquakes on the Earth's rotation and gravitational field has been modeled in the past, no unambiguous observations of this effect have yet been made. However, the Gravity Recovery And Climate Experiment (GRACE) satellite, which is scheduled to be launched in 2001, will measure time variations of the Earth's gravitational field to high degree and order with unprecedented accuracy. In this presentation, the modeled coseismic effect of earthquakes upon the Earth's gravitational field to degree and order 100 will be computed and compared to the expected accuracy of the GRACE measurements. In addition, the modeled second degree changes, corresponding to changes in the Earth's rotation, will be compared to length-of-day and polar motion excitation observations.

  16. Time as a Key Topic in Health Professionals’ Perceptions of Clinical Handovers

    PubMed Central

    Watson, Bernadette M.; Jones, Liz; Cretchley, Julia

    2014-01-01

    Clinical handovers are an essential part of the daily care and treatment of hospital patients. We invoked a language and social psychology lens to investigate how different health professional groups discussed the communication problems and strengths they experienced in handovers. We conducted in-depth interviews with three different health professional groups within a large metropolitan hospital. We used Leximancer text analytics software as a tool to analyze the data. Results showed that time was of concern to all groups in both similar and diverse ways. All professionals discussed time management, time pressures, and the difficulties of coordinating different handovers. Each professional group had its own unique perceptions and priorities about handovers. Our findings indicated that health professionals understood what was required for handover improvement but did not have the extra capacity to alter their current environment. Hospital management, with clinicians, need to implement handover schedule processes that prioritize interprofessional representation. PMID:28462291

  17. Managing competing elastic Grid and Cloud scientific computing applications using OpenNebula

    NASA Astrophysics Data System (ADS)

    Bagnasco, S.; Berzano, D.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, other smaller applications can benefit from automatic elasticity; the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, will be described and the very first operational experiences with a small number of strategies for timely allocation and release of resources will be discussed.
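
    One of the simple strategies for timely allocation and release of resources could look like the threshold rule sketched below; the queue metrics, thresholds, and scale-down behaviour are assumptions for illustration and are not taken from the OpenNebula setup described in the paper.

```python
def scaling_decision(pending_jobs, running_vms, min_vms=0, max_vms=20,
                     scale_up_threshold=5, scale_down_threshold=0):
    """Return how many VMs to add (positive) or release (negative) for a small
    elastic application sharing a saturated cloud with a large static tenant."""
    if pending_jobs >= scale_up_threshold and running_vms < max_vms:
        return min(pending_jobs // scale_up_threshold, max_vms - running_vms)
    if pending_jobs <= scale_down_threshold and running_vms > min_vms:
        return -1                      # release one idle VM per evaluation cycle
    return 0

if __name__ == "__main__":
    for pending, running in [(12, 3), (0, 3), (2, 3)]:
        print(pending, "pending,", running, "VMs ->", scaling_decision(pending, running))
```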

  18. Commercial ELV services and the National Aeronautics and Space Administration - Concord or discord?

    NASA Technical Reports Server (NTRS)

    Frankle, Edward A.

    1988-01-01

    In implementation of the U.S. policy to foster and encourage the commercial expendable launch vehicle (ELV) industry, tensions have developed between the industry and U.S. Government agencies in two distinct areas: industry use of government facilities and government purchase of commercial ELV services. The reasons for the tensions and discrete legal problems for each area are identified and discussed. Specifically, in the use of government facilities area, issues of insurance and indemnification for third-party liability and government property, concerns over priority and scheduling, and dispute-resolution procedures are discussed. In the area of government purchase of ELV launch services, a comparison is made between a launch service purchase and prior procurement practice. In all areas, the conclusion is reached that while problems still exist, they generally are understood and great progress has been made toward their resolution.

  19. The ODDI Odyssey: Developing and Integrating Operations for the International Space Station

    NASA Astrophysics Data System (ADS)

    Deal, Ryan W.

    2002-01-01

    International Space Station (ISS) comprise the deliverable products (OP-01 Reports) of the Boeing Operations Data Development and Integration (ODDI) Integrated Product Team (IPT) to the NASA customer. The ODDI IPT's mission is to exceed the customer's expectations by providing high-quality data and sound techniques for assembling and operating the ISS. strategies in order to streamline the generation of operations products that the Mission Operations Directorate (MOD) utilizes for its crew and ground operations procedures development. Just as for other business practices, operations is a transformation process, converting inputs (resources) into outputs (products) based on a strategy that works best for the established competitive priorities of the operations organization. product reviews, and supporting other ISS operations duties (such as Mission Evaluation Room support) must be balanced with meeting schedules for delivery of the ODDI IPT's OP-01 Reports in accordance with the ISS assembly sequence timeline.

  20. Transmutation of Radioactive Nuclear Waste — Present Status and Requirement for the Problem-Oriented Nuclear Database: Approach to Scheduling the Experiments (Reactor, Target, Blanket)

    NASA Astrophysics Data System (ADS)

    Artisyuk, V.; Ignatyuk, A.; Korovin, Yu.; Lopatkin, A.; Matveenko, I.; Stankovskiy, A.; Titarenko, Yu.

    2005-05-01

    Transmutation of nuclear wastes (Minor Actinides and Long-Lived Fission Products) remains an important option to reduce the burden of high-level waste on final waste disposal in deep geological structures. Accelerator-Driven Systems (ADS) are considered as possible candidates to perform transmutation due to their subcritical operation mode that eliminates some of the serious safety penalties unavoidable in critical reactors. The specific requirements for nuclear data necessary for ADS transmutation analysis are the main subject of ISTC Project #2578, which started in 2004 to identify priority areas for future research. The present paper gives a summary of the ongoing project, stressing the importance of nuclear data for blanket performance (reactivity behavior with associated safety characteristics) and uncertainties that affect characteristics of the neutron-producing target.

  1. Superconducting magnet development for tokamaks and mirrors: a technical assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laverick, C.; Jacobs, R. B.; Boom, R. W.

    1977-11-01

    The role of superconducting magnets in Magnetic Fusion Energy Research and Development is assessed from a consideration of program plans and schedules, the present status of the programs, and the research and development suggestions arising from recent studies and workshops. A principal conclusion is that the large superconducting magnet systems needed for commercial magnetic fusion reactors can be constructed. However, such magnets, working under severe conditions with increasingly stringent reliability, safety, and cost restrictions, can never be built unless experience is first gained in a number of important installations designed to prove physics and technology steps on the way to commercial power demonstration. The immediate problem is to design a technology program in the absence of definite device needs and specifications, giving a priority weighting to the multiplicity of good, high quality development program suggestions when all proposals cannot be supported.

  2. STS-107 Mission Specialist Kalpana Chawla at SPACEHAB during training

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. -- STS-107 Mission Specialist Kalpana Chawla scans paperwork for equipment at SPACEHAB, Cape Canaveral, Fla., during crew training. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  3. STS-108 Mission Specialist Linda A. Godwin arrives at KSC

    NASA Technical Reports Server (NTRS)

    2001-01-01

    STS-108 Mission Specialist Linda A. Godwin arrives at KSC KSC-01PD-1710 KENNEDY SPACE CENTER, Fla. -- STS-108 Mission Specialist Linda A. Godwin pauses after her arrival at KSC. She and the rest of the crew will be preparing for launch Nov. 29 on Space Shuttle Endeavour. Liftoff is scheduled for 7:41 p.m. EST. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews, bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello, and completion of spacewalk and robotics tasks. Mission Specialists Daniel M. Tani and Godwin will take part in the spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Other crew members are Commander Dominic L. Gorie and Pilot Mark E. Kelly.

  4. STS-108 Mission Specialist Daniel M. Tani arrives at KSC

    NASA Technical Reports Server (NTRS)

    2001-01-01

    STS-108 Mission Specialist Daniel M. Tani arrives at KSC KSC-01PD-1707 KENNEDY SPACE CENTER, Fla. -- STS-108 Mission Specialist Daniel M. Tani arrives at KSC in a T-38 jet trainer. He and the rest of the crew will be preparing for launch Nov. 29 on Space Shuttle Endeavour. Liftoff is scheduled for 7:41 p.m. EST. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews, bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello, and completion of spacewalk and robotics tasks. Mission Specialists Linda A. Godwin and Tani will take part in the spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Other crew members are Commander Dominic L. Gorie and Pilot Mark E. Kelly.

  5. Consultation sequencing of a hospital with multiple service points using genetic programming

    NASA Astrophysics Data System (ADS)

    Morikawa, Katsumi; Takahashi, Katsuhiko; Nagasawa, Keisuke

    2018-07-01

    A hospital with one consultation room operated by a physician and several examination rooms is investigated. Scheduled patients and walk-ins arrive at the hospital; each patient goes to the consultation room first, and some of them visit other service points before consulting the physician again. The objective function consists of the sum of three weighted average waiting times. The paper focuses on the problem of sequencing patients for consultation. To alleviate the stress of waiting, the consultation sequence is displayed. A dispatching rule is used to decide the sequence, and the best rules are explored by genetic programming (GP). The simulation experiments indicate that the rules produced by GP can be reduced to simple permutations of queues, and that the best permutation depends on the weights used in the objective function. This implies that a balanced allocation of waiting times can be achieved by ordering the priority among the three queues.
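
    The reduced form of the dispatching rule, a fixed permutation of the queues evaluated against a weighted sum of average waiting times, can be sketched as follows; the queue names, weights, and waiting-time figures are invented for the example and are not the paper's data.

```python
from itertools import permutations

# Hypothetical mean waits (minutes): queue -> {rank it is served at: resulting mean wait}
WAITS = {
    "scheduled":  {0: 5.0, 1: 12.0, 2: 25.0},
    "walk-in":    {0: 8.0, 1: 18.0, 2: 35.0},
    "re-consult": {0: 4.0, 1: 10.0, 2: 22.0},
}
WEIGHTS = {"scheduled": 1.0, "walk-in": 0.5, "re-consult": 2.0}

def objective(order):
    """Weighted sum of the three average waiting times for a queue permutation."""
    return sum(WEIGHTS[q] * WAITS[q][rank] for rank, q in enumerate(order))

if __name__ == "__main__":
    best = min(permutations(WAITS), key=objective)
    print("best permutation:", best, "objective:", objective(best))
```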

  6. Case Report: Treatment of Widespread Nodular Post kala-Azar Dermal Leishmaniasis with Extended-Dose Liposomal Amphotericin B in Bangladesh: A Series of Four Cases.

    PubMed

    Basher, Ariful; Maruf, Shomik; Nath, Proggananda; Hasnain, Md Golam; Mukit, Muhammod Abdul; Anuwarul, Azim; Aktar, Fatima; Nath, Rupen; Hossain, Afm Akhtar; Milton, Abul Hasnat; Mondal, Dinesh; Mohammad Sumsuzzaman, Abul Khair; Rahman, Ridwanur; Faiz, M A

    2017-10-01

    Post kala-azar dermal leishmaniasis (PKDL) is a skin manifestation which usually appears after visceral leishmaniasis. It is now proven that PKDL patients serve as a reservoir for anthroponotic leishmanial transmission. Hence, to achieve the kala-azar elimination target set by the World Health Organization in the Indian Subcontinent, PKDL cases should be given priority. The goal of treatment for PKDL should be early reepithelialization and rapid cure, but unfortunately this has been difficult to achieve, especially for patients with severe lesions. Therefore, we describe here four cases of PKDL with widespread nodular and macular lesions that were treated with two cycles of liposomal amphotericin B (LAmB), each cycle of 20 mg/kg body weight divided into four equal doses (5 mg/kg each) administered every alternate day. This treatment schedule achieved 100% treatment success with minimal safety concerns.

  7. Animal models of binge drinking, current challenges to improve face validity.

    PubMed

    Jeanblanc, Jérôme; Rolland, Benjamin; Gierski, Fabien; Martinetti, Margaret P; Naassila, Mickael

    2018-05-05

    Binge drinking (BD), i.e., consuming a large amount of alcohol in a short period of time, is an increasing public health issue. Though no clear definition has been adopted worldwide, the speed of drinking seems to be a keystone of this behavior. Developing relevant animal models of BD is a priority for gaining a better characterization of the neurobiological and psychobiological mechanisms underlying this dangerous and harmful behavior. Until recently, preclinical research on BD has been conducted mostly using forced administration of alcohol, but more recent studies used scheduled access to alcohol to model more voluntary excessive intakes and to achieve signs of intoxication that mimic the human behavior. The main challenges for future research are discussed regarding the need for good face validity, construct validity, and predictive validity of animal models of BD. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Implementing real-time robotic systems using CHIMERA II

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1990-01-01

    A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.
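
    A tiny sketch of the two dispatching policies mentioned, deadline scheduling (interpreted here as earliest deadline first) and highest-priority-first, is given below; the task fields and the example task set are illustrative and are not CHIMERA II data structures.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float
    priority: int            # larger number = more important

def pick_next(ready, policy="edf"):
    """Choose the next task to run from the ready queue under the given policy."""
    if policy == "edf":                       # deadline scheduling
        return min(ready, key=lambda t: t.deadline_ms)
    if policy == "hpf":                       # highest-priority-first scheduling
        return max(ready, key=lambda t: t.priority)
    raise ValueError(policy)

if __name__ == "__main__":
    ready = [Task("servo-loop", 5.0, 10), Task("trajectory", 20.0, 5),
             Task("logging", 100.0, 1)]
    print("EDF ->", pick_next(ready, "edf").name)
    print("HPF ->", pick_next(ready, "hpf").name)
```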

  9. A Qualitative Study of Underutilization of the AIDS Drug Assistance Program

    PubMed Central

    Olson, Kristin M.; Godwin, Noah C.; Wilkins, Sara Anne; Mugavero, Michael J.; Moneyham, Linda D.; Slater, Larry Z.; Raper, James L.

    2014-01-01

    In our previous work, we demonstrated underutilization of the AIDS Drug Assistance Program (ADAP) at an HIV clinic in Alabama. In order to understand barriers and facilitators to utilization of ADAP, we conducted focus groups of ADAP enrollees. Focus groups were stratified by sex, race, and historical medication possession ratio as a measure of program utilization. We grouped factors according to the social-ecological model. We found that multiple levels of influence, including patient and clinic-related factors, influenced utilization of antiretroviral medications. Patients introduced issues that illustrated high-priority needs for ADAP policy and implementation, suggesting that in order to improve ADAP utilization, the following issues must be addressed: patient transportation, ADAP medication refill schedules and procedures, mailing of medications, and the ADAP recertification process. These findings can inform a strategy of approaches to improve ADAP utilization, which may have widespread implications for ADAP programs across the United States. PMID:24503498

  10. KSC01pd1775

    NASA Image and Video Library

    2001-12-05

    KENNEDY SPACE CENTER, Fla. -- Gathered for a second day after a scrub due to weather conditions, the STS-108 crew again enjoy a pre-launch snack featuring a cake with the mission patch. Seated left to right are Mission Specialists Daniel M. Tani and Linda A. Godwin, Pilot Mark E. Kelly and Commander Dominic L. Gorie; and the Expedition 4 crew: Commander Yuri Onufrienko and astronauts Carl E. Walz and Daniel W. Bursch. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition 3 and Expedition 4 crews; bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello; and the crew's completion of robotics tasks and a spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Launch is scheduled for 5:19 p.m. EST (22:19 GMT) Dec. 5, 2001, from Launch Pad 39B.

  11. KSC01pd1882

    NASA Image and Video Library

    2001-12-19

    KENNEDY SPACE CENTER, FLA. - STS-107 Payload Specialist Ilan Ramon, from Israel, pauses during an experiment at SPACEHAB, Cape Canaveral, Fla., to talk with Mission Specialist Laurel Clark. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002.

  12. KSC01pd1884

    NASA Image and Video Library

    2001-12-19

    KENNEDY SPACE CENTER, FLA. - At SPACEHAB, Cape Canaveral, Fla., members of the STS-107 crew familiarize themselves with experiments and equipment for the mission. Pointing at a piece of equipment (center) is Mission Specialist Laurel Clark . At right is Mission Specialist Kalpana Chawla. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  13. KSC01pd1883

    NASA Image and Video Library

    2001-12-19

    KENNEDY SPACE CENTER, FLA. - - STS-107 Payload Specialist Ilan Ramon, from Israel, works on an experiment at SPACEHAB, Cape Canaveral, Fla. With him is Mission Specialist Laurel Clark. STS-107 is a research mission. The primary payload is the first flight of the SHI Research Double Module (SHI/RDM). The experiments range from material sciences to life sciences (many rats). Also part of the payload is the Fast Reaction Experiments Enabling Science, Technology, Applications and Research (FREESTAR) that incorporates eight high priority secondary attached shuttle experiments: Mediterranean Israeli Dust Experiment (MEIDEX), Shuttle Ozone Limb Sounding Experiment (SOLSE-2), Student Tracked Atmospheric Research Satellite for Heuristic International Networking Experiment (STARSHINE), Critical Viscosity of Xenon-2 (CVX-2), Solar Constant Experiment-3 (SOLOCON-3), Prototype Synchrotron Radiation Detector (PSRD), Low Power Transceiver (LPT), and Collisions Into Dust Experiment -2 (COLLIDE-2). STS-107 is scheduled to launch in July 2002

  14. Ambulatory surgery centers best practices for the 90s.

    PubMed

    Hoover, J A

    1994-05-01

    Outpatient surgery will be the driving force in the continued growth of ambulatory care in the 1990s. Providing efficient, high-quality ambulatory surgical services should therefore be a priority among healthcare providers. Arthur Andersen conducted a survey to discover best practices in ambulatory surgical service. General success characteristics of best performers were business-focused relationships with physicians, the use of clinical protocols, patient convenience, cost management, strong leadership, teamwork, streamlined processes and efficient design. Other important factors included scheduling to maximize OR room use; achieving surgical efficiencies through reduced case pack assembly errors and equipment availability; a focus on cost capture rather than charge capture; sound materiel management practices, such as standardization and vendor teaming; and the appropriate use of automated systems. It is important to evaluate whether the best practices are applicable to your environment and what specific changes to your current processes would be necessary to adopt them.

  15. KSC01pd1711

    NASA Image and Video Library

    2001-11-25

    KENNEDY SPACE CENTER, Fla. -- On the parking apron at KSC's Shuttle Landing Facility, the STS-108 and Expedition 4 crews pause after their arrival to greet the media. Standing, left to right, are Mission Specialists Linda A. Godwin and Daniel M. Tani, Pilot Mark E. Kelly, and Commander Dominic L. Gorie; Expedition 4 Commander Yuri Onufrienko and crew members Carl E. Walz and Daniel W. Bursch. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews, bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello, and completion of spacewalk and robotics tasks. Tani and Godwin will take part in the spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Liftoff is scheduled for 7:41 p.m. EST

  16. Technician checks the mirrors of the Starshine-2 experiment

    NASA Technical Reports Server (NTRS)

    2001-01-01

    KSC-01PD-1715 -- KENNEDY SPACE CENTER, Fla. -- A technician checks the mirrors on the Starshine-2 experiment inside a canister in the payload bay of Space Shuttle Endeavour. The deployable experiment is being carried on mission STS-108. Starshine-2's 800 aluminum mirrors were polished by more than 25,000 students from 26 countries. Top priorities for the STS-108 (UF-1) mission of Endeavour are rotation of the International Space Station Expedition Three and Expedition Four crews, bringing water, equipment and supplies to the station in the Multi-Purpose Logistics Module Raffaello, and completion of robotics tasks and a spacewalk to install thermal blankets over two pieces of equipment at the bases of the Space Station's solar wings. Liftoff of Endeavour on mission STS-108 is scheduled for 7:41 p.m. EST.

  17. Wind loading analysis and strategy for deflection reduction on HET wide field upgrade

    NASA Astrophysics Data System (ADS)

    South, Brian J.; Soukup, Ian M.; Worthington, Michael S.; Zierer, Joseph J.; Booth, John A.; Good, John M.

    2010-07-01

    Wind loading can be a detrimental source of vibration and deflection for any large terrestrial optical telescope. The Hobby-Eberly Telescope* (HET) in the Davis Mountains of West Texas is undergoing a Wide Field Upgrade (WFU) in support of the Dark Energy Experiment (HETDEX) that will greatly increase the size of the instrumentation subjected to operating wind speeds of up to 20.1 m/s (45 mph). A non-trivial consideration for this telescope (or others) is to quantify the wind loads and resulting deflections of telescope structures induced under normal operating conditions so that appropriate design changes can be made. A quasi-static computational fluid dynamics (CFD) model was generated using wind speeds collected on-site as inputs to characterize dynamic wind forces on telescope structures under various conditions. The CFD model was refined until predicted wind speed and direction inside the dome agreed with experimental data. The dynamic wind forces were then used in static loading analysis to determine maximum deflections under typical operating conditions. This approach also allows for exploration of operating parameters without impact to the observation schedule of the telescope. With optimum combinations of parameters (i.e. dome orientation, tracker position, and louver deployment), deflections due to current wind conditions can be significantly reduced. Furthermore, the upper limit for operating wind speed could be increased, provided these parameters are monitored closely. This translates into increased image quality and observing time.
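
    For a sense of the loads involved (a back-of-the-envelope illustration, not the paper's CFD model), the quasi-static wind load on a structure scales with the dynamic pressure 0.5*rho*v^2; the sketch below evaluates a drag-type force at the 20.1 m/s operating limit quoted above. The drag coefficient, reference area, and air density are hypothetical placeholders.

```python
# Back-of-the-envelope quasi-static wind load (illustrative only; not the HET CFD model).
# Drag coefficient, reference area, and air density are hypothetical placeholders.

def wind_drag_force(wind_speed_mps, drag_coefficient, reference_area_m2, air_density_kgm3):
    """Quasi-static drag force in newtons: F = 0.5 * rho * Cd * A * v**2."""
    dynamic_pressure = 0.5 * air_density_kgm3 * wind_speed_mps ** 2
    return dynamic_pressure * drag_coefficient * reference_area_m2

# 20.1 m/s (45 mph) is the upper operating wind speed quoted in the abstract;
# ~1.0 kg/m^3 is a rough air density for a high-elevation mountain site.
force_n = wind_drag_force(20.1, drag_coefficient=1.2, reference_area_m2=10.0, air_density_kgm3=1.0)
print(f"approximate drag force: {force_n:.0f} N")
```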

  18. Transforming the sensing and numerical prediction of high-impact local weather through dynamic adaptation.

    PubMed

    Droegemeier, Kelvin K

    2009-03-13

    Mesoscale weather, such as convective systems, intense local rainfall resulting in flash floods and lake effect snows, frequently is characterized by unpredictable rapid onset and evolution, heterogeneity and spatial and temporal intermittency. Ironically, most of the technologies used to observe the atmosphere, predict its evolution and compute, transmit or store information about it, operate in a static pre-scheduled framework that is fundamentally inconsistent with, and does not accommodate, the dynamic behaviour of mesoscale weather. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This paper describes a new cyberinfrastructure framework, in which remote and in situ atmospheric sensors, data acquisition and storage systems, assimilation and prediction codes, data mining and visualization engines, and the information technology frameworks within which they operate, can change configuration automatically, in response to evolving weather. Such dynamic adaptation is designed to allow system components to achieve greater overall effectiveness, relative to their static counterparts, for any given situation. The associated service-oriented architecture, known as Linked Environments for Atmospheric Discovery (LEAD), makes advanced meteorological and cyber tools as easy to use as ordering a book on the web. LEAD has been applied in a variety of settings, including experimental forecasting by the US National Weather Service, and allows users to focus much more attention on the problem at hand and less on the nuances of data formats, communication protocols and job execution environments.

  19. Minimal complexity control law synthesis

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.

    1989-01-01

    A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
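
    For orientation only, the static output feedback problem that the constrained-structure dynamic problems are reduced to can be stated in a standard LQ form such as the following; this is a generic textbook formulation, not necessarily the authors' exact problem statement (w denotes a white-noise disturbance):

```latex
% Generic optimal static output feedback problem (standard LQ form, for orientation only)
\begin{align*}
  \dot{x} &= A x + B u + D w, \qquad y = C x, \qquad u = K y,\\
  \min_{K}\; J(K) &= \lim_{t \to \infty} \mathbb{E}\!\left[ x^{\mathsf{T}} R_1 x + u^{\mathsf{T}} R_2 u \right].
\end{align*}
```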

  20. Tailoring health programming to clergy: findings from a study of United Methodist clergy in North Carolina.

    PubMed

    Proeschold-Bell, Rae Jean; LeGrand, Sara; Wallace, Amanda; James, John; Moore, H Edgar; Swift, Robin; Toole, David

    2012-01-01

    Research indicating high rates of chronic disease among some clergy groups highlights the need for health programming for clergy. Like any group united by similar beliefs and norms, clergy may find culturally tailored health programming more accessible and effective. There is an absence of research on what aspects clergy find important for clergy health programs. We conducted 11 focus groups with United Methodist Church pastors and district superintendents. Participants answered open-ended questions about clergy health program desires and ranked program priorities from a list of 13 possible programs. Pastors prioritized health club memberships, retreats, personal trainers, mental health counseling, and spiritual direction. District superintendents prioritized for pastors: physical exams, peer support groups, health coaching, retreats, health club memberships, and mental health counseling. District superintendents prioritized for themselves: physical exams, personal trainers, health coaching, retreats, and nutritionists. Additionally, through qualitative analysis, nine themes emerged concerning health and health programs: (a) clergy defined health holistically, and they expressed a desire for (b) schedule flexibility, (c) accessibility in rural areas, (d) low cost programs, (e) institutional support, (f) education on physical health, and (g) the opportunity to work on their health in connection with others. They also expressed concern about (h) mental health stigma and spoke about (i) the tension between prioritizing healthy behaviors and fulfilling vocational responsibilities. The design of future clergy health programming should consider these themes and the priorities clergy identified for health programming.

  1. A rule-based shell to hierarchically organize HST observations

    NASA Technical Reports Server (NTRS)

    Bose, Ashim; Gerb, Andrew

    1995-01-01

    An observing program on the Hubble Space Telescope (HST) is described in terms of exposures that are obtained by one or more of the instruments onboard the HST. These exposures are organized into a hierarchy of structures for purposes of efficient scheduling of observations. The process by which exposures get organized into the higher-level structures is called merging. This process relies on rules to determine which observations can be 'merged' into the same higher level structure, and which cannot. The TRANSformation expert system converts proposals for astronomical observations with HST into detailed observing plans. The conversion process includes the task of merging. Within TRANS, we have implemented a declarative shell to facilitate merging. This shell offers the following features: (1) an easy way of specifying rules on when to merge and when not to merge, (2) a straightforward priority mechanism for resolving conflicts among rules, (3) an explanation facility for recording the merging history, (4) a report generating mechanism to help users understand the reasons for merging, and (5) a self-documenting mechanism that documents all the merging rules that have been defined in the shell, ordered by priority. The merging shell is implemented using an object-oriented paradigm in CLOS. It has been a part of operational TRANS (after extensive testing) since July 1993. It has fulfilled all performance expectations, and has considerably simplified the process of implementing new or changed requirements for merging. The users are pleased with its report-generating and self-documenting features.
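
    The abstract does not reproduce the shell's CLOS internals, but a minimal sketch of the behaviour it describes (declarative merge/no-merge rules, a priority mechanism for resolving conflicts, and a recorded explanation) might look like the following. All rule names and exposure fields are hypothetical.

```python
# Minimal sketch of a priority-resolved merge-rule shell (hypothetical rule names and
# exposure fields; not the TRANS/CLOS implementation).
from dataclasses import dataclass
from typing import Callable

@dataclass
class MergeRule:
    name: str
    priority: int                         # higher value wins conflicts between applicable rules
    applies: Callable[[dict, dict], bool]
    allow_merge: bool                     # the rule's verdict when it applies

RULES = [
    MergeRule("same-instrument", 10, lambda a, b: a["instrument"] == b["instrument"], True),
    MergeRule("different-target", 20, lambda a, b: a["target"] != b["target"], False),
]

def can_merge(exp_a, exp_b):
    """Return (verdict, explanation); the highest-priority applicable rule decides."""
    applicable = [r for r in RULES if r.applies(exp_a, exp_b)]
    if not applicable:
        return False, "no rule applied; defaulting to no merge"
    winner = max(applicable, key=lambda r: r.priority)
    return winner.allow_merge, f"decided by rule '{winner.name}' (priority {winner.priority})"

print(can_merge({"instrument": "WFPC2", "target": "M31"},
                {"instrument": "WFPC2", "target": "M33"}))
```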

  2. The role of uncertainty and reward on eye movements in a virtual driving task

    PubMed Central

    Sullivan, Brian T.; Johnson, Leif; Rothkopf, Constantin A.; Ballard, Dana; Hayhoe, Mary

    2012-01-01

    Eye movements during natural tasks are well coordinated with ongoing task demands and many variables could influence gaze strategies. Sprague and Ballard (2003) proposed a gaze-scheduling model that uses a utility-weighted uncertainty metric to prioritize fixations on task-relevant objects and predicted that human gaze should be influenced by both reward structure and task-relevant uncertainties. To test this conjecture, we tracked the eye movements of participants in a simulated driving task where uncertainty and implicit reward (via task priority) were varied. Participants were instructed to simultaneously perform a Follow Task where they followed a lead car at a specific distance and a Speed Task where they drove at an exact speed. We varied implicit reward by instructing the participants to emphasize one task over the other and varied uncertainty in the Speed Task with the presence or absence of uniform noise added to the car's velocity. Subjects' gaze data were classified for the image content near fixation and segmented into looks. Gaze measures, including look proportion, duration and interlook interval, showed that drivers more closely monitor the speedometer if it had a high level of uncertainty, but only if it was also associated with high task priority or implicit reward. The interaction observed appears to be an example of a simple mechanism whereby the reduction of visual uncertainty is gated by behavioral relevance. This lends qualitative support for the primary variables controlling gaze allocation proposed in the Sprague and Ballard model. PMID:23262151
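
    A toy numerical sketch of the utility-weighted uncertainty idea (illustrative only, not the Sprague and Ballard implementation or the study's analysis code): each task module accumulates uncertainty while unattended, gaze goes to the module with the largest reward-weighted uncertainty, and a fixation is assumed to resolve that module's uncertainty.

```python
# Toy utility-weighted uncertainty gaze scheduler (illustrative only).
import random

modules = {
    "follow_distance": {"reward_weight": 1.0, "uncertainty": 0.0, "growth": 0.2},
    "speed":           {"reward_weight": 0.5, "uncertainty": 0.0, "growth": 0.4},
}

def next_fixation(modules):
    # Uncertainty grows for every module (the speed task grows faster here,
    # standing in for the added velocity noise).
    for m in modules.values():
        m["uncertainty"] += m["growth"] * random.random()
    # Fixate the module whose reward-weighted uncertainty is largest ...
    target = max(modules, key=lambda k: modules[k]["reward_weight"] * modules[k]["uncertainty"])
    # ... and assume the fixation resolves that module's uncertainty.
    modules[target]["uncertainty"] = 0.0
    return target

print("fixation sequence:", [next_fixation(modules) for _ in range(20)])
```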

  3. Class D Management Implementation Approach of the First Orbital Mission of the Earth Venture Series

    NASA Technical Reports Server (NTRS)

    Wells, James E.; Scherrer, John; Law, Richard; Bonniksen, Chris

    2013-01-01

    A key element of the National Research Council's Earth Science and Applications Decadal Survey called for the creation of the Venture Class line of low-cost research and application missions within NASA (National Aeronautics and Space Administration). One key component of the architecture chosen by NASA within the Earth Venture line is a series of self-contained stand-alone spaceflight science missions called "EV-Mission". The first mission chosen for this competitively selected, cost and schedule capped, Principal Investigator-led opportunity is the CYclone Global Navigation Satellite System (CYGNSS). As specified in the defining Announcement of Opportunity, the Principal Investigator is held responsible for successfully achieving the science objectives of the selected mission and the management approach that he/she chooses to obtain those results has a significant amount of freedom as long as it meets the intent of key NASA guidance like NPR 7120.5 and 7123. CYGNSS is classified under NPR 7120.5E guidance as a Category 3 (low priority, low cost) mission and carries a Class D risk classification (low priority, high risk) per NPR 8705.4. As defined in the NPR guidance, Class D risk classification allows for a relatively broad range of implementation strategies. The management approach that will be utilized on CYGNSS is a streamlined implementation that starts with a higher risk tolerance posture at NASA and that philosophy flows all the way down to the individual part level.

  4. Multi-period response management to contaminated water distribution networks: dynamic programming versus genetic algorithms

    NASA Astrophysics Data System (ADS)

    Bashi-Azghadi, Seyyed Nasser; Afshar, Abbas; Afshar, Mohammad Hadi

    2018-03-01

    Previous studies on consequence management assume that the selected response action including valve closure and/or hydrant opening remains unchanged during the entire management period. This study presents a new embedded simulation-optimization methodology for deriving time-varying operational response actions in which the network topology may change from one stage to another. Dynamic programming (DP) and genetic algorithm (GA) are used in order to minimize selected objective functions. Two networks of small and large sizes are used in order to illustrate the performance of the proposed modelling schemes if a time-dependent consequence management strategy is to be implemented. The results show that for a small number of decision variables even in large-scale networks, DP is superior in terms of accuracy and computer runtime. However, as the number of potential actions grows, DP loses its merit over the GA approach. This study clearly proves the priority of the proposed dynamic operation strategy over the commonly used static strategy.

  5. Development of an exercise intervention to improve cognition in people with mild to moderate dementia: Dementia And Physical Activity (DAPA) Trial, registration ISRCTN32612072.

    PubMed

    Brown, Deborah; Spanjers, Katie; Atherton, Nicky; Lowe, Janet; Stonehewer, Louisa; Bridle, Chris; Sheehan, Bart; Lamb, Sarah E

    2015-06-01

    More than 800000 people in the UK have dementia, and it is a government priority to improve dementia care. Drug treatment options are relatively limited. The Dementia And Physical Activity (DAPA) study is a randomised trial which targets cognition in people with dementia, using an exercise programme. There is evidence to suggest that both aerobic and resistance exercise may be useful in improving cognition. Hence the intervention comprises a supervised part of twice-weekly exercise classes of one hour duration for 4 months, including aerobic exercise at moderate intensity on static bicycles, and resistance (weight training) exercise using weight vests, weight belts and dumbbells. Thereafter participants progress to unsupervised, independent exercise. Aids to behaviour modification have been incorporated into the intervention. The DAPA intervention has been designed to maximise likelihood of effectiveness and cost-effectiveness, and for delivery in the UK National Health Service. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  6. Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models.

    PubMed

    Penny, Melissa A; Verity, Robert; Bever, Caitlin A; Sauboin, Christophe; Galactionova, Katya; Flasche, Stefan; White, Michael T; Wenger, Edward A; Van de Velde, Nicolas; Pemberton-Ross, Peter; Griffin, Jamie T; Smith, Thomas A; Eckhoff, Philip A; Muhib, Farzana; Jit, Mark; Ghani, Azra C

    2016-01-23

    The phase 3 trial of the RTS,S/AS01 malaria vaccine candidate showed modest efficacy of the vaccine against Plasmodium falciparum malaria, but was not powered to assess mortality endpoints. Impact projections and cost-effectiveness estimates for longer timeframes than the trial follow-up and across a range of settings are needed to inform policy recommendations. We aimed to assess the public health impact and cost-effectiveness of routine use of the RTS,S/AS01 vaccine in African settings. We compared four malaria transmission models and their predictions to assess vaccine cost-effectiveness and impact. We used trial data for follow-up of 32 months or longer to parameterise vaccine protection in the group aged 5-17 months. Estimates of cases, deaths, and disability-adjusted life-years (DALYs) averted were calculated over a 15 year time horizon for a range of levels of Plasmodium falciparum parasite prevalence in 2-10 year olds (PfPR2-10; range 3-65%). We considered two vaccine schedules: three doses at ages 6, 7·5, and 9 months (three-dose schedule, 90% coverage) and including a fourth dose at age 27 months (four-dose schedule, 72% coverage). We estimated cost-effectiveness in the presence of existing malaria interventions for vaccine prices of US$2-10 per dose. In regions with a PfPR2-10 of 10-65%, RTS,S/AS01 is predicted to avert a median of 93,940 (range 20,490-126,540) clinical cases and 394 (127-708) deaths for the three-dose schedule, or 116,480 (31,450-160,410) clinical cases and 484 (189-859) deaths for the four-dose schedule, per 100,000 fully vaccinated children. A positive impact is also predicted at a PfPR2-10 of 5-10%, but there is little impact at a prevalence of lower than 3%. At $5 per dose and a PfPR2-10 of 10-65%, we estimated a median incremental cost-effectiveness ratio compared with current interventions of $30 (range 18-211) per clinical case averted and $80 (44-279) per DALY averted for the three-dose schedule, and of $25 (16-222) and $87 (48-244), respectively, for the four-dose schedule. Higher ICERs were estimated at low PfPR2-10 levels. We predict a significant public health impact and high cost-effectiveness of the RTS,S/AS01 vaccine across a wide range of settings. Decisions about implementation will need to consider levels of malaria burden, the cost-effectiveness and coverage of other malaria interventions, health priorities, financing, and the capacity of the health system to deliver the vaccine. PATH Malaria Vaccine Initiative; Bill & Melinda Gates Foundation; Global Good Fund; Medical Research Council; UK Department for International Development; GAVI, the Vaccine Alliance; WHO. Copyright © 2016 Penny et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd.. All rights reserved.
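
    The cost-effectiveness figures above are incremental cost-effectiveness ratios (ICERs): the extra cost of adding the vaccine divided by the extra health gain relative to current interventions. A minimal worked sketch with hypothetical inputs (not outputs of the four models) follows.

```python
# Minimal ICER calculation with hypothetical inputs (illustrative only; these numbers
# are not outputs of the four transmission models).
def icer(incremental_cost, incremental_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra health gain."""
    return incremental_cost / incremental_effect

cohort = 100_000                  # fully vaccinated children
vaccine_cost = cohort * 3 * 5.0   # three-dose schedule at $5 per dose (delivery costs omitted)
comparator_cost = 0.0             # incremental cost of existing interventions alone
dalys_averted = 10_000            # hypothetical incremental DALYs averted over 15 years

print(f"ICER: ${icer(vaccine_cost - comparator_cost, dalys_averted):.0f} per DALY averted")
```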

  7. Public health impact and cost-effectiveness of the RTS,S/AS01 malaria vaccine: a systematic comparison of predictions from four mathematical models

    PubMed Central

    Penny, Melissa A; Verity, Robert; Bever, Caitlin A; Sauboin, Christophe; Galactionova, Katya; Flasche, Stefan; White, Michael T; Wenger, Edward A; Van de Velde, Nicolas; Pemberton-Ross, Peter; Griffin, Jamie T; Smith, Thomas A; Eckhoff, Philip A; Muhib, Farzana; Jit, Mark; Ghani, Azra C

    2016-01-01

    Summary Background The phase 3 trial of the RTS,S/AS01 malaria vaccine candidate showed modest efficacy of the vaccine against Plasmodium falciparum malaria, but was not powered to assess mortality endpoints. Impact projections and cost-effectiveness estimates for longer timeframes than the trial follow-up and across a range of settings are needed to inform policy recommendations. We aimed to assess the public health impact and cost-effectiveness of routine use of the RTS,S/AS01 vaccine in African settings. Methods We compared four malaria transmission models and their predictions to assess vaccine cost-effectiveness and impact. We used trial data for follow-up of 32 months or longer to parameterise vaccine protection in the group aged 5–17 months. Estimates of cases, deaths, and disability-adjusted life-years (DALYs) averted were calculated over a 15 year time horizon for a range of levels of Plasmodium falciparum parasite prevalence in 2–10 year olds (PfPR2–10; range 3–65%). We considered two vaccine schedules: three doses at ages 6, 7·5, and 9 months (three-dose schedule, 90% coverage) and including a fourth dose at age 27 months (four-dose schedule, 72% coverage). We estimated cost-effectiveness in the presence of existing malaria interventions for vaccine prices of US$2–10 per dose. Findings In regions with a PfPR2–10 of 10–65%, RTS,S/AS01 is predicted to avert a median of 93 940 (range 20 490–126 540) clinical cases and 394 (127–708) deaths for the three-dose schedule, or 116 480 (31 450–160 410) clinical cases and 484 (189–859) deaths for the four-dose schedule, per 100 000 fully vaccinated children. A positive impact is also predicted at a PfPR2–10 of 5–10%, but there is little impact at a prevalence of lower than 3%. At $5 per dose and a PfPR2–10 of 10–65%, we estimated a median incremental cost-effectiveness ratio compared with current interventions of $30 (range 18–211) per clinical case averted and $80 (44–279) per DALY averted for the three-dose schedule, and of $25 (16–222) and $87 (48–244), respectively, for the four-dose schedule. Higher ICERs were estimated at low PfPR2–10 levels. Interpretation We predict a significant public health impact and high cost-effectiveness of the RTS,S/AS01 vaccine across a wide range of settings. Decisions about implementation will need to consider levels of malaria burden, the cost-effectiveness and coverage of other malaria interventions, health priorities, financing, and the capacity of the health system to deliver the vaccine. Funding PATH Malaria Vaccine Initiative; Bill & Melinda Gates Foundation; Global Good Fund; Medical Research Council; UK Department for International Development; GAVI, the Vaccine Alliance; WHO. PMID:26549466

  8. Ex-ante assessment of different vaccination-based control schedules against the peste des petits ruminants virus in sub-Saharan Africa

    PubMed Central

    Lancelot, Renaud; Domenech, Joseph; Lesnoff, Matthieu

    2018-01-01

    Background Peste des petits ruminants (PPR) is a highly contagious and widespread viral infection of small ruminants (goats and sheep), causing heavy economic losses in many developing countries. Therefore, its progressive control and global eradication by 2030 was defined as a priority by international organizations addressing animal health. The control phase of the global strategy is based on mass vaccination of small ruminant populations in endemic regions or countries. It is estimated that a 70% post-vaccination immunity rate (PVIR) is needed in a given epidemiological unit to prevent PPR virus spread. However, implementing mass vaccination is difficult and costly in smallholder farming systems with scattered livestock and limited facilities. Regarding this, controlling PPR is a special challenge in sub-Saharan Africa. In this study, we focused on this region to assess the effect of several variables of PVIR in two contrasted smallholder farming systems. Methods Using a seasonal matrix population model of PVIR, we estimated its decay in goats reared in sub-humid areas, and sheep reared in semi-arid areas, over a 4-year vaccination program. Assuming immunologically naive and PPR-free epidemiological unit, we assessed the ability of different vaccination scenarios to reach the 70% PVIR throughout the program. The tested scenarios differed in i) their overall schedule, ii) their delivery month and iii) their vaccination coverage. Results In sheep reared in semi-arid areas, the vaccination month did affect the PVIR decay though it did not in goats in humid regions. In both cases, our study highlighted i) the importance of targeting the whole eligible population at least during the two first years of the vaccination program and ii) the importance of reaching a vaccination coverage as high as 80% of this population. This study confirmed the relevance of the vaccination schedules recommended by international organizations. PMID:29351277
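
    As a very rough illustration of why post-vaccination immunity decays between campaigns even with a fully effective vaccine, the toy turnover model below dilutes herd immunity as unvaccinated newborns and replacements enter while vaccinated animals exit. The monthly rates are hypothetical, and this is not the seasonal matrix population model used in the study.

```python
# Toy herd-immunity dilution model (hypothetical monthly rates; not the seasonal
# matrix population model used in the study).
def pvir_trajectory(initial_coverage, birth_rate, exit_rate, months):
    """Track the post-vaccination immunity rate as unvaccinated animals enter the herd."""
    immune, total = initial_coverage, 1.0
    history = []
    for _ in range(months):
        immune *= (1 - exit_rate)                              # vaccinated animals leave the herd
        total = total * (1 - exit_rate) + total * birth_rate   # all entrants are unvaccinated
        history.append(immune / total)
    return history

trajectory = pvir_trajectory(initial_coverage=0.80, birth_rate=0.03, exit_rate=0.03, months=12)
print([round(p, 2) for p in trajectory])
print("months at or above the 70% threshold:", sum(p >= 0.70 for p in trajectory))
```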

  9. The Falcon Telescope Network

    NASA Astrophysics Data System (ADS)

    Chun, F.; Tippets, R.; Dearborn, M.; Gresham, K.; Freckleton, R.; Douglas, M.

    2014-09-01

    The Falcon Telescope Network (FTN) is a global network of small aperture telescopes developed by the Center for Space Situational Awareness Research in the Department of Physics at the United States Air Force Academy (USAFA). Consisting of commercially available equipment, the FTN is a collaborative effort between USAFA and other educational institutions ranging from two- and four-year colleges to major research universities. USAFA provides the equipment (e.g. telescope, mount, camera, filter wheel, dome, weather station, computers and storage devices) while the educational partners provide the building and infrastructure to support an observatory. The user base includes USAFA along with K-12 and higher education faculty and students. Since the FTN has a general use purpose, objects of interest include satellites, astronomical research, and STEM support images. The raw imagery, all in the public domain, will be accessible to FTN partners and will be archived at USAFA in the Cadet Space Operations Center. FTN users will be able to submit observational requests via a web interface. The requests will then be prioritized based on the type of user, the object of interest, and a user-defined priority. A network wide schedule will be developed every 24 hours and each FTN site will autonomously execute its portion of the schedule. After an observational request is completed, the FTN user will receive notification of collection and a link to the data. The Falcon Telescope Network is an ambitious endeavor, but demonstrates the cooperation that can be achieved by multiple educational institutions.
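
    The abstract describes ranking observation requests by user type, object of interest, and a user-defined priority before a network-wide 24-hour schedule is built. A minimal sketch of that ranking step is shown below; the weights and categories are invented for illustration and are not USAFA's actual scheme.

```python
# Hypothetical request-ranking sketch for an FTN-style scheduler (weights and
# categories invented for illustration; not USAFA's actual scheme).
USER_WEIGHT = {"usafa": 3, "university": 2, "k12": 1}
OBJECT_WEIGHT = {"satellite": 3, "astronomy": 2, "stem_outreach": 1}

def request_score(request):
    """Combine user type, object class, and the user-defined priority into one score."""
    return (USER_WEIGHT[request["user_type"]] * 100
            + OBJECT_WEIGHT[request["object_class"]] * 10
            + request["user_priority"])        # e.g. 1 (low) to 9 (high)

requests = [
    {"id": "r1", "user_type": "k12", "object_class": "astronomy", "user_priority": 9},
    {"id": "r2", "user_type": "usafa", "object_class": "satellite", "user_priority": 2},
]
daily_queue = sorted(requests, key=request_score, reverse=True)
print([r["id"] for r in daily_queue])          # highest-scoring requests are scheduled first
```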

  10. Willpower

    NASA Technical Reports Server (NTRS)

    Little, Terry

    2002-01-01

    I am struck by how often failure is blamed on a lack of discipline. You often hear losing coaches cite this as the reason for a big loss. I don't recall the last time I heard one say that his team lost a game because of his players' lack of skill. I think a breakdown in discipline is also one of the key reasons why program and project management teams fail to meet expectations. The first program I ever managed had a clear set of priorities. I understood the mandate, and so did everyone else on the team. We set an ambitious schedule and started to work fervently. Not too long into the program the customer wanted to know what performance he was going to get. I replied by categorizing the performance parameters into three bins: 1. Performance you will get. 2. Performance you may get. 3. Performance that there's no way you will get. Did that cause an uproar. The customer demanded everything in the second bin be moved to the first, and most everything in the third bin moved to the second. My immediate impulse was to agree, but I managed to overcome that. In my heart, I knew that we would never meet the already ambitious schedule if we had to deliver more performance. No was my answer. The program turned out to be a huge success, but the result would have been largely different had senior management or I failed to maintain discipline.

  11. Assessment of patients' level of satisfaction with cleft treatment using the Cleft Evaluation Profile.

    PubMed

    Noor, Siti Noor Fazliah Mohd; Musa, Sabri

    2007-05-01

    Determination of the psychosocial status and assessment of the level of satisfaction in Malaysian cleft palate patients and their parents. Cross-sectional study. Sixty cleft lip and palate patients (12 to 17 years of age) from Hospital Universiti Sains Malaysia and their parents were selected. The questionnaires used were the Child Interview Schedule, the Parents Interview Schedule, and the Cleft Evaluation Profile (CEP), administered via individual interviews. Patients were teased because of their clefts and felt their self-confidence was affected by the cleft condition. They were frequently teased about cleft-related features such as speech, teeth, and lip appearance. Parents also reported that their children were being teased because of their clefts and that their children's self-confidence was affected by the clefts. Both showed a significant level of satisfaction with the treatment provided by the cleft team. There was no significant difference between the responses of the patients and their parents. The features that were found to be most important for the patients and their parents, in decreasing order of priority, were teeth, nose, lips, and speech. Cleft lip and/or palate patients were teased because of their clefts, and it affected their self-confidence. The Cleft Evaluation Profile is a reliable and useful tool to assess patients' level of satisfaction with treatment received for cleft lip and/or palate and can identify the types of cleft-related features that are most important for the patients.

  12. Reducing Risks to Women Linked to Shift Work, Long Work Hours, and Related Workplace Sleep and Fatigue Issues.

    PubMed

    Caruso, Claire C

    2015-10-01

    In the United States, an estimated 12% to 28% of working women are on shift work schedules, and 12% work more than 48 hours per week. Shift work and long work hours are associated with many health and safety risks, including obesity, injuries, and negative reproductive outcomes. Over time, the worker is at risk for developing a wide range of chronic diseases. These work schedules can also strain personal relationships, owing to fatigue and poor mood from sleep deprivation and reduced quality time to spend with family and friends. Worker errors from fatigue can lead to reduced quality of goods and services, negatively impacting the employer. In addition, mistakes by fatigued workers can have far-reaching negative effects on the community, ranging from medical care errors to motor vehicle crashes and industrial disasters that endanger others. To reduce the many risks that are linked to these demanding work hours, the National Institute for Occupational Safety and Health (NIOSH) conducts research, develops guidance and authoritative recommendations, and translates and disseminates scientific information to protect workers, their families, employers, and the community. The key message to reduce these risks is making sleep a priority in the employer's systems for organizing work and in the worker's personal life. The NIOSH website has freely available online training programs with suggestions for workers and their managers to help them better cope with this workplace hazard.

  13. Residency Training: Quality improvement projects in neurology residency and fellowship: applying DMAIC methodology.

    PubMed

    Kassardjian, Charles D; Williamson, Michelle L; van Buskirk, Dorothy J; Ernste, Floranne C; Hunderfund, Andrea N Leep

    2015-07-14

    Teaching quality improvement (QI) is a priority for residency and fellowship training programs. However, many medical trainees have had little exposure to QI methods. The purpose of this study is to review a rigorous and simple QI methodology (define, measure, analyze, improve, and control [DMAIC]) and demonstrate its use in a fellow-driven QI project aimed at reducing the number of delayed and canceled muscle biopsies at our institution. DMAIC was utilized. The project aim was to reduce the number of delayed muscle biopsies to 10% or less within 24 months. Baseline data were collected for 12 months. These data were analyzed to identify root causes for muscle biopsy delays and cancellations. Interventions were developed to address the most common root causes. Performance was then remeasured for 9 months. Baseline data were collected on 97 of 120 muscle biopsies during 2013. Twenty biopsies (20.6%) were delayed. The most common causes were scheduling too many tests on the same day and lack of fasting. Interventions aimed at patient education and biopsy scheduling were implemented. The effect was to reduce the number of delayed biopsies to 6.6% (6/91) over the next 9 months. Familiarity with QI methodologies such as DMAIC is helpful to ensure valid results and conclusions. Utilizing DMAIC, we were able to implement simple changes and significantly reduce the number of delayed muscle biopsies at our institution. © 2015 American Academy of Neurology.

  14. Testing SLURM open source batch system for a Tier1/Tier2 HEP computing facility

    NASA Astrophysics Data System (ADS)

    Donvito, Giacinto; Salomoni, Davide; Italiano, Alessandro

    2014-06-01

    In this work the testing activities that were carried out to verify whether the SLURM batch system could be used as the production batch system of a typical Tier1/Tier2 HEP computing center are shown. SLURM (Simple Linux Utility for Resource Management) is an Open Source batch system developed mainly by the Lawrence Livermore National Laboratory, SchedMD, Linux NetworX, Hewlett-Packard, and Groupe Bull. Testing was focused both on verifying the functionalities of the batch system and the performance that SLURM is able to offer. We first describe our initial set of requirements. Functionally, we started configuring SLURM so that it replicates all the scheduling policies already used in production in the computing centers involved in the test, i.e. INFN-Bari and the INFN-Tier1 at CNAF, Bologna. Currently, the INFN-Tier1 is using IBM LSF (Load Sharing Facility), while INFN-Bari, an LHC Tier2 for both CMS and Alice, is using Torque as resource manager and MAUI as scheduler. We show how we configured SLURM in order to enable several scheduling functionalities such as Hierarchical FairShare, Quality of Service, user-based and group-based priority, limits on the number of jobs per user/group/queue, job age scheduling, job size scheduling, and scheduling of consumable resources. We then show how different job typologies, like serial, MPI, multi-thread, whole-node and interactive jobs, can be managed. Tests on the use of ACLs on queues or in general other resources are then described. A peculiar SLURM feature we also verified is triggers on events, useful for configuring specific actions on each possible event in the batch system. We also tested highly available configurations for the master node. This feature is of paramount importance since a mandatory requirement in our scenarios is to have a working farm cluster even in case of hardware failure of the server(s) hosting the batch system. Among our requirements there is also the possibility to deal with pre-execution and post-execution scripts, and controlled handling of the failure of such scripts. This feature is heavily used, for example, at the INFN-Tier1 in order to check the health status of a worker node before execution of each job. Pre- and post-execution scripts are also important to let WNoDeS, the IaaS Cloud solution developed at INFN, use SLURM as its resource manager. WNoDeS has already been supporting the LSF and Torque batch systems for some time; in this work we show the work done so that WNoDeS supports SLURM as well. Finally, we show several performance tests that we carried out to verify SLURM scalability and reliability, detailing scalability tests both in terms of managed nodes and of queued jobs.
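
    Several of the scheduling features exercised in this test (fairshare, quality of service, job age and job size scheduling) are combined by SLURM into a single job priority as a weighted sum of normalized factors. The sketch below mimics that multifactor idea with made-up weights; it is not SLURM's plugin code or configuration.

```python
# Toy multifactor job-priority calculation in the spirit of SLURM's priority/multifactor
# plugin (weights and factor values are made up for illustration).
WEIGHTS = {"age": 1000, "fairshare": 5000, "job_size": 500, "qos": 2000}

def job_priority(factors):
    """Weighted sum of normalized (0.0-1.0) priority factors."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

pending = {
    "serial_job": {"age": 0.9, "fairshare": 0.2, "job_size": 0.1, "qos": 0.5},
    "mpi_job":    {"age": 0.1, "fairshare": 0.8, "job_size": 0.9, "qos": 0.5},
}
for name in sorted(pending, key=lambda n: job_priority(pending[n]), reverse=True):
    print(f"{name}: priority {job_priority(pending[name]):.0f}")
```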

  15. Data-Driven Residential Load Modeling and Validation in GridLAB-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gotseff, Peter; Lundstrom, Blake

    Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and, hence, impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
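
    The four validation metrics listed above are straightforward to compute from a power time series; the sketch below evaluates them on hypothetical 1-second data (it is not the GridLAB-D or NREL validation code).

```python
# Sketch of the four load-validation metrics on hypothetical 1-second data
# (illustrative only; not the GridLAB-D/NREL validation code).
import numpy as np
from scipy.signal import welch

SECONDS_PER_DAY = 86_400
rng = np.random.default_rng(0)
power_w = 1500 + 400 * rng.standard_normal(7 * SECONDS_PER_DAY)   # one week of fake 1 s load data

daily = power_w.reshape(-1, SECONDS_PER_DAY)

# 1) daily average energy (kWh/day) and 2) daily average and standard deviation of power (W)
daily_energy_kwh = daily.mean(axis=1) * 24 / 1000
daily_mean_w, daily_std_w = daily.mean(axis=1), daily.std(axis=1)

# 3) power spectral density of the load and 4) average daily load shape
freqs, psd = welch(power_w, fs=1.0, nperseg=4096)
load_shape = daily.mean(axis=0)

print(f"mean daily energy: {daily_energy_kwh.mean():.1f} kWh")
print(f"daily power: {daily_mean_w.mean():.0f} W +/- {daily_std_w.mean():.0f} W")
print(f"PSD bins: {len(freqs)}, load-shape samples: {len(load_shape)}")
```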

  16. Assessment of Silicon Carbide Composites for Advanced Salt-Cooled Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katoh, Yutai; Wilson, Dane F; Forsberg, Charles W

    2007-09-01

    The Advanced High-Temperature Reactor (AHTR) is a new reactor concept that uses a liquid fluoride salt coolant and a solid high-temperature fuel. Several alternative fuel types are being considered for this reactor. One set of fuel options is the use of pin-type fuel assemblies with silicon carbide (SiC) cladding. This report provides (1) an initial viability assessment of using SiC as fuel cladding and other in-core components of the AHTR, (2) the current status of SiC technology, and (3) recommendations on the path forward. Based on the analysis of requirements, continuous SiC fiber-reinforced, chemically vapor-infiltrated SiC matrix (CVI SiC/SiC) composites are recommended as the primary option for further study on AHTR fuel cladding among various industrially available forms of SiC. Critical feasibility issues for the SiC-based AHTR fuel cladding are identified to be (1) corrosion of SiC in the candidate liquid salts, (2) high dose neutron radiation effects, (3) static fatigue failure of SiC/SiC, (4) long-term radiation effects including irradiation creep and radiation-enhanced static fatigue, and (5) fabrication technology of hermetic wall and sealing end caps. Considering the results of the issues analysis and the prospects of ongoing SiC research and development in other nuclear programs, recommendations on the path forward are provided in order of priority as follows: (1) thermodynamic analysis and experimental examination of SiC corrosion in the candidate liquid salts, (2) assessment of long-term mechanical integrity issues using prototypical component sections, and (3) assessment of high dose radiation effects relevant to the anticipated operating condition.

  17. Parental perceptions of barriers to physical activity in children with developmental disabilities living in Trinidad and Tobago.

    PubMed

    Njelesani, Janet; Leckie, Karen; Drummond, Jennifer; Cameron, Deb

    2015-01-01

    Parents have a strong influence on their child's engagement in physical activities, especially for children with developmental disabilities, as these children are less likely to initiate physical activity. Knowledge is limited regarding parents' perceptions of this phenomenon in low- and middle-income countries (LMICs); yet many rehabilitation providers work with children with developmental disabilities and their parents in these contexts. The aim of this study was to explore the barriers perceived by parents of children with developmental disabilities to their children's engagement in physical activity. An occupational perspective was used to explore how parents speak about barriers to their child's engagement in physical activity. Interviews were conducted with nine parents in Port-of-Spain, Trinidad and Tobago. Parents' perceived barriers were categorized into four themes: family priorities, not an option in our environment, need to match the activity to the child's ability, and need for specialized supports. Findings provide opportunities for future rehabilitation and community programming in LMICs. Implications for Rehabilitation: Children living with a developmental disability may engage more in solitary and sedentary pursuits as a result of parents choosing activities that do not present extensive social and physical demands for their child. Therapists can play an important role in providing knowledge to parents of appropriate physical activity and the benefits of physical activity for children with developmental disabilities in order to promote children's participation. In environments where there is limited social support for families, therapists need to consider and be particularly supportive of parental priorities and schedules.

  18. Emerging Propulsion Technologies

    NASA Technical Reports Server (NTRS)

    Keys, Andrew S.

    2006-01-01

    The Emerging Propulsion Technologies (EPT) investment area is the newest area within the In-Space Propulsion Technology (ISPT) Project and strives to bridge technologies in the lower Technology Readiness Level (TRL) range (2 to 3) to the mid TRL range (4 to 6). A prioritization process, the Integrated In-Space Transportation Planning (IISTP), was developed and applied in FY01 to establish initial program priorities. The EPT investment area emerged for technologies that scored well in the IISTP but had a low technical maturity level. One particular technology, the Momentum-eXchange Electrodynamic-Reboost (MXER) tether, scored extraordinarily high and had broad applicability in the IISTP. However, its technical maturity was too low for ranking alongside technologies like the ion engine or aerocapture. Thus MXER tethers assumed top priority at EPT startup in FY03 with an aggressive schedule and adequate budget. It was originally envisioned that future technologies would enter the ISP portfolio through EPT, and EPT developed an EPT/ISP Entrance Process for future candidate ISP technologies. EPT has funded the following secondary, candidate ISP technologies at a low level: ultra-lightweight solar sails, general space/near-earth tether development, electrodynamic tether development, advanced electric propulsion, and in-space mechanism development. However, the scope of the ISPT program has focused over time to more closely match SMD needs and technology advancement successes. As a result, the funding for MXER and other EPT technologies is not currently available. Consequently, the MXER tether tasks and other EPT tasks were expected to be phased out by November 2006. Presentation slides are included that provide activity overviews for the aerocapture technology and emerging propulsion technology projects.

  19. 'All singing from the same hymn sheet': healthcare professionals' perceptions of developing patient education material about the cardiovascular aspects of rheumatoid arthritis.

    PubMed

    John, Holly; Hale, Elizabeth D; Treharne, Gareth J; Carroll, Douglas; Kitas, George D

    2009-12-01

    Cardiovascular disease (CVD) is the leading cause of death in Britain, and its prevention is a priority. Rheumatoid arthritis (RA) patients have an increased risk of CVD, and management of modifiable classical risk factors requires a programme with patient education at its heart. Before a programme for RA patients is implemented, it is important to explore the perceptions of patients and relevant healthcare professionals and consider how these could influence the subsequent content, timing and delivery of such education. Here, we assess healthcare professionals' perceptions. Qualitative focus group methodology was adopted. Four group meetings of healthcare professionals were held using a semi-structured interview schedule. The focus group transcripts were analysed using interpretative phenomenological analysis. Three superordinate themes emerged: professional determinations about people with RA, including their perceptions about patients' priorities and motivations; communication about CVD risk, including what should be communicated, how, to whom and when; and responsibility for CVD management, referring to patients and the healthcare community. Although healthcare professionals agree that it is important to convey the increased CVD risk to patients with RA, there is concern they may be less proactive in promoting risk management strategies. There was uncertainty about the best time to discuss CVD with RA patients. Maintaining a close relationship between primary and secondary care was thought to be important, with all healthcare professionals 'singing from the same hymn sheet'. These findings can inform the development of novel education material to fulfil a currently unmet clinical need. Copyright (c) 2009 John Wiley & Sons, Ltd.

  20. Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Hobbs, Chris; Plotkin, Ken; Pilkey, Debbie

    2009-01-01

    Lift-off acoustic environments generated by the future Ares I launch vehicle are assessed by the NASA Marshall Space Flight Center (MSFC) acoustics team using several prediction tools. This acoustic environment is directly caused by the Ares I First Stage booster, powered by the five-segment Reusable Solid Rocket Motor (RSRMV). The RSRMV is a larger-thrust derivative design from the currently used Space Shuttle solid rocket motor, the Reusable Solid Rocket Motor (RSRM). Lift-off acoustics is an integral part of the composite launch vibration environment affecting the Ares launch vehicle and must be assessed to help generate hardware qualification levels and ensure structural integrity of the vehicle during launch and lift-off. Available prediction tools that use free field noise source spectrums as a starting point for generation of lift-off acoustic environments are described in the monograph NASA SP-8072: "Acoustic Loads Generated by the Propulsion System." This monograph uses a reference database for free field noise source spectrums which consist of subscale rocket motor firings, oriented in horizontal static configurations. The phrase "subscale" is appropriate, since the thrust levels of rockets in the reference database are orders of magnitude lower than the current design thrust for the Ares launch family. Thus, extrapolation is needed to extend the various reference curves to match Ares-scale acoustic levels. This extrapolation process adds uncertainty to the acoustic environment predictions. As the Ares launch vehicle design schedule progresses, it is important to take every opportunity to lower prediction uncertainty and subsequently increase prediction accuracy. Never before in NASA's history has plume acoustics been measured for large scale solid rocket motors. Approximately twice a year, the RSRM prime vendor, ATK Launch Systems, static fires an assembled RSRM motor in a horizontal configuration at their test facility in Utah. The remaining RSRM static firings will take place on elevated terrain, with the nozzle exit plume being mostly undeflected and the landscape allowing placement of microphones within direct line of sight to the exhaust plume. These measurements will help assess the current extrapolation process by direct comparison between subscale and full scale solid rocket motor data.
