A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints
NASA Astrophysics Data System (ADS)
Hazarika, Durlav; Das, Ranjay
2018-04-01
This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. Generator rescheduling for such a system is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zones having surplus or deficient generation, with proper spinning reserve, using the coordination equation. The power exchange required for deficit zones and zones without generation is estimated from each zone's load demand and generation. Incremental transmission loss formulas are then derived for the transmission lines participating in power transfer among the zones. Using these incremental transmission loss expressions in the coordination equation, the optimal generation schedule for the zonal exchange is determined. Simulations on the IEEE 118-bus test system examine the applicability and validity of the method.
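The coordination equation mentioned above is the classical equal incremental-cost condition: at the optimum, every unit not at a limit operates at the same marginal cost λ. A minimal loss-free sketch (not the authors' implementation; the quadratic cost coefficients and limits below are illustrative) solves for λ by bisection:

```python
def dispatch(units, demand, iters=60):
    """Equal incremental-cost dispatch, losses ignored.
    units: list of (b, c, pmin, pmax) for cost a + b*P + c*P**2,
    so marginal cost is b + 2*c*P and P(lambda) = (lambda - b)/(2c)."""
    lo = min(b + 2 * c * pmin for b, c, pmin, pmax in units)
    hi = max(b + 2 * c * pmax for b, c, pmin, pmax in units)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        # each unit follows lambda, clamped to its limits
        total = sum(min(max((lam - b) / (2 * c), pmin), pmax)
                    for b, c, pmin, pmax in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    return [min(max((lam - b) / (2 * c), pmin), pmax)
            for b, c, pmin, pmax in units]
```

Bisection works because total output is monotone in λ; the zonal-exchange step of the paper further weights each unit's marginal cost by an incremental-loss penalty factor, which this sketch omits.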
Multi-Satellite Observation Scheduling for Large Area Disaster Emergency Response
NASA Astrophysics Data System (ADS)
Niu, X. N.; Tang, H.; Wu, L. X.
2018-04-01
Generating an optimal imaging plan plays a key role in coordinating multiple satellites to monitor a disaster area. In this paper, to generate imaging plans dynamically as disaster relief proceeds, we propose a dynamic satellite task scheduling method for large-area disaster response. First, an initial robust scheduling scheme is generated by a robust satellite scheduling model in which both the profit and the robustness of the schedule are maximized simultaneously. Then, a multi-objective optimization model is used to obtain a series of decomposing schemes. Based on the initial imaging plan, we propose a hybrid optimization algorithm named HA_NSGA-II to allocate the decomposition results and thus obtain an adjusted imaging schedule. A real disaster scenario, the 2008 Wenchuan earthquake, is revisited in terms of rapid response using satellite resources and is used to compare the performance of the proposed method with state-of-the-art approaches. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images for disaster response in a more timely and efficient manner.
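The abstract does not specify HA_NSGA-II's internals, but the non-dominated (Pareto) filtering at the heart of any NSGA-II variant can be sketched as follows (illustrative only; assumes all objectives, e.g. schedule profit and robustness, are to be maximized):

```python
def pareto_front(points):
    """Return indices of non-dominated points, all objectives maximized.
    q dominates p if q is >= p in every objective and > in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p))) and
            any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

In NSGA-II proper, repeated application of this test (plus crowding-distance sorting) ranks the whole population; the sketch keeps only the first front.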
NASA Astrophysics Data System (ADS)
Nagata, Takeshi; Tao, Yasuhiro; Utatani, Masahiro; Sasaki, Hiroshi; Fujita, Hideki
This paper proposes a multi-agent approach to maintenance scheduling in restructured power systems. The restructuring of the electric power industry has resulted in market-based approaches for unbundling a multitude of services provided by self-interested entities such as power generating companies (GENCOs), transmission providers (TRANSCOs) and distribution companies (DISCOs). The Independent System Operator (ISO) is responsible for the security of system operation. The schedules submitted to the ISO by GENCOs and TRANSCOs should satisfy security and reliability constraints. The proposed method consists of several GENCO Agents (GAGs), TRANSCO Agents (TAGs) and an ISO Agent (IAG). The IAG's role in maintenance scheduling is limited to ensuring that the submitted schedules do not cause transmission congestion or endanger system reliability. The simulation results show that the proposed multi-agent approach can coordinate generation and transmission maintenance schedules.
NASA Astrophysics Data System (ADS)
Nejad, Hossein Tehrani Nik; Sugimura, Nobuhiro; Iwamura, Koji; Tanimizu, Yoshitaka
Process planning and scheduling are important manufacturing planning activities that deal with resource utilization and the time span of manufacturing operations. Process plans and schedules generated in the planning phase often must be modified in the execution phase because of disturbances in the manufacturing system. This paper deals with a multi-agent architecture for an integrated, dynamic process planning and scheduling system for multiple jobs. A negotiation protocol is discussed that generates the process plans and schedules of the manufacturing resources and the individual jobs dynamically and incrementally, based on alternative manufacturing processes. The alternative manufacturing processes are represented by the process plan networks discussed in our previous paper, and suitable process plans and schedules are searched for and generated to cope with both the dynamic status and the disturbances of the manufacturing system. We combine heuristic search of the process plan networks with the negotiation protocols in order to generate suitable process plans and schedules in a dynamic manufacturing environment. Simulation software has been developed to carry out case studies aimed at verifying the performance of the proposed multi-agent architecture.
Multi-time scale energy management of wind farms based on comprehensive evaluation technology
NASA Astrophysics Data System (ADS)
Xu, Y. P.; Huang, Y. H.; Liu, Z. J.; Wang, Y. F.; Li, Z. Y.; Guo, L.
2017-11-01
A novel energy management method for wind farms is proposed in this paper. First, a comprehensive evaluation system is proposed to quantify the economic properties of each wind farm, making the energy management more economical and reasonable. Then, a combined multi-time-scale scheduling method is proposed to realize the energy management. The day-ahead schedule optimizes the unit commitment of thermal power generators. The intraday schedule optimizes the power generation plan for all thermal generating units, hydroelectric generating sets, and wind power plants. Finally, the generation plan is revised in a timely manner by the on-line schedule. The paper concludes with simulations conducted on a real provincial integrated energy system in northeast China; the results validate the proposed model and the corresponding solution algorithms.
NASA Astrophysics Data System (ADS)
Bolon, Kevin M.
The lack of multi-day data for household travel and vehicle capability requirements is an impediment to evaluations of energy savings strategies, since (1) travel requirements vary from day-to-day, and (2) energy-saving transportation options often have reduced capability. This work demonstrates a survey methodology and modeling system for evaluating the energy-savings potential of household travel, considering multi-day travel requirements and capability constraints imposed by the available transportation resources. A stochastic scheduling model is introduced---the multi-day Household Activity Schedule Estimator (mPHASE)---which generates synthetic daily schedules based on "fuzzy" descriptions of activity characteristics using a finite-element representation of activity flexibility, coordination among household members, and scheduling conflict resolution. Results of a thirty-household pilot study are presented in which responses to an interactive computer assisted personal interview were used as inputs to the mPHASE model in order to illustrate the feasibility of generating complex, realistic multi-day household schedules. Study vehicles were equipped with digital cameras and GPS data acquisition equipment to validate the model results. The synthetically generated schedules captured an average of 60 percent of household travel distance, and exhibited many of the characteristics of complex household travel, including day-to-day travel variation, and schedule coordination among household members. Future advances in the methodology may improve the model results, such as encouraging more detailed and accurate responses by providing a selection of generated schedules during the interview. Finally, the Constraints-based Transportation Resource Assignment Model (CTRAM) is introduced. Using an enumerative optimization approach, CTRAM determines the energy-minimizing vehicle-to-trip assignment decisions, considering trip schedules, occupancy, and vehicle capability. 
CTRAM is designed to accept either actual or synthetic schedules. Applying the optimization model to the 2001 and 2009 National Household Travel Survey data shows that U.S. households can reduce energy use by 10 percent, on average, by modifying the assignment of existing vehicles to trips. Households in 2009 showed a higher tendency to assign vehicles optimally than in 2001, and multi-vehicle households with diverse fleets have greater savings potential, indicating that fleet-modification strategies may be effective, particularly under higher energy prices.
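CTRAM's enumerative approach can be illustrated with a much-simplified sketch that exhaustively tries every vehicle-to-trip assignment and keeps the feasible one with the lowest energy. The per-km energy rates and seat capacities below are invented for illustration, and the real model also accounts for trip timing conflicts, which this sketch omits:

```python
from itertools import product

def best_assignment(trips, vehicles):
    """Exhaustive vehicle-to-trip assignment minimizing total energy.
    trips: list of (distance_km, passengers)
    vehicles: list of (kwh_per_km, seats)"""
    best, best_cost = None, float("inf")
    for assign in product(range(len(vehicles)), repeat=len(trips)):
        cost, feasible = 0.0, True
        for (dist, pax), v in zip(trips, assign):
            rate, seats = vehicles[v]
            if pax > seats:          # capability constraint
                feasible = False
                break
            cost += dist * rate      # energy for this trip
        if feasible and cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

Enumeration is exponential in the number of trips, which is workable for a single household's daily schedule but motivates smarter search at larger scales.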
Flexible Coordination in Resource-Constrained Domains
1994-07-01
Experiments (TIEs) with planning technologies developed at both BBN (FMERG) and SRI (SOCAP). We have also exported scheduling support capabilities provided by ... SRI's SOCAP course of action (COA) plan generator. * Development and demonstration of distributed, multi-level deployment scheduling - Through analysis ... scheduler was adapted for integration with the SOCAP planning system to provide feedback on transportation feasibility during generation of the
APGEN Scheduling: 15 Years of Experience in Planning Automation
NASA Technical Reports Server (NTRS)
Maldague, Pierre F.; Wissler, Steve; Lenda, Matthew; Finnerty, Daniel
2014-01-01
In this paper, we discuss the scheduling capability of APGEN (Activity Plan Generator), a multi-mission planning application that is part of the NASA AMMOS (Advanced Multi- Mission Operations System), and how APGEN scheduling evolved over its applications to specific Space Missions. Our analysis identifies two major reasons for the successful application of APGEN scheduling to real problems: an expressive DSL (Domain-Specific Language) for formulating scheduling algorithms, and a well-defined process for enlisting the help of auxiliary modeling tools in providing high-fidelity, system-level simulations of the combined spacecraft and ground support system.
Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode
NASA Astrophysics Data System (ADS)
Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.
2012-12-01
Nowadays, using satellites to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy place ever greater demands on space technology because of satellites' wide coverage, timeliness, and freedom from area and national limits. At the same time, because of the wide use of satellites, sensors, relay satellites, and ground receiving stations, ground control systems face great challenges. Therefore, how to extract the best value from satellite resources and make full use of them becomes an important problem for ground control systems. Satellite scheduling distributes resources to tasks without conflict so as to complete as many tasks as possible and meet users' requirements, subject to the constraints of satellites, sensors, and ground receiving stations. By the size of the target, tasks can be divided into point tasks and area tasks; this paper considers only point targets. The paper first describes the satellite scheduling problem and briefly introduces the theory of satellite scheduling. We also analyze the resource and task constraints in satellite scheduling, and briefly describe the input and output flows of the scheduling process. On the basis of these analyses, we put forward a scheduling model, a multi-variable optimization model, for multi-satellite point-target tasks in swinging mode. In this model, the scheduling problem is transformed into a parametric optimization problem; the parameters to be optimized are the swinging angles of the time windows.
In view of efficiency and accuracy, several important problems in satellite scheduling are analyzed and discussed, including the angle relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows. Several strategies to improve the efficiency of the model are also put forward. To solve the model, we introduce the concept of an activity sequence map, which separates the choice of an activity from the choice of its start time. We also introduce three neighborhood operators to search the solution space, and use the front and back movement remaining times to analyze the feasibility of generating solutions from these operators. Finally, an algorithm based on a genetic algorithm is proposed to solve the model; population initialization, crossover, mutation, individual evaluation, collision-decrease, selection, and collision-elimination operators are designed. The scheduling results and a simulation of a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15, and 25 degrees. The results show that the model and algorithm are more effective than those without swinging mode.
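The paper's genetic algorithm involves several custom operators; a stripped-down sketch of the general idea, evolving one swing angle per time window against a toy fitness, is given below. The target angles, operator settings, and fitness function are all illustrative assumptions, not the paper's model:

```python
import random

def ga_optimize(fitness, n_vars, bounds, pop=30, gens=60, seed=1):
    """Toy elitist GA over real-valued swing angles: tournament selection,
    one-point crossover, Gaussian mutation of a single gene."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(P, 2)          # binary tournament
            return a if fitness(a) >= fitness(b) else b
        Q = []
        while len(Q) < pop:
            a, b = pick()[:], pick()[:]
            cut = rng.randrange(1, n_vars) if n_vars > 1 else 0
            child = a[:cut] + b[cut:]        # one-point crossover
            i = rng.randrange(n_vars)        # mutate one gene
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 2.0)))
            Q.append(child)
        P = sorted(P + Q, key=fitness, reverse=True)[:pop]  # elitism
    return max(P, key=fitness)

# toy fitness: prefer swing angles close to each target's required angle
targets = [5.0, -10.0, 20.0]
f = lambda x: -sum((a - t) ** 2 for a, t in zip(x, targets))
best = ga_optimize(f, 3, (-25.0, 25.0))
```

The paper's collision-decrease and collision-elimination operators would act on the activity sequence map rather than on a plain angle vector as here.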
Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy
NASA Astrophysics Data System (ADS)
Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping
2018-01-01
Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time-scale robust scheduling method is proposed to solve this problem. It reduces the impact of clean-energy prediction errors on the power grid by coordinating, across multiple time scales (day-ahead, intraday, and real time), the dispatched output of various power sources such as hydropower, thermal power, wind power, and gas power. The method adopts robust scheduling to ensure the robustness of the scheduling scheme. By pricing wind curtailment and load shedding, it converts robustness into a risk cost and selects the uncertainty set that minimizes the overall cost. The validity of the method is verified by simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Jin; Zhang, Yingchen
2017-02-02
With increasing penetrations of wind generation on electric grids, wind power plants (WPPs) are encouraged to provide frequency ancillary services (FAS); however, it is a challenge to ensure that variable wind generation can reliably provide these services. This paper proposes using a battery energy storage system (BESS) to ensure the WPPs' commitment to FAS, while also reducing the BESS's size and extending its lifetime. A state-machine-based coordinated control strategy is developed to utilize a BESS to support the obliged FAS of a WPP, including both primary and secondary frequency control. The method takes into account the operational constraints of the WPP (e.g., real-time reserve) and of the BESS (e.g., state of charge [SOC] and charge/discharge rate) to provide reliable FAS. Meanwhile, an adaptive SOC-feedback control is designed to keep the SOC as close as possible to its optimal value, thereby reducing the size and extending the lifetime of the BESS. The effectiveness of the control strategy is validated on an innovative multi-area interconnected power system simulation platform that mimics realistic power system operation and control by simulating real-time economic dispatch, regulating-reserve scheduling, multi-area automatic generation control, and generators' dynamic response.
NASA Astrophysics Data System (ADS)
Ono, Y.; Murakami, H.; Kobayashi, H.; Nasahara, K. N.; Kajiwara, K.; Honda, Y.
2014-12-01
Leaf Area Index (LAI) is defined as the one-sided green leaf area per unit ground surface area. Global LAI products such as MOD15 (Terra&Aqua/MODIS) and CYCLOPES (SPOT/VEGETATION) are used in many global terrestrial carbon models. The Japan Aerospace eXploration Agency (JAXA) is planning to launch GCOM-C (Global Change Observation Mission-Climate), which carries SGLI (Second-generation GLobal Imager), in Japanese Fiscal Year 2017. SGLI features 17 channels from the near ultraviolet to the thermal infrared, 250-m spatial resolution, polarization, and multi-angle (nadir and ±45-deg. along-track slant) observation. In the GCOM-C/SGLI land science team, LAI is scheduled to be generated from GCOM-C/SGLI observation data as a standard product (daily, 250 m). In existing algorithms, LAI is estimated by inverting vegetation radiative transfer models (RTMs) using multi-spectral, mono-angle observation data. There, the understory layer in the vegetation RTMs is assumed to be plane-parallel (green leaves + soil) with an arbitrary understory LAI. However, an actual understory consists of various elements such as green leaves, dead leaves, branches, soil, and snow; if the understory in the RTM differs from reality, the estimated LAI will be in error. This report describes an algorithm that estimates LAI while accounting for the influence of the understory, using GCOM-C/SGLI multi-spectral and multi-angle observation data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Djilali, Ned
2016-12-23
This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus while satisfying transmission, generation, and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection, apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. It is demonstrated on a system that represents the North America Western Interconnection for the planning year 2024. Simulation results indicate that effective use of interties reduces system operating cost substantially. Excluding demand response, the unconstrained and constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, compared with the standalone case in which each control area relies only on its local supply resources. These savings equal 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B while increasing the annual surplus by 9.32B relative to the standalone case.
NASA Astrophysics Data System (ADS)
Sahelgozin, M.; Alimohammadi, A.
2015-12-01
Increasing distances between residences and services lead to a large number of daily commutes in urban areas. Developing subway systems has drawn the attention of transportation managers as a response to this huge travel demand. In developing subway infrastructure, producing a temporal schedule for trains is an important task, because an appropriately designed timetable decreases total passenger travel time, total operation cost, and the energy consumption of trains. Since these objectives are not positively correlated, subway scheduling is a multi-criteria optimization problem, and proposing a proper solution for it has always been a controversial issue. On the other hand, studying a phenomenon requires a summarized representation of the real world, known as a model. In this study, we attempt to model the temporal scheduling of urban trains for use in Multi-Criteria Subway Schedule Optimization (MCSSO) problems. First, a conceptual framework is presented for MCSSO. Then, an agent-based simulation environment is implemented to perform a sensitivity analysis (SA), which is used to extract the interrelations between the framework components; these interrelations are then taken into account in constructing the proposed model. To evaluate the model's performance on MCSSO problems, Tehran subway line no. 1 is considered as the case study. The results show that the model was able to generate an acceptable distribution of Pareto-optimal solutions applicable to real situations when solving an MCSSO problem, and that it represented the operation of subway systems accurately.
Performance and policy dimensions in internet routing
NASA Technical Reports Server (NTRS)
Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit
1995-01-01
The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high-speed crossbar switch and an adaptive, distributed TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches, one of which is attached to each node. To send a single-burst or multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols, and experiments conducted is given in various published reports and papers. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krogh, B.; Chow, J.H.; Javid, H.S.
1983-05-01
A multi-stage formulation of the problem of scheduling generation, load shedding and short term transmission capacity for the alleviation of a viability emergency is presented. The formulation includes generation rate of change constraints, a linear network solution, and a model of the short term thermal overload capacity of transmission lines. The concept of rotating transmission line overloads for emergency state control is developed. The ideas are illustrated by a numerical example.
Automatic Generation of Heuristics for Scheduling
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.
1997-01-01
This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real-world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
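GenH's central idea, hill-climbing over multi-attribute heuristics evaluated by the schedules they induce, can be sketched on a toy one-machine domain. Here a candidate heuristic is a weight vector over two task attributes (duration, due date), and its score is the total tardiness of the dispatch order it produces; the domain and attributes are illustrative assumptions, not the telescope-scheduling setup of the paper:

```python
def schedule_quality(weights, tasks):
    """Order tasks by a weighted multi-attribute priority, then return
    the negated total tardiness of the resulting schedule (higher is better).
    tasks: list of (duration, due_date)."""
    w_dur, w_due = weights
    order = sorted(tasks, key=lambda t: w_dur * t[0] + w_due * t[1])
    now, tardy = 0, 0
    for dur, due in order:
        now += dur
        tardy += max(0, now - due)
    return -tardy

def genh_hill_climb(tasks, start=(0.0, 0.0), step=1.0, rounds=50):
    """GenH-style search in heuristic space: perturb each weight by +/-step
    and keep any neighbour whose induced schedule is better."""
    best, best_q = start, schedule_quality(start, tasks)
    for _ in range(rounds):
        improved = False
        for i in (0, 1):
            for d in (step, -step):
                cand = list(best)
                cand[i] += d
                q = schedule_quality(tuple(cand), tasks)
                if q > best_q:
                    best, best_q, improved = tuple(cand), q, True
        if not improved:
            break           # local optimum in heuristic space
    return best, -best_q    # (weights, total tardiness)
```

The key GenH property survives the simplification: the search object is the heuristic itself, and each evaluation requires running the scheduler it defines.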
Scheduling optimization of design stream line for production research and development projects
NASA Astrophysics Data System (ADS)
Liu, Qinming; Geng, Xiuli; Dong, Ming; Lv, Wenyuan; Ye, Chunming
2017-05-01
In a development project, efficient design stream line scheduling is difficult and important owing to large design imprecision and the differences in the skills and skill levels of employees. The relative skill levels of employees are denoted as fuzzy numbers. Multiple execution modes are generated by scheduling different employees for design tasks. An optimization model of a design stream line scheduling problem is proposed with the constraints of multiple executive modes, multi-skilled employees and precedence. The model considers the parallel design of multiple projects, different skills of employees, flexible multi-skilled employees and resource constraints. The objective function is to minimize the duration and tardiness of the project. Moreover, a two-dimensional particle swarm algorithm is used to find the optimal solution. To illustrate the validity of the proposed method, a case is examined in this article, and the results support the feasibility and effectiveness of the proposed model and algorithm.
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses the simultaneous scheduling of machines, AGVs, and tools in a multi-machine Flexible Manufacturing System (FMS) where machines may share tools, considering the transfer times of jobs and tools between machines, to generate optimal sequences that minimize makespan. FMS performance is expected to improve through effective utilization of its resources and proper integration and synchronization of their scheduling. The Symbiotic Organisms Search (SOS) algorithm is a proven and potent alternative for optimization problems such as scheduling. The proposed SOS algorithm is first tested, with makespan as the objective, on 22 job sets for the scheduling of machines and shared tools without transfer times of jobs and tools, and the results are compared with those of existing methods; SOS outperforms them. The same SOS algorithm is then used for the simultaneous scheduling of machines, AGVs, and tools with tool sharing and transfer times, to determine the optimal sequences that minimize makespan.
The Traffic Management Advisor
NASA Technical Reports Server (NTRS)
Nedell, William; Erzberger, Heinz; Neuman, Frank
1990-01-01
The Traffic Management Advisor (TMA) comprises algorithms, a graphical interface, and interactive tools for controlling the flow of air traffic into the terminal area. Its primary algorithm is a real-time scheduler that generates efficient landing sequences and landing times for arrivals within about 200 n.m. of touchdown. A unique feature of the TMA is its graphical interface, which allows the traffic manager to modify the computer-generated schedules for specific aircraft while the automatic scheduler continues generating schedules for all other aircraft. The graphical interface also provides convenient methods for monitoring the traffic flow and changing scheduling parameters during real-time operation.
Artificial Bee Colony Optimization for Short-Term Hydrothermal Scheduling
NASA Astrophysics Data System (ADS)
Basu, M.
2014-12-01
Artificial bee colony optimization is applied to determine the optimal hourly schedule of power generation in a hydrothermal system. Artificial bee colony optimization is a swarm-based algorithm inspired by the food-foraging behavior of honey bees. The problem considered involves a multi-reservoir cascaded hydroelectric system with prohibited operating zones and thermal units with valve-point loading. The ramp-rate limits of the thermal generators are taken into consideration, and transmission losses are accounted for through loss coefficients. The algorithm is tested on two such hydrothermal test systems, and the results are compared with those of differential evolution, evolutionary programming, and particle swarm optimization. The numerical results show that the proposed artificial bee colony approach provides better solutions.
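A minimal version of the artificial bee colony loop, with employed-bee and onlooker-bee neighbourhood moves and scout re-initialization after stagnation, is sketched below on a toy quadratic cost. The hydrothermal constraints of the paper (reservoir coupling, prohibited zones, ramp rates) are omitted; all parameter values are illustrative:

```python
import random

def abc_minimize(f, dim, bounds, colony=20, limit=15, cycles=100, seed=7):
    """Minimal artificial bee colony for continuous minimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(colony)]
    fit = [f(x) for x in X]
    trials = [0] * colony
    for _ in range(cycles):
        for phase in ("employed", "onlooker"):
            for i in range(colony):
                if phase == "onlooker":
                    # onlookers prefer better (lower-cost) food sources
                    i = min(rng.sample(range(colony), 3), key=lambda j: fit[j])
                k = rng.choice([j for j in range(colony) if j != i])
                d = rng.randrange(dim)
                v = X[i][:]
                v[d] = min(hi, max(lo,
                           v[d] + rng.uniform(-1, 1) * (v[d] - X[k][d])))
                fv = f(v)
                if fv < fit[i]:               # greedy replacement
                    X[i], fit[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1
        for i in range(colony):
            if trials[i] > limit:             # scout abandons stale source
                X[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(X[i]), 0
    b = min(range(colony), key=lambda j: fit[j])
    return X[b], fit[b]

# toy "generation cost" with optimum at (3, -2)
cost = lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2
best, val = abc_minimize(cost, 2, (-10.0, 10.0))
```

In the paper's setting, each food source would encode a full hourly generation schedule and the cost function would include the fuel cost plus constraint penalties.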
Extended precedence preservative crossover for job shop scheduling problems
NASA Astrophysics Data System (ADS)
Ong, Chung Sin; Moin, Noor Hasnah; Omar, Mohd
2013-04-01
The job shop scheduling problem (JSSP) is one of the most difficult combinatorial scheduling problems. A wide range of genetic algorithms based on two-parent crossover have been applied to the problem, but multi-parent crossover (involving more than two parents) for the JSSP is still lacking. This paper proposes the extended precedence preservative crossover (EPPX), which uses multiple parents for recombination in genetic algorithms. EPPX is a variation of the precedence preservative crossover (PPX), one of the crossovers that performs well on the JSSP. EPPX uses a vector to determine which parent's gene is selected during recombination for the next generation. Repairing (legalizing) offspring is unnecessary because the JSSP representation is encoded as a permutation with repetition, which guarantees chromosome feasibility. Simulations are performed on a set of benchmarks from the literature, and the results are compared to confirm the viability of multi-parent recombination for solving the JSSP.
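The two-parent PPX that EPPX extends can be sketched compactly: a random vector selects a donor parent at each step, the donor's leftmost remaining gene is appended to the child, and that gene occurrence is deleted from both parents, so permutation-with-repetition chromosomes stay feasible without repair. A minimal illustration (EPPX generalizes the donor choice to more than two parents):

```python
import random

def ppx(parent1, parent2, rng):
    """Two-parent precedence preservative crossover (PPX) sketch for
    job-shop chromosomes encoded as permutations with repetition."""
    p = [list(parent1), list(parent2)]    # working copies; parents untouched
    child = []
    for _ in range(len(parent1)):
        src = rng.randrange(2)            # random vector: choose donor parent
        gene = p[src][0]                  # take its leftmost remaining gene
        child.append(gene)
        for q in p:                       # delete first occurrence from both
            q.remove(gene)
    return child
```

Because each job number appears the same number of times in both parents, the child always carries the same multiset of operations, hence feasibility by construction.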
Future applications of artificial intelligence to Mission Control Centers
NASA Technical Reports Server (NTRS)
Friedland, Peter
1991-01-01
Future applications of artificial intelligence to Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: basic objectives of the NASA-wide AI program; in-house research program; constraint-based scheduling; learning and performance improvement for scheduling; the GEMPLAN multi-agent planner; planning, scheduling, and control; Bayesian learning; efficient learning algorithms; ICARUS (an integrated architecture for learning); design knowledge acquisition and retention; computer-integrated documentation; and some speculation on future applications.
A Photo Album of Earth Scheduling Landsat 7 Mission Daily Activities
NASA Technical Reports Server (NTRS)
Potter, William; Gasch, John; Bauer, Cynthia
1998-01-01
Landsat7 is a member of a new generation of Earth observation satellites. Landsat7 will carry on the mission of the aging Landsat 5 spacecraft by acquiring high resolution, multi-spectral images of the Earth surface for strategic, environmental, commercial, agricultural and civil analysis and research. One of the primary mission goals of Landsat7 is to accumulate and seasonally refresh an archive of global images with full coverage of Earth's landmass, less the central portion of Antarctica. This archive will enable further research into seasonal, annual and long-range trending analysis in such diverse research areas as crop yields, deforestation, population growth, and pollution control, to name just a few. A secondary goal of Landsat7 is to fulfill imaging requests from our international partners in the mission. Landsat7 will transmit raw image data from the spacecraft to 25 ground stations in 20 subscribing countries. Whereas earlier Landsat missions were scheduled manually (as are the majority of current low-orbit satellite missions), the task of manually planning and scheduling Landsat7 mission activities would be overwhelmingly complex when considering the large volume of image requests, the limited resources available, spacecraft instrument limitations, and the limited ground image processing capacity, not to mention avoidance of foul weather systems. The Landsat7 Mission Operation Center (MOC) includes an image scheduler subsystem that is designed to automate the majority of mission planning and scheduling, including selection of the images to be acquired, managing the recording and playback of the images by the spacecraft, scheduling ground station contacts for downlink of images, and generating the spacecraft commands for controlling the imager, recorder, transmitters and antennas. The image scheduler subsystem autonomously generates 90% of the spacecraft commanding with minimal manual intervention. 
The image scheduler produces a conflict-free schedule for acquiring images of the "best" 250 scenes daily for refreshing the global archive. It then equitably distributes the remaining resources for acquiring up to 430 scenes to satisfy requests by international subscribers. The image scheduler selects candidate scenes based on priority and age of the requests, and predicted cloud cover and sun angle at each scene. It also selects these scenes to avoid instrument constraint violations and maximizes efficiency of resource usage by encouraging acquisition of scenes in clusters. Of particular interest to the mission planners, it produces the resulting schedule in a reasonable time, typically within 15 minutes.
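Scene selection of the kind described, scoring candidates by request priority and predicted cloud cover and taking the best within a daily capacity, can be sketched as a simple greedy ranking. The field names and scoring rule below are illustrative assumptions, not the actual Landsat 7 MOC scheduler:

```python
def select_scenes(scenes, capacity):
    """Greedy scene-selection sketch: score each candidate by request
    priority discounted by predicted cloud-cover fraction, then take the
    best scenes within the daily acquisition capacity."""
    ranked = sorted(scenes,
                    key=lambda s: s["priority"] * (1.0 - s["cloud"]),
                    reverse=True)
    return [s["id"] for s in ranked[:capacity]]
```

The real subsystem additionally handles recorder management, ground-station contacts, instrument constraints, and clustering incentives.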
Spatial analysis of fuel treatment options for chaparral on the Angeles national forest
G. Jones; J. Chew; R. Silverstein; C. Stalling; J. Sullivan; J. Troutwine; D. Weise; D. Garwood
2008-01-01
Spatial fuel treatment schedules were developed for the chaparral vegetation type on the Angeles National Forest using the Multi-resource Analysis and Geographic Information System (MAGIS). Schedules varied by the priority given to various wildland urban interface areas and the general forest, as well as by the number of acres treated per decade. The effectiveness of...
An expert system for scheduling requests for communications Links between TDRSS and ERBS
NASA Technical Reports Server (NTRS)
Mclean, David R.; Littlefield, Ronald G.; Beyer, David S.
1987-01-01
An ERBS-TDRSS Contact Planning System (ERBS-TDRSS CPS) is described which uses a graphics interface and the NASA Transportable Inference Engine. The procedure involves transferring the ERBS-TDRSS ground track orbit prediction data to the ERBS flight operations area, where the ERBS-TDRSS CPS automatically generates requests for TDRSS service. As requested events are rejected, alternative context-sensitive strategies are employed to generate new requests until a schedule is completed. A report generator builds schedule requests for separate ERBS-TDRSS contacts.
Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability
Liu, Guodong; Starke, Michael R.; Xiao, B.; ...
2017-01-13
To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, the probability of successful islanding (PSI), is developed, indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation that takes into account the probability distributions of the forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed-integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
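Under the simplifying assumption that the net forecast error (load minus renewables) after islanding is a single normal random variable, the probability of successful islanding reduces to the chance that the error falls within the up/down spinning reserve. The one-function sketch below illustrates only that idea; the paper instead builds a multi-interval MILP approximation over several error distributions:

```python
import math

def islanding_success_prob(reserve_up, reserve_dn, err_mean, err_std):
    """Sketch of successful-islanding probability assuming a single normal
    net-forecast-error: islanding succeeds if the error lies within the
    up reserve (positive side) and down reserve (negative side)."""
    def cdf(x):  # standard normal CDF shifted/scaled to the error distribution
        return 0.5 * (1.0 + math.erf((x - err_mean) / (err_std * math.sqrt(2.0))))
    return cdf(reserve_up) - cdf(-reserve_dn)
```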
Model for multi-stand management based on structural attributes of individual stands
G.W. Miller; J. Sullivan
1997-01-01
A growing interest in managing forest ecosystems calls for decision models that take into account attribute goals for large forest areas while continuing to recognize the individual stand as a basic unit of forest management. A dynamic, nonlinear forest management model is described that schedules silvicultural treatments for individual stands that are linked by multi-...
NASA Astrophysics Data System (ADS)
Nijland, Linda; Arentze, Theo; Timmermans, Harry
2014-01-01
Modeling multi-day planning has so far received scarce attention in activity-based transport demand modeling, although new dynamic activity-based approaches are now being developed. The frequency and inflexibility of planned activities and events in individuals' activity schedules indicate the importance of incorporating pre-planned activities in this new generation of dynamic travel demand models. Elaborating on and combining previous work on event-driven activity generation, the aim of this paper is to develop and illustrate an extension of a need-based model of activity generation that takes into account the possible influences of pre-planned activities and events. The paper describes the theory and shows the results of simulations of the extension. The simulation was conducted for six different activities, with parameter values consistent with an earlier estimation study. The results show that the model works well and that the influences of the parameters are consistent, logical, and clearly interpretable. These findings offer further evidence of the face and construct validity of the suggested modeling approach.
Integrated resource scheduling in a distributed scheduling environment
NASA Technical Reports Server (NTRS)
Zoch, David; Hall, Gardiner
1988-01-01
The Space Station era presents a highly-complex multi-mission planning and scheduling environment exercised over a highly distributed system. In order to automate the scheduling process, customers require a mechanism for communicating their scheduling requirements to NASA. A request language that a remotely-located customer can use to specify his scheduling requirements to a NASA scheduler, thus automating the customer-scheduler interface, is described. This notation, Flexible Envelope-Request Notation (FERN), allows the user to completely specify his scheduling requirements such as resource usage, temporal constraints, and scheduling preferences and options. The FERN also contains mechanisms for representing schedule and resource availability information, which are used in the inter-scheduler inconsistency resolution process. Additionally, a scheduler is described that can accept these requests, process them, generate schedules, and return schedule and resource availability information to the requester. The Request-Oriented Scheduling Engine (ROSE) was designed to function either as an independent scheduler or as a scheduling element in a network of schedulers. When used in a network of schedulers, each ROSE communicates schedule and resource usage information to other schedulers via the FERN notation, enabling inconsistencies to be resolved between schedulers. Individual ROSE schedules are created by viewing the problem as a constraint satisfaction problem with a heuristically guided search strategy.
A Systematic Multi-Time Scale Solution for Regional Power Grid Operation
NASA Astrophysics Data System (ADS)
Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.
2017-10-01
Many aspects need to be taken into consideration when making schedule plans for a regional grid. In this paper, a systematic multi-time scale solution for regional power grid operation considering large-scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. On the time-scale axis, the problem is discussed from month, week, day-ahead and within-day horizons to day-behind, and the system also covers multiple generator types including thermal units, hydro plants, wind turbines and pumped-storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been deployed in a provincial power grid in Central China, and the operation results further verify its effectiveness.
Improved NSGA model for multi objective operation scheduling and its evaluation
NASA Astrophysics Data System (ADS)
Li, Weining; Wang, Fuyu
2017-09-01
Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method using an improved NSGA algorithm is applied to shorten operation time, reduce operation cost, and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling, and the Pareto solution set is obtained through MATLAB simulation after standardized data processing. The optimal scheduling scheme is then selected using a combined entropy weight and TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
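The entropy-weight/TOPSIS selection step mentioned in the abstract can be sketched directly: the entropy of each criterion column yields objective weights, and TOPSIS ranks alternatives by relative closeness to the ideal solution. The implementation below is a generic illustration and may differ from the paper's exact normalization choices:

```python
import math

def entropy_topsis(matrix, benefit):
    """Entropy-weight TOPSIS sketch. Rows are alternatives, columns are
    criteria; benefit[j] is True for larger-is-better criteria.
    Returns a relative-closeness score per alternative (higher is better)."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise columns for TOPSIS distances.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0 for j in range(n)]
    r = [[row[j] / norms[j] for j in range(n)] for row in matrix]
    # Entropy weights from the column-proportion matrix.
    col_sum = [sum(row[j] for row in matrix) or 1.0 for j in range(n)]
    k = 1.0 / math.log(m)
    div = []
    for j in range(n):
        e = 0.0
        for row in matrix:
            p = row[j] / col_sum[j]
            if p > 0:
                e -= k * p * math.log(p)
        div.append(1.0 - e)                      # divergence degree
    w = [d / (sum(div) or 1.0) for d in div]
    v = [[w[j] * r[i][j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        dp = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        dm = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(dm / (dp + dm) if dp + dm else 0.0)
    return scores
```

Applied to a Pareto set, the highest-scoring row would be the recommended schedule.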
Nonlinear Pricing in Energy and Environmental Markets
NASA Astrophysics Data System (ADS)
Ito, Koichiro
This dissertation consists of three empirical studies on nonlinear pricing in energy and environmental markets. The first investigates how consumers respond to multi-tier nonlinear price schedules for residential electricity. Chapter 2 asks a similar research question for residential water pricing. Finally, I examine the effect of nonlinear financial rewards for energy conservation by applying a regression discontinuity design to a large-scale electricity rebate program that was implemented in California. Economic theory generally assumes that consumers respond to marginal prices when making economic decisions, but this assumption may not hold for complex price schedules. The chapter "Do Consumers Respond to Marginal or Average Price? Evidence from Nonlinear Electricity Pricing" provides empirical evidence that consumers respond to average price rather than marginal price when faced with nonlinear electricity price schedules. Nonlinear price schedules, such as progressive income tax rates and multi-tier electricity prices, complicate economic decisions by creating multiple marginal prices for the same good. Evidence from laboratory experiments suggests that consumers facing such price schedules may respond to average price as a heuristic. I empirically test this prediction using field data by exploiting price variation across a spatial discontinuity in electric utility service areas. The territory border of two electric utilities lies within several city boundaries in southern California. As a result, nearly identical households experience substantially different nonlinear electricity price schedules. Using monthly household-level panel data from 1999 to 2008, I find strong evidence that consumers respond to average price rather than marginal or expected marginal price. I show that even though this sub-optimizing behavior has a minimal impact on individual welfare, it can critically alter the policy implications of nonlinear pricing. 
The second chapter, "How Do Consumers Respond to Nonlinear Pricing? Evidence from Household Water Demand", provides similar empirical evidence in residential water markets. In this paper, I exploit variation in residential water pricing in Southern California to examine how consumers respond to nonlinear pricing. Contrary to the standard predictions for nonlinear budget sets, I find no bunching of consumers around the kink points of their nonlinear price schedule. I then explore whether consumers respond to marginal price, expected marginal price, or average price when faced with nonlinear water price schedules. The price schedule of one service area was changed from a linear to a nonlinear schedule; this policy change led to an increase in marginal price and expected marginal price but a decrease in average price for many consumers. Using household-level panel data, I find strong evidence that consumers respond to average price rather than marginal or expected marginal price. Estimates of the short-run price elasticity for the summer and winter months are -0.127 and -0.097, and estimates of the long-run price elasticity for the summer and winter months are -0.203 and -0.154. The final chapter, "The Effect of Cash Rewards on Energy Conservation: Evidence from a Regression Discontinuity Design", examines the effect of an alternative form of nonlinear pricing developed to provide an explicit financial incentive for conservation. In the summer of 2005, California residents received a 20% discount on their summer electricity bills if they reduced their electricity consumption by 20% relative to 2004. Nearly all households automatically participated in the program, but the eligibility rule required households to have started their electricity service by a certain cutoff date in 2004. This rule generated an essentially random assignment of the program among households that started their service right before and after the cutoff date.
Using household-level monthly billing records from the three largest California electric utilities, I find evidence that the rebate incentive reduced consumption by 5% to 10% in the areas where summer temperature is persistently high and income-level is relatively low, but the estimated treatment effects are nearly zero in other areas. To save 1 kWh of electricity, the program cost 2 cents in inland areas, 91 cents in coastal areas, and 14.8 cents for all service areas.
Residential Consumption Scheduling Based on Dynamic User Profiling
NASA Astrophysics Data System (ADS)
Mangiatordi, Federica; Pallotti, Emiliano; Del Vecchio, Paolo; Capodiferro, Licia
Deployment of household appliances and electric vehicles raises electricity demand in residential areas and increases the impact of a building's electrical load. Variations in electricity consumption across the day may affect both the design of generation facilities and the electricity bill, especially when dynamic pricing is applied. This paper focuses on an energy management system able to control day-ahead electricity demand in a residential area, taking into account both the variability of energy production costs and the profiling of users. Each user's behavior is dynamically profiled on the basis of the tasks performed during previous days and the tasks foreseen for the current day. Depending on the size and time flexibility of their tasks, home inhabitants are grouped into one of N energy profiles using a k-means algorithm. For a fixed energy generation cost, each energy profile is associated with a different hourly energy cost. The goal is to identify bad user profiles and charge them a higher bill; a bad profile is, for example, a user with many consumption tasks and low flexibility in task reallocation time. The proposed energy management system automatically schedules the tasks by solving a multi-objective optimization problem with an MPSO strategy. The goals, once bad user profiles are identified, are to reduce the peak-to-average ratio of energy demand and to minimize energy costs, promoting virtuous behaviors.
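Grouping inhabitants into one of N profiles with k-means, as the abstract describes, needs only a feature vector per user (e.g., daily task count and a flexibility score). Below is a minimal, dependency-free sketch; the features are illustrative assumptions, not the paper's actual profiling inputs:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means sketch for grouping users into k consumption profiles
    from feature vectors such as (daily task count, reallocation flexibility)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centers
```

Each resulting cluster could then be mapped to an hourly tariff, as in the paper's pricing scheme.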
NASA Astrophysics Data System (ADS)
Zhong, Shuya; Pantelous, Athanasios A.; Beer, Michael; Zhou, Jian
2018-05-01
Offshore wind farms are an emerging source of renewable energy and have shown tremendous potential in recent years. In this blooming area, a key challenge is that the preventive maintenance of offshore turbines should be scheduled reasonably to satisfy the power supply without failure. In this direction, two significant goals should be considered simultaneously as a trade-off: one is to maximise the system reliability and the other is to minimise the maintenance-related cost. Thus, a non-linear multi-objective programming model is proposed, including two newly defined objectives and thirteen families of constraints suitable for the preventive maintenance of offshore wind farms. In order to solve the model effectively, the non-dominated sorting genetic algorithm II (NSGA-II), designed especially for multi-objective optimisation, is utilised, and Pareto-optimal schedules are obtained to offer adequate support to decision-makers. Finally, an example is given to illustrate the performance of the devised model and algorithm and to explore the relationship between the two targets with the help of a contrast model.
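The core of NSGA-II, used here to trade reliability off against maintenance cost, is non-dominated sorting of the population into Pareto fronts. A minimal sketch for minimisation objectives follows (crowding distance and the genetic operators are omitted):

```python
def nondominated_fronts(objs):
    """Non-dominated sorting sketch (the core of NSGA-II) for minimisation
    objectives. Returns fronts as lists of indices; front 0 is Pareto-optimal."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    n = len(objs)
    dom_count = [0] * n                 # how many solutions dominate i
    dominated = [[] for _ in range(n)]  # solutions that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

For the wind-farm model, each objective vector would be (negated reliability, maintenance cost) per candidate schedule.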
NASA Astrophysics Data System (ADS)
Zhao, Wei-hu; Zhao, Jing; Zhao, Shang-hong; Li, Yong-jun; Wang, Xiang; Dong, Yi; Dong, Chen
2013-08-01
Optical satellite communication, with the advantages of broad bandwidth, large capacity and low power consumption, breaks the bottleneck of traditional microwave satellite communication. Forming a space-based information system with high-performance optical inter-satellite communication, and realizing global seamless coverage and mobile terminal access, are the necessary trends in the development of optical satellite communication. Considering the resources, missions and constraints of a data relay satellite optical communication system, a model of optical communication resource scheduling is established and a scheduling algorithm based on artificial-intelligence optimization is put forward. For multiple relay satellites, user satellites and optical antennas, and multiple missions with priority weights, resources are scheduled through two operations: "ascertain current mission scheduling time" and "refresh later mission time-window". The priority weight is used as a parameter of the fitness function, and the scheduling plan is optimized by a genetic algorithm. In a simulation scenario including 3 relay satellites with 6 optical antennas, 12 user satellites and 30 missions, the results reveal that the algorithm obtains satisfactory efficiency and performance, and that the scheduling model and optimization algorithm are suitable for the multi-relay-satellite, multi-user-satellite, multi-optical-antenna resource scheduling problem.
Transportation Improvement Program of the Mid-Ohio Regional Planning Commission
DOT National Transportation Integrated Search
1996-06-20
The MORPC Transportation Improvement program (TIP) is a staged, multi-year schedule of regionally significant transportation improvements in the Columbus area. The Federal-aid Highway Act of 1962 and the federal Urban Mass Transportation Act of 1964 ...
Multi-Satellite Scheduling Approach for Dynamic Areal Tasks Triggered by Emergent Disasters
NASA Astrophysics Data System (ADS)
Niu, X. N.; Zhai, X. J.; Tang, H.; Wu, L. X.
2016-06-01
The process of satellite mission scheduling, which plays a significant role in rapid response to emergent disasters such as earthquakes, allocates observation resources and execution times to a series of imaging tasks by maximizing one or more objectives while satisfying given constraints. In practice, the information obtained about the disaster situation changes dynamically, which in turn makes the imaging requirements of users dynamic. We propose a satellite scheduling model to address dynamic imaging tasks triggered by emergent disasters. The goal of the proposed model is to meet emergency response requirements by producing an imaging plan that acquires rapid and effective information of the affected area; in the model, the total reward of the schedule is maximized. To solve the model, we first present a dynamic segmenting algorithm to partition area targets. A dynamic heuristic algorithm embedding a greedy criterion is then designed to obtain the optimal solution. To evaluate the model, we conduct experimental simulations on the scene of the Wenchuan Earthquake. The results show that the simulated imaging plan can schedule satellites to observe a wider scope of the target area. We conclude that our satellite scheduling model can optimize the usage of satellite resources so as to obtain images in disaster response in a more timely and efficient manner.
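A greedy criterion of the kind embedded in the dynamic heuristic can be illustrated on a toy single-satellite case: consider candidate imaging tasks in decreasing reward and accept each whose observation window does not clash with already accepted tasks. The task fields below are illustrative assumptions, not the paper's model:

```python
def greedy_schedule(tasks):
    """Greedy-criterion sketch: visit imaging tasks in decreasing reward and
    accept each whose [start, end) window is disjoint from accepted ones."""
    chosen = []
    for t in sorted(tasks, key=lambda t: -t["reward"]):
        if all(t["end"] <= c["start"] or t["start"] >= c["end"] for c in chosen):
            chosen.append(t)
    return sum(c["reward"] for c in chosen), [c["id"] for c in chosen]
```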
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete, equally sized periods in which one of three possible actions must be planned for each component: maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that simultaneously minimizes the total cost and maximizes overall system reliability over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The solution approach yields Pareto-optimal solutions that provide good trade-offs between the total cost and the overall reliability of the system. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
Multi-trip vehicle routing and scheduling problem with time window in real life
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Chiew, Kang-Leng; Sze, Jeeu-Fong
2012-09-01
This paper studies a manpower scheduling problem with multiple maintenance operations and vehicle routing considerations. Service teams located at a common service centre are required to travel to different customer sites. All customers must be served within given time windows, which are known in advance. The scheduling process must take into consideration complex constraints such as a meal break during the team's shift, multiple travelling trips, synchronisation of service teams, and working shifts. The main objective of this study is to develop a heuristic that can generate high-quality solutions in a short time for large problem instances. A Two-stage Scheduling Heuristic is developed for different variants of the problem. Empirical results show that the proposed solution performs effectively and efficiently. In addition, the proposed approximation algorithm is very flexible and can be easily adapted to different scheduling environments and operational requirements.
Swarm satellite mission scheduling & planning using Hybrid Dynamic Mutation Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Zixuan; Guo, Jian; Gill, Eberhard
2017-08-01
Space missions have traditionally been controlled by operators from a mission control center. Given the increasing number of satellites in some space missions, generating a command list for multiple satellites can be time-consuming and inefficient. Developing multi-satellite, onboard mission scheduling and planning techniques is, therefore, a key research field for future space mission operations. In this paper, an improved Genetic Algorithm (GA) using a new mutation strategy is proposed as a mission scheduling algorithm. The new strategy, called Hybrid Dynamic Mutation (HDM), combines the advantages of both dynamic and adaptive mutation strategies, overcoming weaknesses such as early convergence and long computing time and making the standard GA more efficient and accurate on complex missions. HDM-GA shows excellent performance in solving both unconstrained and constrained test functions. Experiments using HDM-GA on a simulated multi-satellite mission scheduling problem demonstrate that the mission requirements on both computation time and success rate can be met. The results of a comparative test between HDM-GA and three other mutation strategies also show that HDM has outstanding performance in terms of speed and reliability.
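The abstract does not give HDM's exact formula, but a hybrid of a dynamic (generation-decaying) mutation rate and an adaptive (fitness-dependent) rate can be sketched as follows; treat the 50/50 blend and the rate bounds as illustrative assumptions rather than the paper's definition:

```python
def hybrid_mutation_rate(gen, max_gen, fit, avg_fit, best_fit,
                         p_max=0.25, p_min=0.01):
    """Illustrative hybrid mutation-rate rule (not the paper's exact HDM):
    a dynamic term decays the rate over generations, while an adaptive term
    (for a maximisation problem) mutates below-average individuals more
    aggressively and protects near-best individuals."""
    dynamic = p_max - (p_max - p_min) * gen / max_gen
    if best_fit == avg_fit:
        adaptive = p_min                      # degenerate population: be gentle
    elif fit >= avg_fit:
        # Scale from p_min (at the best fitness) up to p_max (at the average).
        adaptive = p_min + (p_max - p_min) * (best_fit - fit) / (best_fit - avg_fit)
    else:
        adaptive = p_max                      # below average: mutate strongly
    return 0.5 * dynamic + 0.5 * adaptive
```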
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation-time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation-time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) the ability of VMs to take arbitrary leaps in virtual time to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the trade-off between total execution (real) time and time-ordering accuracy. Experiments show that it is possible to get nearly perfect time-ordered execution at a slight cost in total run time relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1%, with almost the same run-time efficiency as the highly efficient non-simulation VM schedulers.
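The heart of a simulation-time-ordered scheduler is dispatching whichever virtual core has the smallest virtual clock, letting idle cores leap forward in virtual time. A minimal min-heap sketch of that policy follows; the per-core quantum and idle-leap values are illustrative, not the actual scheduler's parameters:

```python
import heapq

def time_ordered_run(vcores, horizon):
    """Simulation-time-ordered dispatch sketch: always run the virtual core
    with the smallest virtual clock. `vcores` maps a core id to a
    (quantum, idle_leap) pair; idle_leap models a leap in virtual time
    while the guest core idles. Returns the dispatch trace."""
    heap = [(0.0, cid) for cid in sorted(vcores)]
    heapq.heapify(heap)
    trace = []
    while heap and heap[0][0] < horizon:
        vt, cid = heapq.heappop(heap)      # least-advanced core runs next
        trace.append((vt, cid))
        quantum, idle_leap = vcores[cid]
        heapq.heappush(heap, (vt + quantum + idle_leap, cid))
    return trace
```

Because dispatch always picks the minimum virtual clock, the trace is non-decreasing in virtual time, i.e., perfectly time-ordered by construction.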
A comprehensive approach to reactive power scheduling in restructured power systems
NASA Astrophysics Data System (ADS)
Shukla, Meera
Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making them more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through the installation of properly sized and located reactive sources and their optimal scheduling. In vertically operated power systems, the reactive requirement is normally satisfied using all of the system's reactive sources. In various restructured power system scenarios, however, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposed a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This was followed by the development of an optimal reactive power dispatch formulation that facilitated fixed inter-area tie-line reactive power exchange, using a Line Flow-Based (LFB) model of power flow analysis. The formulation determined the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems, with system loadability, losses, generation, and generation cost as the performance measures. The novel approach was demonstrated on the IEEE 30-bus system.
NASA Technical Reports Server (NTRS)
Mourou, Pascal; Fade, Bernard
1992-01-01
This article describes a planning method applicable to agents with strong perception and decision-making capabilities and the ability to communicate with other agents. Each agent has a task to fulfill while allowing for the actions of other agents in its vicinity. Certain simultaneous actions may cause conflicts because they require the same resource. The agent plans each of its actions and simultaneously transmits these plans to its neighbors; likewise, it receives plans from the other agents and must take them into account. The planning method allows us to build a distributed scheduling system. Here, the agents are robot vehicles on a highway communicating by radio. In this environment, conflicts between agents concern the allocation of space over time and are connected with the inertia of the vehicles. Each vehicle performs temporal, spatial, and situated reasoning in order to drive without collision. The flexibility and reactivity of the method allow an agent to generate its plan based on assumptions concerning the other agents and then check these assumptions progressively as plans are received from them. Multi-agent execution monitoring of these plans can be done using data generated during planning and the multi-agent decision-making algorithm described here. A selective backtrack allows us to perform incremental rescheduling.
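The space-over-time conflicts described above can be illustrated with a minimal sketch: each agent's plan is a list of timed resource reservations, and two plans clash when they claim the same resource over overlapping intervals. The data layout and segment names here are illustrative, not the paper's actual representation.

```python
def conflicts(plan_a, plan_b):
    """Detect resource conflicts between two agents' plans: each action is
    (resource, start, end); two actions conflict when they claim the same
    resource over overlapping time intervals."""
    return [(a, b) for a in plan_a for b in plan_b
            if a[0] == b[0] and a[1] < b[2] and b[1] < a[2]]

# Two hypothetical vehicle plans reserving road segments over time
mine = [("lane1:km2", 0, 4), ("lane2:km2", 4, 6)]
theirs = [("lane1:km2", 3, 5), ("lane1:km3", 5, 8)]
clashes = conflicts(mine, theirs)  # one overlap on lane1:km2
```

A selective backtrack as described above would then replan only the clashing reservations rather than the whole plan.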
Multi-objective decision-making model based on CBM for an aircraft fleet
NASA Astrophysics Data System (ADS)
Luo, Bin; Lin, Lin
2018-04-01
Modern production management patterns, in which multiple units (e.g., a fleet of aircraft) are managed in a holistic manner, have brought new challenges for multi-unit maintenance decision making. To schedule a good maintenance plan, not only does each individual machine's maintenance have to be considered, but the maintenance of the other individuals must also be taken into account. Most condition-based maintenance research for aircraft has focused solely on reducing maintenance cost or maximizing the availability of a single aircraft, and few studies have addressed both objectives together: minimizing cost and maximizing fleet availability (the total number of available aircraft in the fleet). A multi-objective decision-making model based on condition-based maintenance is therefore established that addresses both objectives. Furthermore, in consideration that a decision maker may prefer the final optimal result in the form of discrete intervals instead of a set of points (non-dominated solutions) in a real decision-making problem, a novel multi-objective optimization method based on support vector regression is proposed to solve the model. Finally, a case study regarding a fleet is conducted, with the results showing that the approach efficiently generates outcomes that meet the schedule requirements.
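As a minimal illustration of the two-objective trade-off above (minimize cost, maximize fleet availability), the non-dominated filtering that underlies any such model can be sketched as follows; the maintenance-plan data are hypothetical.

```python
def dominates(a, b):
    """True if plan a dominates plan b: no worse in both objectives
    (minimize cost, maximize availability) and strictly better in one."""
    cost_a, avail_a = a
    cost_b, avail_b = b
    return (cost_a <= cost_b and avail_a >= avail_b) and \
           (cost_a < cost_b or avail_a > avail_b)

def pareto_front(options):
    """Keep only the non-dominated (cost, availability) maintenance plans."""
    return [p for p in options
            if not any(dominates(q, p) for q in options if q != p)]

# Hypothetical fleet maintenance plans: (total cost, available aircraft)
plans = [(100, 8), (120, 9), (110, 8), (150, 9), (90, 6)]
front = pareto_front(plans)  # (110, 8) and (150, 9) are dominated
```

The model in the abstract then reports intervals over such a front rather than the raw point set.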
Reservoir system expansion scheduling under conflicting interests - A Blue Nile application
NASA Astrophysics Data System (ADS)
Geressu, Robel; Harou, Julien
2017-04-01
New water resource developments are facing increasing resistance due to their real and perceived potential to negatively affect existing systems' performance. Hence, scheduling new dams in multi-reservoir systems requires considering conflicting performance objectives to minimize impacts, create consensus among wider stakeholder groups and avoid conflict. However, because of the large number of alternative expansion schedules, planning approaches often rely on simplifying assumptions such as a fixed gap between expansion stages or less flexibility in reservoir release rules than is possible. In this study, we investigate the extent to which these assumptions could limit our ability to find better performing alternatives. We apply a many-objective sequencing approach to the proposed Blue Nile hydropower reservoir system in Ethiopia to find the best investment schedules and operating rules that maximize long-term discounted net benefits, downstream releases and energy generation during reservoir filling periods. The system is optimized using 30 realizations of stochastically generated streamflow data, statistically resembling the historical flow. Results take the form of Pareto-optimal trade-offs where each point on the curve or surface represents a combination of new reservoirs, their implementation dates and operating rules. Results show a significant relationship between detail in operating rule design (i.e., changing operating rules as the multi-reservoir expansion progresses) and the system performance. For the Blue Nile, failure to optimize operating rules in sufficient detail could result in underestimation of the net worth of the proposed investments by up to 6 billion USD if a development option with low downstream impact (slow filling of the reservoirs) is to be implemented.
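The long-term discounted net benefit objective can be illustrated with a toy calculation; the cost and benefit figures below are invented, and the real study also couples each schedule to optimized operating rules and stochastic streamflow, which this sketch omits.

```python
def discounted_net_benefit(schedule, rate, horizon):
    """Discounted net benefit of a dam expansion schedule: each entry is
    (build_year, capital_cost, annual_benefit); benefits accrue every year
    after completion until the planning horizon. Purely illustrative."""
    npv = 0.0
    for year, capex, annual in schedule:
        npv -= capex / (1 + rate) ** year          # discounted capital outlay
        for y in range(year + 1, horizon):
            npv += annual / (1 + rate) ** y        # discounted benefit stream
    return npv

# Build one dam at year 0 and a second at year 5 (costs in $bn)
plan = [(0, 4.0, 0.8), (5, 3.0, 0.6)]
npv = discounted_net_benefit(plan, rate=0.05, horizon=30)
```

Sequencing approaches search over such schedules (which dams, in what order, when) while trading this objective off against filling-period releases and energy.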
Automated Long-Term Scheduling for the SOFIA Airborne Observatory
NASA Technical Reports Server (NTRS)
Civeit, Thomas
2013-01-01
The NASA Stratospheric Observatory for Infrared Astronomy (SOFIA) is a joint US/German project to develop and operate a gyro-stabilized 2.5-meter telescope in a Boeing 747SP. SOFIA's first science observations were made in December 2010. During 2011, SOFIA accomplished 30 flights in the "Early Science" program as well as a deployment to Germany. The new observing period, known as Cycle 1, is scheduled to begin in 2012. It includes 46 science flights grouped in four multi-week observing campaigns spread over a 13-month span. Automation of the flight scheduling process poses a major challenge for SOFIA mission operations: first, because it is needed to mitigate SOFIA's relatively high cost per unit observing time compared to space-borne missions; second, because the automated scheduling techniques available for ground-based and space-based telescopes are inappropriate for an airborne observatory. Although serious attempts have been made in the past to solve part of the problem, until recently mission operations staff were still manually scheduling flights. We present in this paper a new automated solution for generating SOFIA long-term schedules that will be used in operations from the Cycle 1 observing period. We describe the constraints that must be satisfied to solve the SOFIA scheduling problem in the context of real operations. We establish key formulas required to efficiently calculate the aircraft course over ground when evaluating flight schedules. We describe the foundations of the SOFIA long-term scheduler, the constraint representation, and the random-search-based algorithm that generates observation and instrument schedules. Finally, we report on how the new long-term scheduler has been used in operations to date.
NASA Astrophysics Data System (ADS)
Santos, O.
2002-01-01
The Space Station Biological Research Project (SSBRP) has developed a new plan which greatly reduces the development costs required to complete the facility. This new plan retains core capabilities while allowing for future growth. The most important piece of equipment required for quality biological research, the 2.5 meter diameter centrifuge capable of accommodating research specimen habitats at simulated gravity levels ranging from microgravity to 2.0 g, is being developed by NASDA, the Japanese space agency, for the SSBRP. This is scheduled for flight to the ISS in 2007. The project is also developing a multi-purpose incubator, an automated cell culture unit, and two microgravity habitat holding racks, currently scheduled for launch in 2005. In addition the Canadian Space Agency is developing for the project an insect habitat, which houses Drosophila melanogaster, and provides an internal centrifuge for 1 g controls. NASDA is also developing for the project a glovebox for the contained manipulation and analysis of biological specimens, scheduled for launch in 2006. This core facility will allow for experimentation on small plants (Arabidopsis species), nematode worms (C. elegans), fruit flies (Drosophila melanogaster), and a variety of microorganisms, bacteria, yeast, and mammalian cells. We propose a plan for early utilization which focuses on surveys of changes in gene expression and protein structure due to the space flight environment. In the future, the project is looking to continue development of a rodent habitat and a plant habitat that can be accommodated on the 2.5 meter centrifuge. By utilizing the early phases of the ISS to broadly answer what changes occur at the genetic and protein level of cells and organisms exposed to the ISS low earth orbit environment, we can generate interest for future experiments when the ISS capabilities allow for direct manipulation and intervention of experiments. 
The ISS continues to hold promise for high quality, long term, multi-generational biological studies with large sample sizes and appropriate controls.
A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Jolai, Fariborz; Assadipour, Ghazal
Crew scheduling is one of the important problems of the airline industry. It aims to assign crew members to a set of flights such that every flight is covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate its performance, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
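One common diversity metric for an achieved Pareto front (the abstract does not specify which three metrics are used) is Schott's spacing, sketched here on illustrative objective points: it is low when solutions are evenly spread along the front.

```python
import math

def spacing(front):
    """Schott's spacing metric: standard deviation of each point's minimum
    Manhattan distance to the rest of the front. Lower = more even spread."""
    d = []
    for i, p in enumerate(front):
        d.append(min(sum(abs(a - b) for a, b in zip(p, q))
                     for j, q in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))

# An evenly spaced front vs. a clustered one (illustrative values)
even = [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
clumped = [(0, 4), (0.1, 3.9), (0.2, 3.8), (3, 1), (4, 0)]
```

Here `spacing(even)` is 0 (perfectly uniform gaps) while `spacing(clumped)` is clearly larger, which is how such metrics separate well-distributed fronts from clustered ones.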
T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors.
Kim, Youngmin; Lee, Ki-Seong; Pham, Ngoc-Son; Lee, Sun-Ro; Lee, Chan-Gun
2016-07-08
Energy efficiency is considered a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-cores, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentation of the idle time, both of which are inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on the T-L plane abstraction.
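The DPM trade-off the abstract refers to, transitioning a core to sleep only when an idle interval is long enough to amortize the mode-transition overhead, can be sketched as follows; the break-even time and power figures are illustrative, not taken from the paper.

```python
def plan_sleep(idle_intervals, t_break_even, p_idle, p_sleep, e_transition):
    """For each idle interval (seconds), choose 'sleep' only when the interval
    exceeds the break-even time, and tally the total energy spent.
    Assumed units: powers in watts, transition overhead in joules."""
    total = 0.0
    decisions = []
    for t in idle_intervals:
        if t > t_break_even:
            decisions.append("sleep")
            total += e_transition + p_sleep * t   # pay transition, then sleep
        else:
            decisions.append("idle")
            total += p_idle * t                   # too short: stay idle
    return decisions, total

# Short fragments stay idle; only the long gap justifies a transition
decisions, energy = plan_sleep([0.5, 0.2, 3.0], t_break_even=1.0,
                               p_idle=0.4, p_sleep=0.02, e_transition=0.3)
```

Reducing idle-time fragmentation, as the proposed algorithm does, merges short intervals into long ones so that more of them clear the break-even threshold.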
2008-06-01
[Fragmentary record] Application areas listed include electrical generation capacity planning, machine scheduling, freight scheduling, and dairy farm expansion planning. On decision support systems and multi-criteria decision analysis products: ELECTRE IS is a generalization of ELECTRE I; given a set of criteria, ELECTRE IS supports the user in the process of selecting one alternative or a subset of alternatives, and the method consists of two parts.
A derived heuristics based multi-objective optimization procedure for micro-grid scheduling
NASA Astrophysics Data System (ADS)
Li, Xin; Deb, Kalyanmoy; Fang, Yanjun
2017-06-01
With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides load balance constraints and the generators' rated power, several other practicalities, such as limited availability of grid power and restricted ramping of generator output, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling problem is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given as the basis for solving linear systems' algorithms - the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, algorithms of interest are redesigned and then split into well-chosen computational tasks. The tasks' execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low, while enabling the expression of parallelism through otherwise sequential code. This simplifies the development efforts and allows the exploration of the unique strengths of the various hardware components.
DEM generation in cloudy-rainy mountainous area with multi-baseline SAR interferometry
NASA Astrophysics Data System (ADS)
Wu, Hong'an; Zhang, Yonghong; Jiang, Decai; Kang, Yonghui
2018-03-01
Conventional single-baseline InSAR is easily affected by atmospheric artifacts, making it difficult to generate high-precision DEMs. To solve this problem, in this paper, a multi-baseline interferometric phase accumulation method with weights fixed by coherence is proposed to generate a higher-accuracy DEM. The mountainous area in Kunming, Yunnan Province, China is selected as the study area, which is characterized by cloudy weather, rugged terrain and dense vegetation. The multi-baseline InSAR experiments are carried out using four ALOS-2 PALSAR-2 images. The generated DEM is evaluated against the Chinese Digital Products of Fundamental Geographic Information 1:50000 DEM. The results demonstrate that: 1) the proposed method can reduce atmospheric artifacts significantly; 2) the accuracy of the InSAR DEM generated from six interferograms satisfies the standard of 1:50000 DEM Level Three and American DTED-1.
Multi-objective generation scheduling with hybrid energy resources
NASA Astrophysics Data System (ADS)
Trivedi, Manas
In economic dispatch (ED) of electric power generation, the committed generating units are scheduled to meet the load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Generation of electricity from fossil fuels releases several contaminants into the atmosphere, so the economic dispatch objective can no longer be considered alone given the environmental concerns that arise from the emissions produced by fossil-fueled electric power plants. This research proposes the concept of environmental/economic generation scheduling with traditional and renewable energy sources. Environmental/economic dispatch (EED) is a multi-objective problem with conflicting objectives, since emission minimization conflicts with fuel cost minimization. Production and consumption of fossil fuel and nuclear energy are closely related to environmental degradation, with negative effects on human health and the quality of life. Depletion of fossil fuel resources will also make it challenging for presently employed energy systems to cope with future energy requirements. On the other hand, renewable energy sources such as hydro and wind are abundant, inexhaustible and widely available. These sources use native resources and have the capacity to meet the present and future energy demands of the world with almost no emissions of air pollutants and greenhouse gases. The costs of fossil fuel and renewable energy are also heading in opposite directions, and the economic policies needed to support widespread and sustainable markets for renewable energy sources are rapidly evolving. The contribution of this research centers on solving the economic dispatch problem of a system with hybrid energy resources under environmental restrictions.
It suggests an effective renewable energy solution for existing fossil-fueled and nuclear electric utilities for the cheaper and cleaner production of electricity with hourly emission targets. Since minimizing emissions and minimizing fuel cost are conflicting objectives, a practical approach based on multi-objective optimization is applied to obtain compromise solutions in a single simulation run using a genetic algorithm. These solutions, known as non-inferior or Pareto-optimal solutions, are graphically illustrated by trade-off curves between the criteria of fuel cost and pollutant emission. The efficacy of the proposed approach is illustrated with the help of different sample test cases. This research would be useful for society, electric utilities, consultants, regulatory bodies, policy makers and planners.
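The classical single-objective dispatch that EED extends solves the coordination equation: each unit runs where its incremental cost equals a common multiplier λ. A minimal sketch for quadratic cost curves, with network losses ignored and hypothetical unit data, looks like this:

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Coordination-equation dispatch for quadratic costs C_i = a + b*P + c*P^2:
    each unit's output satisfies dC_i/dP_i = b + 2cP = lambda, clipped to its
    limits; lambda is found by bisection until total output meets demand."""
    def output(lam):
        return [min(max((lam - b) / (2 * c), lo), hi)
                for (b, c, lo, hi) in units]
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(output(lam)) < demand:
            lo = lam
        else:
            hi = lam
    return output((lo + hi) / 2)

# Two hypothetical units: (b, c, Pmin, Pmax), incremental cost b + 2cP
units = [(2.0, 0.01, 10, 300), (3.0, 0.02, 10, 200)]
P = lambda_dispatch(units, demand=250)
```

EED replaces this single λ search with a multi-objective search (here, a genetic algorithm) that trades the fuel-cost optimum off against an analogous emission curve.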
NASA Astrophysics Data System (ADS)
Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu
2015-12-01
For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
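The crowding-distance mechanism mentioned for maintaining the external archive is the standard NSGA-II computation: for each objective, a solution is credited with the normalized gap between its neighbors, and boundary solutions get infinity so they are always retained. A minimal sketch on hypothetical (makespan, machine load) points:

```python
def crowding_distance(front):
    """NSGA-II style crowding distance over a list of objective vectors;
    boundary points get infinity so they are always kept in the archive."""
    n = len(front)
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0:
            continue
        for a in range(1, n - 1):
            dist[order[a]] += (front[order[a + 1]][k]
                               - front[order[a - 1]][k]) / span
    return dist

# Hypothetical (makespan, total machine load) points in the archive
front = [(1, 5), (2, 3), (3, 2), (5, 1)]
d = crowding_distance(front)
```

When the fixed-size archive overflows, the solution with the smallest crowding distance (the most crowded one) is the natural candidate to drop.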
Scheduling for the National Hockey League Using a Multi-objective Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Craig, Sam; While, Lyndon; Barone, Luigi
We describe a multi-objective evolutionary algorithm that derives schedules for the National Hockey League according to three objectives: minimising the teams' total travel, promoting equity in rest time between games, and minimising long streaks of home or away games. Experiments show that the system is able to derive schedules that beat the 2008-9 NHL schedule in all objectives simultaneously, and that it returns a set of schedules that offer a range of trade-offs across the objectives.
Energy Efficient Real-Time Scheduling Using DPM on Mobile Sensors with a Uniform Multi-Cores
Kim, Youngmin; Lee, Chan-Gun
2017-01-01
In wireless sensor networks (WSNs), sensor nodes are deployed for collecting and analyzing data. These nodes use limited-energy batteries for easy deployment and low cost, and battery capacity is closely tied to the lifetime of the sensor nodes. Efficient energy management is therefore important for extending node lifetime. Most effort for improving power efficiency in tiny sensor nodes has focused mainly on reducing the power consumed during data transmission. However, the recent emergence of sensor nodes equipped with multi-cores strongly requires attention to the problem of reducing power consumption in the cores themselves. In this paper, we propose an energy-efficient scheduling method for sensor nodes with uniform multi-cores. We extend T-Ler plane based scheduling, which provides globally optimal scheduling for uniform multi-cores and multi-processors, to enable power management using dynamic power management. In the proposed approach, a processor selection and task-to-processor mapping method is proposed to efficiently utilize dynamic power management. Experiments show the effectiveness of the proposed approach compared to other existing methods. PMID:29240695
CQPSO scheduling algorithm for heterogeneous multi-core DAG task model
NASA Astrophysics Data System (ADS)
Zhai, Wenzheng; Hu, Yue-Li; Ran, Feng
2017-07-01
Efficient task scheduling is critical to achieving high performance in a heterogeneous multi-core computing environment. The paper focuses on the heterogeneous multi-core directed acyclic graph (DAG) task model and proposes a novel task scheduling method based on an improved chaotic quantum-behaved particle swarm optimization (CQPSO) algorithm. A task priority scheduling list is built, and the processor with the minimum cumulative earliest finish time (EFT) is taken as the target of the first task assignment. The task precedence relationships are satisfied and the total execution time of all tasks is minimized. The experimental results show that the proposed algorithm is simple and feasible, offers strong optimization ability and fast convergence, and can be applied to task scheduling optimization in other heterogeneous and distributed environments.
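The EFT-based assignment step can be illustrated with a plain greedy list scheduler (not the CQPSO search itself): walk a precedence-respecting priority list and place each task on the processor that finishes it earliest. The task graph and cost table are hypothetical.

```python
def min_eft_schedule(tasks, deps, cost, n_proc):
    """Greedy list scheduling on heterogeneous processors: for each task in a
    topologically sorted priority list, pick the processor giving the minimum
    earliest finish time (EFT). cost[t][p] is task t's runtime on processor p."""
    finish = {}
    ready = [0.0] * n_proc          # when each processor becomes free
    assign = {}
    for t in tasks:
        est = max((finish[d] for d in deps.get(t, [])), default=0.0)
        best = min(range(n_proc),
                   key=lambda p: max(est, ready[p]) + cost[t][p])
        start = max(est, ready[best])
        finish[t] = start + cost[t][best]
        ready[best] = finish[t]
        assign[t] = best
    return assign, max(finish.values())

# Hypothetical 4-task DAG on 2 heterogeneous processors
cost = {"A": [2, 3], "B": [3, 1], "C": [2, 2], "D": [1, 1]}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
assign, makespan = min_eft_schedule(["A", "B", "C", "D"], deps, cost, 2)
```

A metaheuristic like CQPSO then searches over the priority list ordering, using such an EFT evaluation inside its fitness function.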
Segment Fixed Priority Scheduling for Self Suspending Real Time Tasks
2016-08-11
[Fragmentary record] Segment-Fixed Priority Scheduling for Self-Suspending Real-Time Tasks. Junsung Kim, Department of Electrical and Computer Engineering, Carnegie... Recoverable table-of-contents entries: 2.1 Application of a Multi-Segment Self-Suspending Real-Time Task Model; 3 Fixed Priority Scheduling; Figure 2: A multi-segment self-suspending real-time task model.
Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model
Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong
2014-01-01
Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We first introduce how to model the data correlation of different services using a Markov Random Field (MRF) model. Second, we formulate the service-oriented node scheduling issue as three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Third, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005
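The representative node selection step can be caricatured as a greedy service cover, ignoring the MRF data-correlation model the paper actually uses; node and service names here are invented.

```python
def select_representatives(node_services, required):
    """Greedily activate nodes until every required service is provided,
    preferring the node that covers the most still-missing services.
    A stand-in for the RSD step; data correlation is not modeled."""
    missing = set(required)
    active = []
    while missing:
        node = max(node_services, key=lambda n: len(node_services[n] & missing))
        gain = node_services[node] & missing
        if not gain:
            raise ValueError("required services cannot be covered")
        active.append(node)
        missing -= gain
    return active

# Hypothetical nodes with the services each can sense
nodes = {"n1": {"temp", "humidity"}, "n2": {"temp"},
         "n3": {"humidity", "light"}, "n4": {"light"}}
active = select_representatives(nodes, ["temp", "humidity", "light"])
```

Rotating which cover is active from round to round is what lets the remaining nodes sleep and the network lifetime stretch.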
Modeling Off-Nominal Recovery in NextGen Terminal-Area Operations
NASA Technical Reports Server (NTRS)
Callantine, Todd J.
2011-01-01
Robust schedule-based arrival management requires efficient recovery from off-nominal situations. This paper presents research on modeling off-nominal situations and plans for recovering from them using TRAC, a route/airspace design, fast-time simulation, and analysis tool for studying NextGen trajectory-based operations. The paper provides an overview of a schedule-based arrival-management concept and supporting controller tools, then describes TRAC implementations of methods for constructing off-nominal scenarios, generating trajectory options to meet scheduling constraints, and automatically producing recovery plans.
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper presents a discussion of the practical application of different multi-criteria evaluation methods in the scheduling process in manufacturing systems. Two main groups of methods are distinguished: those based on a distance function (using a metacriterion) and those that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described, and the overall evaluation procedure in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of a human decision maker (HDM). The specified HDM decisions relate to creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the search process, applying informal criteria, and making final changes in the schedule before implementation. Depending on needs, scheduling may be completely or partially automated: full automation is possible with a metacriterion-based objective function, whereas if a Pareto set is generated the final decision has to be made by the HDM.
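A distance-function metacriterion of the first group can be sketched as follows: normalize each minimized criterion to [0, 1] and pick the schedule closest to the ideal point under a weighted Euclidean distance. The weights and schedule scores below are illustrative.

```python
import math

def pick_schedule(scores, weights):
    """Distance-based metacriterion: normalize each (minimized) criterion,
    then choose the schedule closest to the ideal point (all zeros)
    under a weighted Euclidean distance."""
    m = len(weights)
    lo = [min(s[k] for s in scores.values()) for k in range(m)]
    hi = [max(s[k] for s in scores.values()) for k in range(m)]
    def dist(s):
        return math.sqrt(sum(weights[k] * ((s[k] - lo[k]) / (hi[k] - lo[k])) ** 2
                             for k in range(m)))
    return min(scores, key=lambda name: dist(scores[name]))

# Hypothetical schedules scored on (makespan, tardiness), both minimized
scores = {"S1": (100, 9), "S2": (120, 2), "S3": (105, 4)}
best = pick_schedule(scores, weights=(0.5, 0.5))
```

With a metacriterion like this the choice is fully automated; the Pareto-set methods instead hand the shortlist of non-dominated schedules back to the HDM.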
Research on Production Scheduling System with Bottleneck Based on Multi-agent
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Weiye, Wang; Peng, Wang; Pan, Quanke
Aimed at the imbalance of resource capacity in production scheduling systems, this paper uses a previously constructed multi-agent production scheduling system and exploits the dynamic and autonomous nature of agents to solve the bottleneck problem in scheduling dynamically. First, a Bottleneck Resource Agent finds the bottleneck resource in the production line, the inherent mechanism of the bottleneck is analyzed, and the production scheduling process based on the bottleneck resource is described. A Bottleneck Decomposition Agent harmonizes the jobs' arrival and transfer times between the Bottleneck Resource Agent and the Non-Bottleneck Resource Agents, so the dynamic scheduling problem is simplified to single-machine scheduling for each resource taking part in the schedule. Finally, the dynamic real-time scheduling problem is solved effectively in the production scheduling system.
Extended working hours: Impacts on workers
D. Mitchell; T. Gallagher
2010-01-01
Some logging business owners are trying to manage their equipment assets by increasing the scheduled machine hours. The intent is to maximize the total tons produced by a set of equipment. This practice is referred to as multi-shifting, double-shifting, or extended working hours. One area often overlooked is the impact that working non-traditional hours can have on...
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.
Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.
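A schematic of the tail-event definition (an illustrative sketch, not BPA's actual metric): an event is flagged when unfavorable load and wind forecast errors, superposed on a ramp, push the generation-load imbalance beyond the available reserve. All names and numbers below are hypothetical.

```python
# Hypothetical sketch of tail-event flagging: unfavorable forecast errors
# superposed on fast ramps produce a large generation-load imbalance.

def imbalance(load_err, wind_err, ramp):
    # Positive imbalance = generation shortfall relative to load.
    # Load under-forecast (+err) and wind over-forecast (-err) both hurt.
    return load_err - wind_err + ramp

def tail_events(load_errs, wind_errs, ramps, reserve):
    """Return indices of intervals whose imbalance exceeds the reserve."""
    return [i for i, (le, we, r) in enumerate(zip(load_errs, wind_errs, ramps))
            if abs(imbalance(le, we, r)) > reserve]

# Toy data in MW: only the third interval exceeds a 300 MW reserve.
events = tail_events([50, 10, 200], [-30, 5, -150], [20, 0, 100], reserve=300)
```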
Measurement of Cosmic-Ray TeV Electrons
NASA Astrophysics Data System (ADS)
Schubnell, Michael; Anderson, T.; Bower, C.; Coutu, S.; Gennaro, J.; Geske, M.; Mueller, D.; Musser, J.; Nutter, S.; Park, N.; Tarle, G.; Wakely, S.
2011-09-01
The Cosmic Ray Electron Synchrotron Telescope (CREST) high-altitude balloon experiment is a pathfinding effort to detect for the first time multi-TeV cosmic-ray electrons. At these energies distant sources will not contribute to the local electron spectrum due to the strong energy losses of the electrons, and thus TeV observations will reflect the distribution and abundance of nearby acceleration sites. CREST will detect electrons indirectly by measuring the characteristic synchrotron photons generated in the Earth's magnetic field. The instrument consists of an array of 1024 BaF2 crystals viewed by photomultiplier tubes and surrounded by a hermetic scintillator shield. Since the primary electron itself need not traverse the payload, an effective detection area is achieved that is several times the nominal 6.4 m² of the instrument. CREST is scheduled to fly in a long duration circumpolar orbit over Antarctica during the 2011-12 season.
Ha, Kyungyeon; Choi, Hoseop; Jung, Kinam; Han, Kyuhee; Lee, Jong-Kwon; Ahn, KwangJun; Choi, Mansoo
2014-06-06
We present an approach utilizing ion assisted aerosol lithography (IAAL) with a newly designed multi-pin spark discharge generator (SDG) for fabricating large-area three-dimensional (3D) nanoparticle-structure (NPS) arrays. The design of the multi-pin SDG allows us to uniformly construct 3D NPSs on a large area of 50 mm × 50 mm in a parallel fashion at atmospheric pressure. The ion-induced focusing capability of IAAL significantly reduces the feature size of 3D NPSs compared to that of the original pre-patterns formed on a substrate. The spatial uniformity of 3D NPSs is above 95% using the present multi-pin SDG, which is far superior to that of the previous single-pin SDG with less than 32% uniformity. The effect of size distributions of nanoparticles generated via the multi-pin SDG on the 3D NPSs also has been studied. In addition, we measured spectral reflectance for the present 3D NPSs coated with Ag, demonstrating enhanced diffuse reflectance.
Multi-core processing and scheduling performance in CMS
NASA Astrophysics Data System (ADS)
Hernández, J. M.; Evans, D.; Foulkes, S.
2012-12-01
Commodity hardware is going many-core. We might soon not be able to satisfy the job memory needs per core in the current single-core processing model in High Energy Physics. In addition, an ever increasing number of independent and incoherent jobs running on the same physical hardware not sharing resources might significantly affect processing performance. It will be essential to effectively utilize the multi-core architecture. CMS has incorporated support for multi-core processing in the event processing framework and the workload management system. Multi-core processing jobs share common data in memory, such as the code libraries, detector geometry and conditions data, resulting in a much lower memory usage than standard single-core independent jobs. Exploiting this new processing model requires a new model in computing resource allocation, departing from the standard single-core allocation for a job. The experiment job management system needs to have control over a larger quantum of resource since multi-core aware jobs require the scheduling of multiple cores simultaneously. CMS is exploring the approach of using whole nodes as the unit in the workload management system, where all cores of a node are allocated to a multi-core job. Whole-node scheduling allows for optimization of the data/workflow management (e.g. I/O caching, local merging) but efficient utilization of all scheduled cores is challenging. Dedicated whole-node queues have been set up at all Tier-1 centers for exploring multi-core processing workflows in CMS. We present the evaluation of the performance of scheduling and executing multi-core workflows in whole-node queues compared to the standard single-core processing workflows.
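The memory argument for whole-node multi-core scheduling can be sketched with a toy accounting model (the function names and figures are hypothetical, not CMS measurements): shared data such as code libraries, detector geometry and conditions is duplicated per job in the single-core model but held once per node in the multi-core model.

```python
# Hypothetical memory accounting for N cores on one node.

def single_core_memory(n_jobs, shared_mb, private_mb):
    # Each independent single-core job duplicates the shared data
    # (code libraries, geometry, conditions) in its own address space.
    return n_jobs * (shared_mb + private_mb)

def multi_core_memory(n_cores, shared_mb, private_mb):
    # One multi-core job keeps a single copy of the shared data and
    # only per-core event-processing state is private.
    return shared_mb + n_cores * private_mb
```

With illustrative numbers (1.5 GB shared, 0.5 GB private per core, 8 cores), the single-core model needs 16 GB while the multi-core model needs 5.5 GB, which is the qualitative saving the abstract describes.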
Evaluation of new spectral bands for multi-spectral imaging: SMIRR aircraft test results
Goetz, Alexander F.H.; Rowan, Lawrence C.; Barringer, Anthony R.
1980-01-01
A 10-channel radiometer called the Shuttle Multispectral Infrared Radiometer (SMIRR) is scheduled to take data from orbit on the second Shuttle orbital flight test. As part of the instrument test sequence, a series of aircraft flights was carried out over 10 test areas in Utah and Nevada. Apart from vegetation, the materials exposed at the surface were volcanic sequences ranging from tuffs to basalts, areas of hydrothermally altered volcanic rocks, sedimentary sequences of sandstone and carbonate rocks, and alluvial cover.
Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ela, Erik; O'Malley, Mark
2016-05-01
Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, which is a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.
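As a loose illustration of ramp capability constraints in multi-period scheduling (not the paper's market-clearing model), the sketch below dispatches units in merit order while capping each unit's output by its ramp capability from the previous period. The function and unit parameters are hypothetical, and ramp-down limits are ignored for brevity.

```python
def dispatch(demand, units):
    """Greedy merit-order dispatch per period, honoring ramp-up limits.

    demand: list of per-period demands (MW).
    units:  list of dicts with "name", "cost", "pmax", "ramp" (MW/period).
    Ramp-down constraints are omitted to keep the sketch short.
    """
    units = sorted(units, key=lambda u: u["cost"])  # cheapest first
    prev = {u["name"]: 0.0 for u in units}          # output in prior period
    schedule = []
    for d in demand:
        remaining, out = d, {}
        for u in units:
            # Output is limited by capacity AND by ramping from prev period.
            hi = min(u["pmax"], prev[u["name"]] + u["ramp"])
            g = min(remaining, hi)
            out[u["name"]] = g
            remaining -= g
        prev = out
        schedule.append(out)
    return schedule

# Toy system: cheap unit A is ramp-limited, so expensive B must cover ramps.
sched = dispatch([60, 120], [
    {"name": "A", "cost": 10, "pmax": 100, "ramp": 50},
    {"name": "B", "cost": 30, "pmax": 100, "ramp": 100},
])
```

Here the ramp limit forces 10 MW onto the expensive unit in the first period even though the cheap unit has spare capacity, which is the kind of effect ramp-aware pricing must compensate.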
Matrix Algebra for GPU and Multicore Architectures (MAGMA) for Large Petascale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack J.; Tomov, Stanimire
2014-03-24
The goal of the MAGMA project is to create a new generation of linear algebra libraries that achieve the fastest possible time to an accurate solution on hybrid Multicore+GPU-based systems, using all the processing power that future high-end systems can make available within given energy constraints. Our efforts at the University of Tennessee achieved the goals set in all of the five areas identified in the proposal: 1. Communication optimal algorithms; 2. Autotuning for GPU and hybrid processors; 3. Scheduling and memory management techniques for heterogeneity and scale; 4. Fault tolerance and robustness for large scale systems; 5. Building energy efficiency into software foundations. The University of Tennessee’s main contributions, as proposed, were the research and software development of new algorithms for hybrid multi/many-core CPUs and GPUs, as related to two-sided factorizations and complete eigenproblem solvers, hybrid BLAS, and energy efficiency for dense, as well as sparse, operations. Furthermore, as proposed, we investigated and experimented with various techniques targeting the five main areas outlined.
Systematic Approach to Better Understanding Integration Costs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Gregory B.
2015-09-28
When someone mentions integration costs, thoughts of the costs of integrating renewable generation into an existing system come to mind. We think about how variability and uncertainty can increase power system cycling costs as increasing amounts of wind or solar generation are incorporated into the generation mix. However, seldom do we think about what happens to system costs when new baseload generation is added to an existing system or when generation self-schedules. What happens when a highly flexible combined-cycle plant is added? Do system costs go up, or do they go down? Are other, non-cycling, maintenance costs impacted? In this paper we investigate six technologies and operating practices--including VG, baseload generation, generation mix, gas prices, self-scheduling, and fast-start generation--and how changes in these areas can impact a system's operating costs. This paper provides a working definition of integration costs and four components of variable costs. It describes the study approach and how a production cost modeling-based method was used to determine the cost effects, and, as a part of the study approach section, it describes the test system and data used for the comparisons. Finally, it presents the research findings, and, in closing, suggests three areas for future work.
Modeling of a production system using the multi-agent approach
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Sękala, A.; Banaś, W.
2017-08-01
Multi-agent simulation (agent-based modeling and simulation, ABMS) is a method that allows for the analysis of complex systems consisting of independent agents. In a model of a production system, the manufactured pieces may be agents, distinct from other agent types such as machine tools, conveyors or reorientation stands; magazines and buffers are agents as well. More generally, the agents in a model can be single individuals, but collective entities can also be defined as agents, and hierarchical structures are allowed, meaning that a single agent can belong to a certain class. Depending on the needs of the model, an agent may also represent a natural or physical resource. From a technical point of view, an agent is a bundle of data and rules describing its behavior in different situations. Agents can be autonomous or non-autonomous in deciding the types and sizes of agent classes and the types of connections between elements of the system. Multi-agent modeling is thus a very flexible modeling technique that can be adapted to research problems analyzed from different points of view. One of the major problems associated with the organization of production is the spatial organization of the production process; a second is optimal scheduling. For these purposes a multi-purpose approach can be used, in which the model of the production process covers both the design and the scheduling of the production space for four different element types. The program system was developed in the NetLogo environment and uses elements of artificial intelligence. The main agent represents the manufactured pieces that, according to previously assumed rules, generate the technological route and allow the schedule of that line to be printed.
Machine lines, reorientation stands, conveyors and transport devices represent the other types of agents utilized in the described simulation. The article presents the idea of an integrated program approach and shows the resulting production layout as a virtual model. This model was developed in the NetLogo multi-agent program environment.
Yu, Yang; Wang, Sihan; Tang, Jiafu; Kaku, Ikou; Sun, Wei
2016-01-01
Productivity can be greatly improved by converting the traditional assembly line to a seru system, especially in a business environment with short product life cycles, uncertain product types and fluctuating production volumes. Line-seru conversion includes two decision processes, i.e., seru formation and seru load. For simplicity, however, previous studies focus on the seru formation with a given scheduling rule in seru load. We select ten scheduling rules usually used in seru load to investigate the influence of different scheduling rules on the performance of line-seru conversion. Moreover, we clarify the complexities of line-seru conversion for ten different scheduling rules from the theoretical perspective. In addition, multi-objective decisions are often used in line-seru conversion. To obtain Pareto-optimal solutions of multi-objective line-seru conversion, we develop two improved exact algorithms based on reducing time complexity and space complexity respectively. Compared with the enumeration based on non-dominated sorting to solve the multi-objective problem, the two improved exact algorithms save computation time greatly. Several numerical simulation experiments are performed to show the performance improvement brought by the two proposed exact algorithms.
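The Pareto-optimal solutions mentioned above are, in general, obtained by non-dominated filtering. A minimal sketch for objectives to be minimized (a generic filter, not the paper's improved exact algorithms; names and toy points are illustrative):

```python
def pareto_front(solutions):
    """Return the non-dominated subset, minimizing every objective.

    solutions: list of objective tuples, e.g. (makespan, labor_cost).
    """
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and better somewhere.
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))

    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Toy bi-objective points: (3, 4) and (5, 5) are dominated by (2, 3).
front = pareto_front([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

This brute-force filter is O(n²) in the number of candidate solutions, which is exactly the cost that specialized exact algorithms aim to reduce.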
Emergency response nurse scheduling with medical support robot by multi-agent and fuzzy technique.
Kono, Shinya; Kitamura, Akira
2015-08-01
In this paper, a new co-operative re-scheduling method is described for medical support tasks whose time of occurrence cannot be predicted, assuming a robot can co-operate with the nurse in medical activities. Here, a Multi-Agent System (MAS) is used for the co-operative re-scheduling, in which a Fuzzy Contract Net (FCN) is applied to the robot's task assignment for emergency tasks. The simulation results confirm that the re-scheduling produced by the proposed method can maintain patients' satisfaction and decrease the workload of the nurse.
The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings
NASA Astrophysics Data System (ADS)
Kwak, Jun-young
Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility, which is a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP (Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty both at planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over users' preferences. Third, when multiple users contribute to energy savings, fair division of credit for such savings to incentivize users for their energy saving activities arises as an important question. I appeal to cooperative game theory and specifically to the concept of Shapley value for this fair division. Unfortunately, scaling up this Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have actually been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC.
SAVES focuses on the day-to-day energy consumption of individuals and groups in commercial buildings by reactively suggesting energy conserving alternatives. TESLA takes a long-range planning perspective and optimizes overall energy consumption of a large number of group events or meetings together. THINC provides an end-to-end integration within a single agent of energy efficient scheduling, rescheduling and credit allocation. While SAVES, TESLA and THINC thus differ in their scope and applicability, they demonstrate the utility of agent-based systems in actually reducing energy consumption in commercial buildings. I evaluate my algorithms and agents using extensive analysis on data from over 110,000 real meetings/events at multiple educational buildings including the main libraries at the University of Southern California. I also provide results on simulations and real-world experiments, clearly demonstrating the power of agent technology to assist human users in saving energy in commercial buildings.
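The sampling-based Shapley approximation mentioned above can be sketched as averaging marginal contributions over random permutations (a generic Monte Carlo estimator, not the thesis's specific algorithms; the function name and toy game are illustrative):

```python
import random

def shapley_estimate(players, value, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: average each player's marginal
    contribution over random orderings of the players.

    value: characteristic function mapping a set of players to a payoff.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = players[:]
        rng.shuffle(perm)
        coalition, prev = set(), 0.0
        for p in perm:
            coalition.add(p)
            v = value(coalition)
            phi[p] += v - prev   # p's marginal contribution in this order
            prev = v
    return {p: phi[p] / n_samples for p in phi}

# Toy additive game: each user's saving is independent, so the Shapley
# value recovers each user's own contribution exactly.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
est = shapley_estimate(list(weights),
                       lambda s: sum(weights[p] for p in s),
                       n_samples=200)
```

For additive games the estimator is exact for any sample count; for general characteristic functions the error shrinks as the number of sampled permutations grows.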
Scheduling Software for Complex Scenarios
NASA Technical Reports Server (NTRS)
2006-01-01
Preparing a vehicle and its payload for a single launch is a complex process that involves thousands of operations. Because the equipment and facilities required to carry out these operations are extremely expensive and limited in number, optimal assignment and efficient use are critically important. Overlapping missions that compete for the same resources, ground rules, safety requirements, and the unique needs of processing vehicles and payloads destined for space impose numerous constraints that, when combined, require advanced scheduling. Traditional scheduling systems use simple algorithms and criteria when selecting activities and assigning resources and times to each activity. Schedules generated by these simple decision rules are, however, frequently far from optimal. To resolve mission-critical scheduling issues and predict possible problem areas, NASA historically relied upon expert human schedulers who used their judgment and experience to determine where things should happen, whether they will happen on time, and whether the requested resources are truly necessary.
NASA Technical Reports Server (NTRS)
Callantine, Todd J.; Cabrall, Christopher; Kupfer, Michael; Omar, Faisal G.; Prevot, Thomas
2012-01-01
NASA's Air Traffic Management Demonstration-1 (ATD-1) is a multi-year effort to demonstrate high-throughput, fuel-efficient arrivals at a major U.S. airport using NASA-developed scheduling automation, controller decision-support tools, and ADS-B-enabled Flight-Deck Interval Management (FIM) avionics. First-year accomplishments include the development of a concept of operations for managing scheduled arrivals flying Optimized Profile Descents with equipped aircraft conducting FIM operations, and the integration of laboratory prototypes of the core ATD-1 technologies. Following each integration phase, a human-in-the-loop simulation was conducted to evaluate and refine controller tools, procedures, and clearance phraseology. From a ground-side perspective, the results indicate the concept is viable and the operations are safe and acceptable. Additional training is required for smooth operations that yield notable benefits, particularly in the areas of FIM operations and clearance phraseology.
The development and validation of command schedules for SeaWiFS
NASA Astrophysics Data System (ADS)
Woodward, Robert H.; Gregg, Watson W.; Patt, Frederick S.
1994-11-01
An automated method for developing and assessing spacecraft and instrument command schedules is presented for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) project. SeaWiFS is to be carried on the polar-orbiting SeaStar satellite in 1995. The primary goal of the SeaWiFS mission is to provide global ocean chlorophyll concentrations every four days by employing onboard recorders and a twice-a-day data downlink schedule. Global Area Coverage (GAC) data with about 4.5 km resolution will be used to produce the global coverage. Higher resolution (1.1 km resolution) Local Area Coverage (LAC) data will also be recorded to calibrate the sensor. In addition, LAC will be continuously transmitted from the satellite and received by High Resolution Picture Transmission (HRPT) stations. The methods used to generate commands for SeaWiFS employ numerous hierarchical checks as a means of maximizing coverage of the Earth's surface and fulfilling the LAC data requirements. The software code is modularized and written in Fortran with constructs to mirror the pre-defined mission rules. The overall method is specifically developed for low orbit Earth-observing satellites with finite onboard recording capabilities and regularly scheduled data downlinks. Two software packages using the Interactive Data Language (IDL) for graphically displaying and verifying the resultant command decisions are presented. Displays can be generated which show portions of the Earth viewed by the sensor and spacecraft sub-orbital locations during onboard calibration activities. An IDL-based interactive method of selecting and testing LAC targets and calibration activities for command generation is also discussed.
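One of the hierarchical checks described, keeping the finite onboard recorder from overflowing between scheduled downlinks, can be sketched as a running-level feasibility test. The function name and numbers below are illustrative stand-ins, not actual SeaWiFS mission parameters.

```python
def recorder_feasible(deltas, capacity):
    """Check that an onboard recorder never overflows.

    deltas: signed volume changes in chronological order
            (GAC/LAC recording > 0, downlink < 0), in MB.
    capacity: recorder capacity in MB.
    """
    level = 0.0
    for d in deltas:
        # A downlink cannot dump more data than is currently stored.
        level = max(0.0, level + d)
        if level > capacity:
            return False
    return True
```

A command scheduler can run such a check after each tentative recording or downlink command, rejecting LAC targets that would push the recorder past capacity before the next downlink.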
Multi-Temporal Decomposed Wind and Load Power Models for Electric Energy Systems
NASA Astrophysics Data System (ADS)
Abdel-Karim, Noha
This thesis is motivated by the recognition that sources of uncertainties in electric power systems are multifold and may have potentially far-reaching effects. In the past, only system load forecast was considered to be the main challenge. More recently, however, the uncertain price of electricity and hard-to-predict power produced by renewable resources, such as wind and solar, are making the operating and planning environment much more challenging. The near-real-time power imbalances are compensated by means of frequency regulation and generally require fast-responding costly resources. Because of this, a more accurate forecast and look-ahead scheduling would result in a reduced need for expensive power balancing. Similarly, long-term planning and seasonal maintenance need to take into account long-term demand forecast as well as how the short-term generation scheduling is done. The better the demand forecast, the more efficient planning will be as well. Moreover, computer algorithms for scheduling and planning are essential in helping the system operators decide what to schedule and planners what to build. This is needed given the overall complexity created by different abilities to adjust the power output of generation technologies, demand uncertainties and by the network delivery constraints. Given the growing presence of major uncertainties, it is likely that the main control applications will use more probabilistic approaches. Today's predominantly deterministic methods will be replaced by methods which account for key uncertainties as decisions are made. It is well-understood that although demand and wind power cannot be predicted at very high accuracy, taking into consideration predictions and scheduling in a look-ahead way over several time horizons generally results in more efficient and reliable utilization, than when decisions are made assuming deterministic, often worst-case scenarios. 
This change in approach will ultimately require new electricity market rules capable of providing the right incentives to manage uncertainties and of differentiating various technologies according to the rate at which they can respond to ever changing conditions. Given the overall need for modeling uncertainties in electric energy systems, we consider in this thesis the problem of multi-temporal modeling of wind and demand power, in particular. Historic data is used to derive prediction models for several future time horizons. The short-term prediction models derived can be used for look-ahead economic dispatch and unit commitment, while the long-term annual predictive models can be used for investment planning. As expected, the accuracy of such predictive models depends on the time horizons over which the predictions are made, as well as on the nature of the uncertain signals. It is shown that predictive models obtained using the same general modeling approaches result in different accuracy for wind than for demand power. In what follows, we introduce several models which have qualitatively different patterns, ranging from hourly to annual. We first transform historic time-stamped data into the Fourier Transform (FT) representation. The frequency domain data representation is used to decompose the wind and load power signals and to derive predictive models relevant for short-term and long-term predictions using extracted spectral techniques. The short-term results are interpreted next as a Linear Prediction Coding Model (LPC) and its accuracy is analyzed. Next, a new Markov-Based Sensitivity Model (MBSM) for short term prediction is proposed, and the dispatched costs of uncertainties for different predictive models are developed and compared. Moreover, the Discrete Markov Process (DMP) representation is applied to help assess probabilities of the most likely short-, medium- and long-term states and the related multi-temporal risks.
In addition, this thesis discusses operational impacts of wind power integration in different scenario levels by performing more than 9,000 AC Optimal Power Flow runs. The effects of both wind and load variations on system constraints and costs are presented. The limitations of DC Optimal Power Flow (DCOPF) vs. ACOPF are emphasized by means of system convergence problems due to the effect of wind power on changing line flows and net power injections. By studying the effect of having wind power on line flows, we found that the divergence problem applies in areas with high wind and hydro generation capacity share (cheap generations). (Abstract shortened by UMI.).
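A minimal sketch of the Discrete Markov Process ingredient used above: estimating a transition matrix from a historical state sequence by counting transitions and normalizing each row. The function name and the toy two-state sequence are illustrative, not from the thesis.

```python
def transition_matrix(states, n_states):
    """Estimate a discrete Markov transition matrix from a state sequence.

    states: list of integer state labels in chronological order
            (e.g. discretized wind-power levels).
    """
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1          # count observed a -> b transitions
    matrix = []
    for row in counts:
        total = sum(row)
        # Normalize each row to probabilities; rows never visited stay zero.
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Toy sequence of low (0) / high (1) wind states.
P = transition_matrix([0, 0, 1, 0, 1, 1], 2)
```

From such a matrix one can read off the probability of the next state given the current one, and powers of the matrix give the most likely states over longer horizons, which is the multi-temporal risk assessment the abstract refers to.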
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calyam, Prasad
2014-09-15
The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what would be the observed radio-maps if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithms on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with complex memory hierarchy that includes shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium scale multi-socket, multi-core systems (up to 48 cores) while regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all used multi-socket, multi-core systems.
Planning and Scheduling of Payloads of AstroSat During Initial and Normal Phase Observations
NASA Astrophysics Data System (ADS)
Pandiyan, R.; Subbarao, S. V.; Nagamani, T.; Rao, Chaitra; Rao, N. Hari Prasad; Joglekar, Harish; Kumar, Naresh; Dumpa, Surya Ratna Prakash; Chauhan, Anshu; Dakshayani, B. P.
2017-06-01
On 28th September 2015, India successfully launched its first astronomical space observatory, AstroSat. AstroSat carries five astronomy payloads, namely, (i) the Cadmium Zinc Telluride Imager (CZTI), (ii) the Large Area X-ray Proportional Counter (LAXPC), (iii) the Soft X-ray Telescope (SXT), (iv) the Ultra Violet Imaging Telescope (UVIT) and (v) the Scanning Sky Monitor (SSM), and can therefore observe celestial objects at multiple wavelengths. Four of the payloads are co-aligned along the positive roll axis of the spacecraft and the remaining one is placed along the positive yaw axis. All the payloads are sensitive to bright objects and specifically require the bright Sun to be kept outside a safe zone around their bore axes in orbit. Further, there are other operational constraints, from both the spacecraft side and the payload side, which must be strictly enforced during operations. Even on-orbit spacecraft manoeuvres are constrained to about two of the axes in order to keep the bright Sun outside this safe zone, and a special constrained manoeuvre is exercised during manoeuvres. Planning and scheduling of the payloads during the Performance Verification (PV) phase was carried out in semi-autonomous/manual mode, and complete automation is exercised for normal-phase/Guaranteed Time Observation (GuTO) operations. The PV-phase process was found to be labour intensive, so several operational software tools, encompassing spacecraft sub-systems and on-orbit, domain and environmental constraints, were built and interfaced with the scheduling tool for appropriate decision-making and science scheduling. The procedural details of the complex scheduling of a multi-wavelength astronomy space observatory, and its operation in the PV and normal/GuTO phases, are presented in this paper.
A manpower scheduling heuristic for aircraft maintenance application
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Sze, Jeeu-Fong; Chiew, Kang-Leng
2012-09-01
This research studies manpower scheduling for aircraft maintenance, focusing on the in-flight food loading operation. A group of loading teams with flexible shifts is required to deliver and load packaged meals from the ground kitchen to aircraft in multiple trips. All aircraft must be served within predefined time windows. The scheduling process takes into account various constraints such as meal break allocation, multi-trip travelling and the food exposure time limit. Considering aircraft movements and the predefined maximum working hours for each loading team, the main objective of this study is to form an efficient roster by assigning a minimum number of loading teams to the aircraft. We propose an insertion-based heuristic to generate solutions in a short period of time for large instances. The algorithm is implemented in several stages for constructing trips, owing to the presence of numerous constraints. The robustness and efficiency of the algorithm are demonstrated in the computational results, which show that the insertion heuristic outperforms the company's current practice.
Experiments with a Parallel Multi-Objective Evolutionary Algorithm for Scheduling
NASA Technical Reports Server (NTRS)
Brown, Matthew; Johnston, Mark D.
2013-01-01
Evolutionary multi-objective algorithms have great potential for scheduling in those situations where tradeoffs among competing objectives represent a key requirement. One challenge, however, is runtime performance, as a consequence of evolving not just a single schedule, but an entire population, while attempting to sample the Pareto frontier as accurately and uniformly as possible. The growing availability of multi-core processors in end user workstations, and even laptops, has raised the question of the extent to which such hardware can be used to speed up evolutionary algorithms. In this paper we report on early experiments in parallelizing a Generalized Differential Evolution (GDE) algorithm for scheduling long-range activities on NASA's Deep Space Network. Initial results show that significant speedups can be achieved, but that performance does not necessarily improve as more cores are utilized. We describe our preliminary results and some initial suggestions from parallelizing the GDE algorithm. Directions for future work are outlined.
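GDE is a variant of differential evolution, and the parallelization opportunity discussed above is precisely the population evaluation step. A minimal sketch (a standard DE/rand/1/bin loop on a toy sphere objective, assumed for illustration — not the DSN scheduler): the `evaluate` hook accepts any map-like callable, so a thread or process pool's `map` can score the population concurrently.

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=200, F=0.5, CR=0.9, evaluate=map):
    """DE/rand/1/bin; `evaluate` is a map-like hook so that scoring the
    population can be handed to a parallel pool's map() unchanged."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = list(evaluate(f, pop))
    for _ in range(gens):
        trials = []
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for d in range(dim):
                if random.random() < CR or d == jrand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])  # clip to bounds
                else:
                    v = pop[i][d]
                trial.append(v)
            trials.append(trial)
        tfit = list(evaluate(f, trials))   # the embarrassingly parallel step
        for i in range(pop_size):
            if tfit[i] <= fit[i]:          # greedy one-to-one replacement
                pop[i], fit[i] = trials[i], tfit[i]
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

random.seed(7)
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = de_minimize(sphere, [(-5.0, 5.0)] * 3)
```

Swapping `evaluate=map` for `ThreadPoolExecutor(...).map` parallelizes only the fitness calls, which mirrors the observation in the abstract that speedup need not grow with core count: the serial mutation/selection loop remains a fixed cost.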
The terminal area automated path generation problem
NASA Technical Reports Server (NTRS)
Hsin, C.-C.
1977-01-01
The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, inputs, outputs and the interrelationships with other ATC functions are discussed. Alternatives in modeling the problem are identified, and problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. The effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc., are discussed, and recommendations are made on real-world implementations.
Scheduling: A guide for program managers
NASA Technical Reports Server (NTRS)
1994-01-01
The following topics are discussed concerning scheduling: (1) milestone scheduling; (2) network scheduling; (3) program evaluation and review technique; (4) critical path method; (5) developing a network; (6) converting an ugly duckling to a swan; (7) network scheduling problem; (8) network scheduling when resources are limited; (9) multi-program considerations; (10) influence on program performance; (11) line-of-balance technique; (12) time management; (13) recapitulation; and (14) analysis.
On the Feasibility of Intense Radial Velocity Surveys for Earth-twin Discoveries
NASA Astrophysics Data System (ADS)
Hall, Richard D.; Thompson, Samantha J.; Handley, Will; Queloz, Didier
2018-06-01
This work assesses the potential capability of the next generation of high-precision Radial Velocity (RV) instruments for Earth-twin exoplanet detection. From the perspective of the importance of data sampling, the Terra Hunting Experiment aims to do this through an intense series of nightly RV observations over a long baseline on a carefully selected target list, via the brand-new instrument HARPS3. This paper describes an end-to-end simulation of generating and processing such data to help us better understand the impact of uncharacterised stellar noise on the recovery of Earth-mass planets with orbital periods of the order of many months. We consider full Keplerian systems, realistic simulated stellar noise, instrument white noise, and location-specific weather patterns for our observation schedules. We use Bayesian statistics to assess various planetary models fitted to the synthetic data, and compare the successful planet recovery of the Terra Hunting Experiment schedule with a typical reference survey. We find that the Terra Hunting Experiment can detect Earth-twins in the habitable zones of solar-type stars, in single and multi-planet systems, and in the presence of stellar signals. It also outperforms a typical reference survey on the accuracy of the recovered parameters, and performs comparably to an uninterrupted space-based schedule.
Advanced teleprocessing systems
NASA Astrophysics Data System (ADS)
Kleinrock, L.; Gerla, M.
1982-09-01
This Annual Technical Report covers research for the period from October 1, 1981 to September 30, 1982. This contract has three primary designated research areas: packet radio systems, resource sharing and allocation, and distributed processing and control. This report contains abstracts of publications which summarize research results in these areas, followed by the main body of the report, which is devoted to a study of the channel access protocols executed by the nodes of a network to schedule their transmissions on a multi-access broadcast channel. In particular, the main body consists of a Ph.D. dissertation, Channel Access Protocols for Multi-Hop Broadcast Packet Radio Networks. This work discusses some new channel access protocols useful for mobile radio networks. Included is an analysis of slotted ALOHA and some tight bounds on the performance of all possible protocols in a mobile environment.
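The slotted ALOHA analysis mentioned above rests on a classical result: with an offered load of G transmission attempts per slot, throughput is S = G·e^{-G}, peaking at 1/e ≈ 0.368 when G = 1. A small Monte Carlo sketch (illustrative, not from the report) reproduces it:

```python
import math
import random

def slotted_aloha_throughput(g, slots=200000, seed=0):
    """Simulate slotted ALOHA: each slot sees a Poisson(g) number of
    transmission attempts; a slot succeeds only when exactly one node
    transmits (two or more collide, zero is idle)."""
    rng = random.Random(seed)
    success = 0
    for _ in range(slots):
        # Knuth's method: sample the number of attempts from Poisson(g)
        attempts, p, threshold = 0, 1.0, math.exp(-g)
        while p > threshold:
            p *= rng.random()
            attempts += 1
        attempts -= 1
        if attempts == 1:
            success += 1
    return success / slots

sim = slotted_aloha_throughput(1.0)
theory = 1.0 * math.exp(-1.0)   # S = G * exp(-G), maximized at G = 1
```

Running the same function over a range of `g` values traces out the familiar throughput curve, with collisions wasting capacity on either side of the G = 1 peak.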
A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System
NASA Astrophysics Data System (ADS)
Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji
The capabilities and complexity of manufacturing systems are increasing as they strive toward an integrated manufacturing environment. Availability of alternative process plans is a key factor for integration of design, process planning and scheduling. This paper describes an algorithm for generating alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search starts, so it is applicable to very large process plan networks and can also search wide areas of the network based on user requirements. The algorithm can generate alternative process plans and select a suitable one based on the objective functions.
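A minimal sketch of an incomplete search in the spirit described above: a beam search that keeps only the best few partial plans per step, so the full process plan network never has to be materialized. The operations, costs, and goal test below are illustrative assumptions, not the paper's class diagram.

```python
import heapq

def beam_search_plans(start, expand, cost, is_goal, beam_width=3, max_steps=100):
    """Incomplete (beam) search over a process plan network: at each step
    only the `beam_width` cheapest partial plans survive, so the network
    is generated lazily as the search proceeds."""
    frontier = [(cost(start), start)]
    for _ in range(max_steps):
        next_frontier = []
        for c, plan in frontier:
            if is_goal(plan):
                return plan, c
            for succ in expand(plan):
                heapq.heappush(next_frontier, (cost(succ), succ))
        if not next_frontier:
            break
        frontier = [heapq.heappop(next_frontier)
                    for _ in range(min(beam_width, len(next_frontier)))]
    return None, float("inf")

# Toy example: pick 2 distinct machining operations at minimum total cost.
OPS = {"mill": 2, "drill": 1, "tap": 3}   # hypothetical operation costs
plan, total = beam_search_plans(
    start=(),
    expand=lambda p: [p + (op,) for op in OPS if op not in p],
    cost=lambda p: sum(OPS[op] for op in p),
    is_goal=lambda p: len(p) == 2,
)
```

Because only `beam_width` nodes are kept per level, memory stays bounded at the price of completeness, which matches the paper's trade-off for enormous plan networks.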
Taming Wild Horses: The Need for Virtual Time-based Scheduling of VMs in Network Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
The next generation of scalable network simulators employ virtual machines (VMs) to act as high-fidelity models of traffic producer/consumer nodes in simulated networks. However, network simulations can be inaccurate if VMs are not scheduled according to virtual time, especially when many VMs are hosted per simulator core in a multi-core simulator environment. Since VMs are by default free-running, at the outset it is not clear whether, and to what extent, their untamed execution affects the results in simulated scenarios. Here, we provide the first quantitative basis for establishing the need for generalized virtual-time scheduling of VMs in network simulators, based on actual prototype implementations. To exercise breadth, our system is tested with multiple disparate applications: (a) a set of message passing parallel programs, (b) a computer worm propagation phenomenon, and (c) a mobile ad-hoc wireless network simulation. We define and use error metrics and benchmarks in scaled tests to empirically report the poor match of traditional, fairness-based VM scheduling to VM-based network simulation, and also clearly show the better performance of our simulation-specific scheduler, with up to 64 VMs hosted on a 12-core simulator node.
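The core idea can be illustrated with a toy least-virtual-time-first scheduler (a sketch under assumed per-VM speed factors, not the paper's prototype): always run the VM whose virtual clock is furthest behind and the clocks stay aligned, whereas a fairness-based round-robin lets a fast VM's simulated clock race ahead of a slow one's.

```python
import heapq

def run(speeds, quanta, policy):
    """Advance VMs one host quantum at a time; each quantum moves VM i's
    virtual clock forward by speeds[i]. 'lvt' always picks the VM whose
    virtual clock is furthest behind; 'rr' is fairness-based round-robin
    (a stand-in for free-running execution)."""
    clocks = [0.0] * len(speeds)
    heap = [(0.0, i) for i in range(len(speeds))]
    heapq.heapify(heap)
    for step in range(quanta):
        if policy == "lvt":
            _, i = heapq.heappop(heap)
        else:
            i = step % len(speeds)
        clocks[i] += speeds[i]
        if policy == "lvt":
            heapq.heappush(heap, (clocks[i], i))
    return max(clocks) - min(clocks)   # skew between VM virtual clocks

skew_lvt = run([1.0, 0.25], 1000, "lvt")   # virtual-time scheduled
skew_rr = run([1.0, 0.25], 1000, "rr")     # fair-share / free-running
```

Under least-virtual-time-first the skew stays bounded by a single quantum's progress, while under fair-share it grows linearly with run length — the "untamed execution" error the paper quantifies.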
Crest: A Balloon-borne Instrument to Measure Cosmic-ray Electrons above TeV Energies
NASA Astrophysics Data System (ADS)
Nutter, S.; Anderson, T.; Coutu, S.; Geske, M.; Bower, C.; Musser, J.; Muller, D.; Park, N.; Wakely, S.; Schubnell, M.; Tarle, G.; Yagi, A.
2009-05-01
The flux of high-energy (>1 TeV) electrons provides information about the spatial distribution and abundance of nearby cosmic-ray sources. CREST, a balloon-borne array of 1024 BaF2 crystals viewed by PMTs, will measure the spectrum of multi-TeV electrons through detection of the X-ray synchrotron photons generated as the electrons traverse the Earth's magnetic field. This method naturally discriminates against the proton and gamma-ray backgrounds, and achieves very large detector apertures, since the instrument need only intersect a portion of the kilometers-long line of photons and not the electron itself. Thus CREST's acceptance is several times its geometric area up to energies of 50 TeV, ~10 times higher in energy than ground-based techniques can reach. This measurement will overlap the recent HESS results and extend to higher energies. CREST is scheduled to fly in a long-duration circumpolar flight over Antarctica in 2010. An overview of the detector design and status is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yingchen; Tan, Jin; Krad, Ibrahim
Power system frequency needs to be maintained close to its nominal value at all times to balance load and generation and maintain system reliability. Adequate primary frequency response and secondary frequency response are the primary forces that correct an energy imbalance on the second-to-minute level. As wind energy becomes a larger portion of the world's energy portfolio, there is an increased need for wind to provide frequency response. This paper addresses one of the major concerns about using wind for frequency regulation: the unknown interaction between primary and secondary reserves. The lack of a commercially available tool to model this has limited the energy industry's understanding of when the depletion of primary reserves will impact the performance of secondary response, or vice versa. This paper investigates the issue by developing a multi-area frequency response integration tool with combined primary and secondary capabilities. The simulation is conducted in close coordination with economical energy scheduling scenarios to ensure credible simulation results.
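The primary/secondary split can be seen in a toy single-area model (an illustrative sketch, not the paper's multi-area tool): governor droop supplies fast, proportional response to a load step, while a slow AGC integral term restores frequency to nominal. The inertia, droop, and gain values below are assumed per-unit placeholders.

```python
def simulate_frequency(delta_p_load, h=4.0, d=1.0, droop_r=0.05,
                       agc_ki=0.3, dt=0.1, steps=3000):
    """Forward-Euler swing-equation toy: 2H * d(df)/dt =
    P_primary + P_agc - dP_load - D*df. All quantities in per unit."""
    df, agc = 0.0, 0.0                 # frequency deviation, AGC output
    for _ in range(steps):
        p_primary = -df / droop_r      # primary: governor droop (proportional)
        agc += -agc_ki * df * dt       # secondary: AGC integral action
        d_df = (p_primary + agc - delta_p_load - d * df) / (2.0 * h)
        df += d_df * dt
    return df

primary_only = simulate_frequency(0.1, agc_ki=0.0)  # droop alone: offset remains
with_agc = simulate_frequency(0.1)                  # AGC drives offset to zero
```

With droop alone, the deviation settles at -ΔP/(1/R + D) = -0.1/21 pu; adding the integral secondary reserve returns frequency toward nominal, which is the interaction between the two reserve types the paper's tool models at multi-area scale.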
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
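The DVFS trade-off described above can be made concrete with a toy energy model (an assumption-laden sketch, not the paper's scheduler): dynamic power scales roughly as C·V²·f, so running a fixed number of cycles at a lower voltage/frequency level cuts energy, which goes as C·V²·cycles, while stretching execution time. The capacitance value and the voltage/frequency levels below are invented for illustration.

```python
def task_energy_and_time(cycles, levels, level):
    """Toy DVFS model: dynamic power ~ C * V^2 * f and execution time is
    cycles / f, so energy per task = C * V^2 * cycles, independent of f."""
    v, f = levels[level]
    c_eff = 1e-9                         # assumed effective capacitance (F)
    time = cycles / f                    # seconds
    energy = c_eff * v * v * f * time    # joules
    return energy, time

# hypothetical (volts, hertz) operating points of one processor
LEVELS = {"high": (1.2, 2.0e9), "low": (0.9, 1.0e9)}

e_hi, t_hi = task_energy_and_time(1e9, LEVELS, "high")
e_lo, t_lo = task_energy_and_time(1e9, LEVELS, "low")
```

A multi-objective scheduler such as the hybrid PSO in the paper searches over such level assignments per task, trading schedule length (and hence QoS) against total energy.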
Options for Parallelizing a Planning and Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.
2011-01-01
Space missions have a growing interest in putting multi-core processors onboard spacecraft, since for many missions limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges informs an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.
Transit scheduling data integration : paratransit operations review and analysis
DOT National Transportation Integrated Search
2000-05-01
The ability of transit service providers in small urban areas and rural communities to meet increasing demands generated by welfare-to-work customers and other social agencies depends on their ability to make best use of available resources through e...
Montaudon, M; Desbarats, P; Berger, P; de Dietrich, G; Marthan, R; Laurent, F
2007-01-01
A thickened bronchial wall is the morphological substratum of most diseases of the airway. Theoretical and clinical models of bronchial morphometry have so far focused on bronchial lumen diameter, and bronchial length and angles, mainly assessed from bronchial casts. However, these models do not provide information on bronchial wall thickness. This paper reports in vivo values of cross-sectional wall area, lumen area, wall thickness and lumen diameter in ten healthy subjects as assessed by multi-detector computed tomography. A validated dedicated software package was used to measure these morphometric parameters up to the 14th bronchial generation, with respect to Weibel's model of bronchial morphometry, and up to the 12th according to Boyden's classification. Measured lumen diameters and homothety ratios were compared with theoretical values obtained from previously published studies, and no difference was found when considering dichotomic division of the bronchial tree. Mean wall area, lumen area, wall thickness and lumen diameter were then provided according to bronchial generation order, and mean homothety ratios were computed for wall area, lumen area and wall thickness as well as equations giving the mean value of each parameter for a given bronchial generation with respect to its value in generation 0 (trachea). Multi-detector computed tomography measurements of bronchial morphometric parameters may help to improve our knowledge of bronchial anatomy in vivo, our understanding of the pathophysiology of bronchial diseases and the evaluation of pharmacological effects on the bronchial wall. PMID:17919291
Symbolic Analysis of Concurrent Programs with Polymorphism
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam
2010-01-01
The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
Benchmarking GNU Radio Kernels and Multi-Processor Scheduling
2013-01-14
[Extraction fragment] The benchmarked platforms include an AMD E350 APU (comparable to the Intel Atom), an ARM Cortex A8 running on a Gumstix Overo on an Ettus USRP E110, and an Intel i7; Figure 1 of the report shows GFLOPs per second through an FFT array on the Intel i7 under multi-processor scheduling.
NASA Astrophysics Data System (ADS)
Zhou, J.; Zeng, X.; Mo, L.; Chen, L.; Jiang, Z.; Feng, Z.; Yuan, L.; He, Z.
2017-12-01
Generally, adaptive utilization and regulation of runoff in the source region of China's southwestern rivers is a typical multi-objective collaborative optimization problem. Intense competition and interdependence among the water supply, electricity generation and environment subsystems lead to a series of complex problems, represented by hydrological process variation, curtailed electricity output and water environment risk. Mathematically, the difficulty of multi-objective collaborative optimization lies in describing these reciprocal relationships and establishing an evolution model of the adaptive system. Based on the theory of complex systems science, this project therefore investigates the following aspects: the changing trend of coupled water resources; the covariant factors and driving mechanisms; the dynamic evolution of the mutual-feedback processes in the supply-generation-environment coupled system; the environmental response and influence mechanism of the coupled water resource system; the relationship between the leading risk factor and multiple risks, based on evolutionary stability and dynamic balance; the transfer mechanism of multiple risk responses as the leading risk factor varies; and a multiple-risk assessment index system and optimized decision theory for the multidimensional coupled feedback system. Building on these results, a dynamic method for balancing the efficiency of multiple objectives in the coupled feedback system and an optimized regulation model of water resources are proposed, and an adaptive scheduling mode considering the internal characteristics and external response of the coupled mutual-feedback water resource system is established. In this way, the project contributes to the theory and methodology of optimal water resource scheduling under uncertainty in the source region of China's southwestern rivers.
NASA Astrophysics Data System (ADS)
Surjandari, Isti; Rachman, Amar; Dianawati, Fauzia; Wibowo, R. Pramono
2011-10-01
With the Oil and Gas Law No. 22 of 2001, national and foreign private enterprises can invest in all sectors of oil and gas in Indonesia. In anticipation of this free competition, Pertamina, the state-owned enterprise that previously monopolized oil and gas business activities in Indonesia, must improve its services as well as its efficiency in order to compete in the free market, especially the cost efficiency of fuel distribution to gas stations (SPBU). To optimize the distribution activity, it is necessary to design a system for scheduling and routing daily fuel deliveries to every SPBU. The determination of routes and delivery schedules can be modeled as a Petrol Station Replenishment Problem (PSRP) with multiple depots, multiple products, time windows and split deliveries, which in this study is solved with a Tabu Search (TS) algorithm. The study was conducted in the area of Bandung, the capital of West Java province, a large city neighbouring Jakarta, the capital of Indonesia. Using fuel delivery data for one day, the results showed a 16.38% decrease in route distance compared with current conditions, which in turn reduced distribution costs by 5.22% and the total number of trips by 3.83%.
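Tabu search itself is simple to sketch. The minimal version below is illustrative only: the paper's neighbourhood is over delivery routes with depot, product, time-window and split-delivery structure, whereas this sketch flips bits in a generic assignment vector. It shows the two defining ingredients: a tabu list forbidding recently used moves for a fixed tenure, and an aspiration criterion that overrides the tabu when a move beats the best solution found.

```python
import random

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    """Minimal tabu search over bit vectors with a 1-flip neighbourhood."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu = {}                                 # bit index -> expiry iteration
    for it in range(iters):
        candidates = []
        for j in range(n_bits):
            y = x[:]
            y[j] ^= 1
            c = cost(y)
            # move allowed if not tabu, or by aspiration (beats best-so-far)
            if tabu.get(j, -1) < it or c < best_cost:
                candidates.append((c, j, y))
        if not candidates:
            continue
        c, j, x = min(candidates)             # best admissible neighbour
        tabu[j] = it + tenure                 # forbid flipping j back for a while
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# toy objective: Hamming distance to a hypothetical target assignment
target = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
best, c = tabu_search(lambda x: sum(a != b for a, b in zip(x, target)), 10)
```

Unlike plain hill-climbing, the search keeps moving after reaching a local optimum; the tabu list prevents it from immediately undoing its escape moves.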
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
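The same schedule values can be generated outside Excel; the Python sketch below mirrors the macros' intent rather than their code. A random-ratio schedule draws geometrically distributed response counts from a constant per-response probability, and a variable-interval schedule draws exponentially distributed intervals, giving a constant probability of reinforcement availability per unit time.

```python
import random

def random_ratio_values(n, p, seed=0):
    """Random-ratio: reinforcement becomes available after each response
    with constant probability p, so ratio values are geometric (mean 1/p)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        responses = 1
        while rng.random() >= p:   # keep responding until reinforced
            responses += 1
        out.append(responses)
    return out

def variable_interval_values(n, mean_s, seed=0):
    """Variable-interval: intervals drawn from an exponential distribution
    with the requested mean, i.e. a constant hazard of reinforcement
    becoming available per unit time."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_s) for _ in range(n)]

vr = random_ratio_values(1000, p=0.1)      # roughly a VR-10 schedule
vi = variable_interval_values(1000, 30.0)  # roughly a VI-30 s schedule
```

Variable (as opposed to random) schedules are often built instead by shuffling a fixed list of values around the target mean; the functions above cover the random subtypes the article distinguishes.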
Nurse Scheduling by Cooperative GA with Effective Mutation Operator
NASA Astrophysics Data System (ADS)
Ohki, Makoto
In this paper, we propose an effective mutation operator for a Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem for which many requirements must be considered. In real hospitals, the schedule changes frequently, and these changes to the shift schedule yield various problems, for example, a fall in the nursing level. We describe a technique for reoptimizing the nurse schedule in response to a change. The conventional CGA has a superior ability for local search by means of its crossover operator, but often stagnates in an unfavorable situation because its ability for global search is inferior. When the optimization stagnates for a long generation cycle, the searching point, the population in this case, has been caught in a wide local minimum area, and a small change in the population is required to escape it. Based on this consideration, we propose a mutation operator activated depending on the optimization speed: when the optimization stagnates, in other words, when the optimization speed decreases, the mutation yields small changes in the population, which can then escape from the local minimum area. However, this mutation operator requires two well-defined parameters, which means that users have to choose their values carefully. To solve this problem, we propose a periodic mutation operator defined by only one parameter. This simplified mutation operator is effective over a wide range of the parameter value.
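The single-parameter idea can be sketched as follows (an illustrative bit-flip version under an assumed binary shift encoding; the paper's operator acts on nurse shift schedules): every `period` generations, each schedule in the population receives one small random change, regardless of optimization speed.

```python
import random

def periodic_mutation(population, generation, period, rng=random):
    """Periodic mutation with a single parameter `period`: every `period`
    generations, flip one random gene in each schedule to nudge the whole
    population out of a wide local minimum. Other generations pass through
    unchanged."""
    if generation % period != 0:
        return population
    mutated = []
    for genome in population:
        g = list(genome)                 # copy; the input is left intact
        g[rng.randrange(len(g))] ^= 1    # flip one on/off assignment
        mutated.append(g)
    return mutated

pop = [[0] * 8 for _ in range(3)]
shaken = periodic_mutation(pop, generation=5, period=5)   # mutation fires
same = periodic_mutation(pop, generation=4, period=5)     # pass-through
```

Compared with the speed-triggered operator, nothing has to be measured or tuned beyond the period itself, which is the simplification the abstract argues for.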
Xiao, Lishan; Lin, Tao; Chen, Shaohua; Zhang, Guoqin; Ye, Zhilong; Yu, Zhaowu
2015-01-01
The relationship between social stratification and municipal solid waste generation remains uncertain under current rapid urbanization. Based on a multi-object spatial sampling technique, we selected 191 households in a rapidly urbanizing area of Xiamen, China. The selected communities were classified into three types in the context of housing policy reform in China: work-unit, transitional, and commercial communities. Field survey data were used to characterize household waste generation patterns with respect to community stratification. Our results revealed a disparity in waste generation profiles among households. The three community types differed with respect to family income, living area, religious affiliation, and homeowner occupation. Income, family structure, and lifestyle caused significant differences in waste generation in work-unit, transitional, and commercial communities, respectively. Urban waste generation patterns are expected to evolve with accelerating urbanization and the associated community transition. A multi-scale integrated analysis of societal and ecosystem metabolism approach was applied to waste metabolism, linking it to the particular socioeconomic conditions that influence material flows and their evolution. Waste metabolism, in both pace and density, was highest for family-structure-driven patterns, followed by lifestyle-driven and income-driven patterns. The results will guide community-specific management policies in rapidly urbanizing areas.
Marching to the beat of Moore's Law
NASA Astrophysics Data System (ADS)
Borodovsky, Yan
2006-03-01
Area density scaling in integrated circuits, defined as transistor count per unit area, has followed the famous observation-cum-prediction by Gordon Moore for many generations. Known as "Moore's Law", which predicts density doubling every 18-24 months, it has provided all-important synchronizing guidance and reference for tools and materials suppliers, IC manufacturers and their customers as to the minimal requirements their products and services need to meet to satisfy technical and financial expectations, in support of the infrastructure required for the development and manufacturing of each technology generation node. Multiple lithography solutions are usually under consideration for any given node. In general, three broad classes of solutions are considered: evolutionary - technology that extends the existing technology infrastructure at similar or slightly higher cost and risk to schedule; revolutionary - technology that discards significant parts of the existing infrastructure at similar cost and higher risk to schedule, but promises higher capability than the evolutionary approach; and, last but not least, disruptive - an approach that as a rule promises similar or better capabilities and much lower cost, with wholly unpredictable risk to schedules and product yields. This paper examines various lithography approaches and their respective merits against the criteria of infrastructure availability, affordability and risk to IC manufacturers' schedules, and the strategy involved in developing and selecting the best solution, in an attempt to sort out the key factors that will impact the choice of lithography for large-scale manufacturing at future technology nodes.
CMS Readiness for Multi-Core Workload Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016, is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
CMS readiness for multi-core workload scheduling
NASA Astrophysics Data System (ADS)
Perez-Calero Yzquierdo, A.; Balcas, J.; Hernandez, J.; Aftab Khan, F.; Letts, J.; Mason, D.; Verguilov, V.
2017-10-01
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of the Run 2 events requires parallelization of the code to reduce the memory-per-core footprint constraining serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. The allocation of computing resources for multi-core tasks, however, becomes a complex problem in itself. The CMS workload submission infrastructure employs multi-slot partitionable pilots, built on HTCondor and GlideinWMS native features, to enable scheduling of single and multi-core jobs simultaneously. This provides a solution for the scheduling problem in a uniform way across grid sites running a diversity of gateways to compute resources and batch system technologies. This paper presents this strategy and the tools on which it has been implemented. The experience of managing multi-core resources at the Tier-0 and Tier-1 sites during 2015, along with the deployment phase to Tier-2 sites during early 2016 is reported. The process of performance monitoring and optimization to achieve efficient and flexible use of the resources is also described.
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
NASA Technical Reports Server (NTRS)
Smith, Stephen F.; Pathak, Dhiraj K.
1991-01-01
In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.
Active local control of propeller-aircraft run-up noise.
Hodgson, Murray; Guo, Jingnan; Germain, Pierre
2003-12-01
Engine run-ups are part of the regular maintenance schedule at Vancouver International Airport. The noise generated by the run-ups propagates into neighboring communities, disturbing the residents. Active noise control is a potentially cost-effective alternative to passive methods, such as enclosures. Propeller aircraft generate low-frequency tonal noise that is highly compatible with active control. This paper presents a preliminary investigation of the feasibility and effectiveness of controlling run-up noise from propeller aircraft using local active control. Computer simulations for different configurations of multi-channel active-noise-control systems, aimed at reducing run-up noise in adjacent residential areas using a local-control strategy, were performed. These were based on an optimal configuration of a single-channel control system studied previously. The variations of the attenuation and amplification zones with the number of control channels, and with source/control-system geometry, were studied. Here, the aircraft was modeled using one or two sources, with monopole or multipole radiation patterns. Both free-field and half-space conditions were considered: for the configurations studied, results were similar in the two cases. In both cases, large triangular quiet zones, with local attenuations of 10 dB or more, were obtained when nine or more control channels were used. Increases of noise were predicted outside of these areas, but these were minimized as more control channels were employed. By combining predicted attenuations with measured noise spectra, noise levels after implementation of an active control system were estimated.
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
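The same constant-probability logic the article implements as Excel macros can be sketched in a few lines of code. This is an illustrative sketch, not the article's macros: random-ratio values are geometrically distributed responses at a fixed per-response probability, and random-interval/time values are exponentially distributed waits at a constant hazard.

```python
import random


def random_ratio_values(n, mean_ratio, seed=None):
    """Sample n random-ratio schedule values: each response completes the
    ratio with constant probability p = 1/mean_ratio (geometric sampling)."""
    rng = random.Random(seed)
    p = 1.0 / mean_ratio
    values = []
    for _ in range(n):
        count = 1
        while rng.random() >= p:
            count += 1
        values.append(count)
    return values


def random_interval_values(n, mean_interval, seed=None):
    """Sample n random-interval (or random-time) durations with constant
    hazard: exponentially distributed waits with the given mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_interval) for _ in range(n)]
```

Variable (as opposed to random) schedules would instead draw values from any list whose mean equals the schedule value, e.g. a fixed progression shuffled into random order.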
Ancillary-service costs for 12 US electric utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, B.; Hirst, E.
1996-03-01
Ancillary services are those functions performed by electrical generating, transmission, system-control, and distribution-system equipment and people to support the basic services of generating capacity, energy supply, and power delivery. The Federal Energy Regulatory Commission defined ancillary services as "those services necessary to support the transmission of electric power from seller to purchaser given the obligations of control areas and transmitting utilities within those control areas to maintain reliable operations of the interconnected transmission system." FERC divided these services into three categories: "actions taken to effect the transaction (such as scheduling and dispatching services), services that are necessary to maintain the integrity of the transmission system, [and] services needed to correct for the effects associated with undertaking a transaction." In March 1995, FERC published a proposed rule to ensure open and comparable access to transmission networks throughout the country. The rule defined six ancillary services and developed pro forma tariffs for these services: scheduling and dispatch, load following, system protection, energy imbalance, loss compensation, and reactive power/voltage control.
On the potential of a multi-temporal AMSR-E data analysis for soil wetness monitoring
NASA Astrophysics Data System (ADS)
Lacava, T.; Coviello, I.; Calice, G.; Mazzeo, G.; Pergola, N.; Tramutoli, V.
2009-12-01
Soil moisture is a critical element of both the global water and energy budgets. The use of satellite remote sensing data for the characterization of soil moisture fields at different spatial and temporal scales has increased steadily in recent years, thanks also to the new generation of microwave sensors (both active and passive) orbiting the Earth. Among the microwave radiometers that could be used for soil moisture retrieval, the Advanced Microwave Scanning Radiometer on the Earth Observing System (AMSR-E) is the one that, given its spectral characteristics, should give the most reliable results. Its ability to collect information in five observational bands in the range 6.9-89 GHz (with dual polarization) currently makes it, pending the ESA Soil Moisture and Ocean Salinity mission (SMOS, scheduled for September 2009) and the NASA Soil Moisture Active Passive mission (SMAP, scheduled for 2013), the best radiometer for soil moisture retrieval. Unfortunately, after its launch (AMSR-E has been flying aboard the EOS-Aqua satellite since 2002), diffuse C-band Radio-Frequency Interference (RFI) was discovered contaminating AMSR-E radiances over many areas of the world. For this reason, soil moisture retrieval algorithms based on the less RFI-affected X band have often been preferred to the original C-band ones. As a consequence, the sensitivity of such measurements is decreased, because of the lower penetrating capability of X-band wavelengths compared with C band, as well as their greater noisiness, due to their high sensitivity to the presence of vegetation in the sensor field of view. To face these problems, this work uses a general methodology for multi-temporal satellite data analysis (Robust Satellite Techniques, RST). The RST approach, already successfully applied in the framework of hydro-meteorological risk mitigation, should help in managing AMSR-E data for several purposes.
In this paper, in particular, we have looked into the possible improvement, in terms of both quality and reliability, of AMSR-E C-band soil moisture retrieval that a differential approach like RST may produce. To this end, a multi-temporal analysis of a long-term historical series of AMSR-E C-band data has been performed. Preliminary results of this analysis are shown and discussed, also by comparison with the standard AMSR-E soil moisture products provided daily by NASA. In detail, achievements obtained by investigating several flooding events that occurred over different areas of the world are presented.
Project Physics Teacher Guide 1, Concepts of Motion.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 1 are presented in this manual to help teachers make effective use of learning materials. Curriculum objectives are discussed in connection with instructional materials, suggested year time schedules, multi-media schedules, schedule blocks, resource charts, and experiment summaries. Brief analyses are…
VAXELN Experimentation: Programming a Real-Time Periodic Task Dispatcher Using VAXELN Ada 1.1
1987-11-01
synchronization to the SQM and VAXELN semaphores. Based on real-time scheduling theory, the optimal rate-monotonic scheduling algorithm [Liu 73] ... a schedulability test based on the rate-monotonic algorithm, namely task-lumping [Sha 87], was necessary to calculate the theoretically expected schedulability ... Guide, Digital Equipment Corporation, Maynard, MA, 1986. [Liu 73] Liu, C.L., Layland, J.W., Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment.
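The rate-monotonic schedulability test referenced above admits a compact sufficient condition, the Liu-Layland utilization bound. A minimal sketch (function name and task representation are illustrative, not from the report):

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (computation_time, period) pairs for periodic tasks.
    A task set is guaranteed schedulable under rate-monotonic priority
    assignment if total utilization <= n * (2**(1/n) - 1).  The test is
    sufficient but not necessary: task sets above the bound may still be
    schedulable (an exact check requires response-time analysis).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)
```

For two tasks the bound is about 0.828, so a set with utilization 0.375 passes while one with utilization above 1 necessarily fails.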
NASA Astrophysics Data System (ADS)
Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying
Management of group decision making is an important issue in water resource management. To overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model solves the poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the coordination problem in group decision making with limited resources can be solved based on the effectiveness of distance-based group conflict resolution. The simulation results show that the proposed model converges better than existing models.
7 CFR 3560.353 - Scheduling of on-site monitoring reviews.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Scheduling of on-site monitoring reviews. 3560.353... SERVICE, DEPARTMENT OF AGRICULTURE DIRECT MULTI-FAMILY HOUSING LOANS AND GRANTS Agency Monitoring § 3560.353 Scheduling of on-site monitoring reviews. Generally, the Agency will provide the borrower prior...
7 CFR 3560.353 - Scheduling of on-site monitoring reviews.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Scheduling of on-site monitoring reviews. 3560.353... SERVICE, DEPARTMENT OF AGRICULTURE DIRECT MULTI-FAMILY HOUSING LOANS AND GRANTS Agency Monitoring § 3560.353 Scheduling of on-site monitoring reviews. Generally, the Agency will provide the borrower prior...
7 CFR 3560.353 - Scheduling of on-site monitoring reviews.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Scheduling of on-site monitoring reviews. 3560.353... SERVICE, DEPARTMENT OF AGRICULTURE DIRECT MULTI-FAMILY HOUSING LOANS AND GRANTS Agency Monitoring § 3560.353 Scheduling of on-site monitoring reviews. Generally, the Agency will provide the borrower prior...
7 CFR 3560.353 - Scheduling of on-site monitoring reviews.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Scheduling of on-site monitoring reviews. 3560.353... SERVICE, DEPARTMENT OF AGRICULTURE DIRECT MULTI-FAMILY HOUSING LOANS AND GRANTS Agency Monitoring § 3560.353 Scheduling of on-site monitoring reviews. Generally, the Agency will provide the borrower prior...
DOT National Transportation Integrated Search
2016-06-01
The purpose of this project is to study the optimal scheduling of work zones so that they have minimum negative impact (e.g., travel delay, gas consumption, accidents, etc.) on transport service vehicle flows. In this project, a mixed integer linear ...
Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support
NASA Astrophysics Data System (ADS)
Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar
This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.
Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems
NASA Astrophysics Data System (ADS)
Cruz-Chávez, Marco Antonio
2015-11-01
This paper presents a neighbourhood generation mechanism for job shop scheduling problems (JSSPs). To obtain a feasible neighbour with the generation mechanism, it is only necessary to permute an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. The neighbourhood generation mechanism is demonstrated to be very efficient and effective within a simulated annealing algorithm.
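The adjacent-pair permutation move can be sketched inside a generic simulated annealing loop. This is a simplified illustration, not the paper's implementation: the schedule is modeled as a flat operation sequence and the JSSP feasibility conditions (machine and job precedence, slack-time check) are deliberately omitted; the cost function is supplied by the caller.

```python
import math
import random


def adjacent_swap_neighbour(schedule, rng):
    """Generate a neighbour by permuting one adjacent pair of operations.

    In the full JSSP mechanism, the swap is only guaranteed feasible when
    there is no slack time between the pair; that check is omitted here.
    """
    i = rng.randrange(len(schedule) - 1)
    neighbour = list(schedule)
    neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
    return neighbour


def simulated_annealing(schedule, cost, t0=10.0, cooling=0.99, iters=2000, seed=0):
    """Standard SA skeleton: accept worse neighbours with Boltzmann probability."""
    rng = random.Random(seed)
    current = list(schedule)
    best = list(current)
    temperature = t0
    for _ in range(iters):
        candidate = adjacent_swap_neighbour(current, rng)
        delta = cost(candidate) - cost(current)
        if delta < 0 or rng.random() < math.exp(-delta / max(temperature, 1e-12)):
            current = candidate
            if cost(current) < cost(best):
                best = list(current)
        temperature *= cooling
    return best
```

Because adjacent swaps generate the whole permutation group, the move suffices in principle to reach any schedule, which is what makes such a small neighbourhood usable.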
Multi-Mission Automated Task Invocation Subsystem
NASA Technical Reports Server (NTRS)
Cheng, Cecilia S.; Patel, Rajesh R.; Sayfi, Elias M.; Lee, Hyun H.
2009-01-01
Multi-Mission Automated Task Invocation Subsystem (MATIS) is software that establishes a distributed data-processing framework for automated generation of instrument data products from a spacecraft mission. Each mission may set up a set of MATIS servers for processing its data products. MATIS embodies lessons learned in experience with prior instrument- data-product-generation software. MATIS is an event-driven workflow manager that interprets project-specific, user-defined rules for managing processes. It executes programs in response to specific events under specific conditions according to the rules. Because requirements of different missions are too diverse to be satisfied by one program, MATIS accommodates plug-in programs. MATIS is flexible in that users can control such processing parameters as how many pipelines to run and on which computing machines to run them. MATIS has a fail-safe capability. At each step, MATIS captures and retains pertinent information needed to complete the step and start the next step. In the event of a restart, this information is retrieved so that processing can be resumed appropriately. At this writing, it is planned to develop a graphical user interface (GUI) for monitoring and controlling a product generation engine in MATIS. The GUI would enable users to schedule multiple processes and manage the data products produced in the processes. Although MATIS was initially designed for instrument data product generation,
NASA Technical Reports Server (NTRS)
Wong, Gregory L.; Denery, Dallas (Technical Monitor)
2000-01-01
The Dynamic Planner (DP) has been designed, implemented, and integrated into the Center-TRACON Automation System (CTAS) to assist Traffic Management Coordinators (TMCs), in real time, with the task of planning and scheduling arrival traffic approximately 35 to 200 nautical miles from the destination airport. The TMC may input to the DP a series of current and future scheduling constraints that reflect the operational and environmental conditions of the airspace. Under these constraints, the DP uses flight plans, track updates, and Estimated Time of Arrival (ETA) predictions to calculate optimal runway assignments and arrival schedules that help ensure an orderly, efficient, and conflict-free flow of traffic into the terminal area. These runway assignments and schedules can be shown directly to controllers or they can be used by other CTAS tools to generate advisories to the controllers. Additionally, the TMC and controllers may override the decisions made by the DP for tactical considerations. The DP adapts its computations to accommodate these manual inputs.
Quantifying Scheduling Challenges for Exascale System Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R
2015-01-01
The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.
Xiao, Lishan; Lin, Tao; Chen, Shaohua; Zhang, Guoqin; Ye, Zhilong; Yu, Zhaowu
2015-01-01
The relationship between social stratification and municipal solid waste generation remains uncertain under current rapid urbanization. Based on a multi-object spatial sampling technique, we selected 191 households in a rapidly urbanizing area of Xiamen, China. The selected communities were classified into three types: work-unit, transitional, and commercial communities in the context of housing policy reform in China. Field survey data were used to characterize household waste generation patterns considering community stratification. Our results revealed a disparity in waste generation profiles among different households. The three community types differed with respect to family income, living area, religious affiliation, and homeowner occupation. Income, family structure, and lifestyle caused significant differences in waste generation among work-unit, transitional, and commercial communities, respectively. Urban waste generation patterns are expected to evolve due to accelerating urbanization and associated community transition. A multi-scale integrated analysis of societal and ecosystem metabolism approach was applied to waste metabolism linking it to particular socioeconomic conditions that influence material flows and their evolution. Waste metabolism, both pace and density, was highest for family structure driven patterns, followed by lifestyle and income driven. The results will guide community-specific management policies in rapidly urbanizing areas. PMID:26690056
MULTI-FACETED SUSTAINABILITY ON ITHACA COLLEGE NATURAL LANDS
This student-generated proposal presents a multi-faceted program for sustainable stewardship of the natural areas south of the built campus of Ithaca College. Our challenge is to use student research and class projects to enhance biodiversity, support education and research, and...
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie
2016-09-01
A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the Open Daylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth on demand (BoD) provisioning solution is proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Groups (SRLG)-disjoint routing schemes are adopted to separate each tenant for reliability. The SD-MDON testbed is built based on the proposed hierarchical control architecture. Then the proposed time scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on the testbed. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J
2012-01-01
In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garikapati, Venu; Astroza, Sebastian; Bhat, Prerna C.
This paper is motivated by the increasing recognition that modeling activity-travel demand for a single day of the week, as is done in virtually all travel forecasting models, may be inadequate in capturing underlying processes that govern activity-travel scheduling behavior. The considerable variability in daily travel suggests that there are important complementary relationships and competing tradeoffs involved in scheduling and allocating time to various activities across days of the week. Both limited survey data availability and methodological challenges in modeling week-long activity-travel schedules have precluded the development of multi-day activity-travel demand models. With passive and technology-based data collection methods increasingly in vogue, the collection of multi-day travel data may become increasingly commonplace in the years ahead. This paper addresses the methodological challenge associated with modeling multi-day activity-travel demand by formulating a multivariate multiple discrete-continuous probit (MDCP) model system. The comprehensive framework ties together two MDCP model components, one corresponding to weekday time allocation and the other to weekend activity-time allocation. By tying the two MDCP components together, the model system also captures relationships in activity-time allocation between weekdays on the one hand and weekend days on the other. Model estimation on a week-long travel diary data set from the United Kingdom shows that there are significant inter-relationships between weekdays and weekend days in activity-travel scheduling behavior. The model system presented in this paper may serve as a higher-level multi-day activity scheduler in conjunction with existing daily activity-based travel models.
Intercell scheduling: A negotiation approach using multi-agent coalitions
NASA Astrophysics Data System (ADS)
Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde
2016-10-01
Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. Taking advantage of the extended vision of the coalition agents, the global optimization is improved and the communication cost is reduced. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to the solution quality and computational costs.
Generically Used Expert Scheduling System (GUESS): User's Guide Version 1.0
NASA Technical Reports Server (NTRS)
Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira
1996-01-01
This user's guide contains instructions explaining how to best operate the program GUESS, a generic expert scheduling system. GUESS incorporates several important features for a generic scheduler, including automatic scheduling routines to generate a 'first' schedule for the user, a user interface that includes Gantt charts and enables the human scheduler to manipulate schedules manually, diagnostic report generators, and a variety of scheduling techniques. The current version of GUESS runs on an IBM PC or compatible in the Windows 3.1 or Windows '95 environment.
Continuous Improvement in Battery Testing at the NASA/JSC Energy System Test Area
NASA Technical Reports Server (NTRS)
Boyd, William; Cook, Joseph
2003-01-01
The Energy Systems Test Area (ESTA) at the Lyndon B. Johnson Space Center in Houston, Texas conducts development and qualification tests to fulfill Energy Systems Division responsibilities relevant to NASA programs and projects. ESTA has historically called upon a variety of fluid, mechanical, electrical, environmental, and data system capabilities spread amongst five full-service facilities to test human and human-supported spacecraft in the areas of propulsion systems, fluid systems, pyrotechnics, power generation, and power distribution and control systems. Improvements at ESTA are being made in full earnest to offer NASA project offices the option of a thorough test regime that is balanced against cost and schedule constraints. In order to continue testing of the enabling power-related technologies utilized by the Energy Systems Division, an especially proactive effort has been made to increase the cost effectiveness and schedule responsiveness of battery testing. This paper describes the continuous improvement in battery testing at the Energy Systems Test Area achieved through consolidation, streamlining, and standardization.
Multiple R&D projects scheduling optimization with improved particle swarm algorithm.
Liu, Mengqi; Shan, Miyuan; Wu, Juan
2014-01-01
For most enterprises, a key step in winning the initiative in fierce market competition is to improve their R&D ability to meet the various demands of customers in a more timely and less costly manner. This paper discusses the features of multiple R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
NASA Technical Reports Server (NTRS)
Lee, Katharine K.; Davis, Thomas J.; Levin, Kerry M.; Rowe, Dennis W.
2001-01-01
The Traffic Management Advisor (TMA) is a decision-support tool for traffic managers and air traffic controllers that provides traffic flow visualization and other flow management tools. TMA creates an efficiently sequenced and safely spaced schedule for arrival traffic that meets but does not exceed specified airspace system constraints. TMA is being deployed at selected facilities throughout the National Airspace System in the US as part of the FAA's Free Flight Phase 1 program. TMA development and testing, and its current deployment, focuses on managing the arrival capacity for single major airports within single terminal areas and single en route centers. The next phase of development for this technology is the expansion of the TMA capability to complex facilities in which a terminal area or airport is fed by multiple en route centers, thus creating a multi-center TMA functionality. The focus of the multi-center TMA (McTMA) development is on the busy facilities in the Northeast corridor of the US. This paper describes the planning and development of McTMA and the challenges associated with adapting a successful traffic flow management tool for a very complex airspace.
N.Y.C. School Marches to Unorthodox Schedule
ERIC Educational Resources Information Center
Sawchuk, Stephen
2010-01-01
Superficially, the Brooklyn Generation School, in the Flatbush area, looks a lot like the other six small public high schools that share space in this tall building, the former South Shore High School. What's noticeably different about it, though, is the strength of the relationships among staff members. Teachers can be seen running across the…
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness, and total machine idle time. Flow shop scheduling models continue to evolve to capture real production systems more accurately. Since flow shop scheduling is NP-hard, metaheuristics are the most suitable solution methods. One such metaheuristic is PSO, an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit it using a probability transition matrix mechanism. To handle the multiple objectives, we use a Pareto-optimal variant (MPSO). MPSO outperforms plain PSO because its solution set has a higher probability of containing the optimal solution and lies closer to it.
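The makespan objective above follows the standard permutation flow shop completion-time recurrence, C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]. A minimal sketch with hypothetical job data (not the paper's instances):

```python
# Makespan of a permutation flow shop schedule via the standard
# completion-time recurrence over jobs (in sequence order) and machines.
def makespan(sequence, proc_times):
    """sequence: job indices in processing order;
    proc_times[job][machine]: processing time of job on that machine."""
    n_machines = len(proc_times[0])
    completion = [0] * n_machines  # completion time of the last job per machine
    for job in sequence:
        for m in range(n_machines):
            prev_machine = completion[m - 1] if m > 0 else 0
            completion[m] = max(completion[m], prev_machine) + proc_times[job][m]
    return completion[-1]

# Two jobs, two machines: job 0 -> (3, 2), job 1 -> (2, 4).
print(makespan([0, 1], [[3, 2], [2, 4]]))  # -> 9
print(makespan([1, 0], [[3, 2], [2, 4]]))  # -> 8
```

A discrete PSO (or any metaheuristic) would search over such permutations, evaluating each candidate with a recurrence like this one.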
NASA Astrophysics Data System (ADS)
Zhang, Xiaojie; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa
2016-01-01
Repeat-pass Interferometric Synthetic Aperture Radar (InSAR) is a technique that can be used to generate DEMs. But the accuracy of InSAR is greatly limited by geometric distortions, atmospheric effects, and decorrelation, particularly in mountainous areas such as western China, where no high-quality DEM has so far been produced. Since each InSAR DEM generated from data of different frequencies and baselines has its own advantages and disadvantages, there is great potential to overcome some of the limitations of InSAR by fusing Multi-baseline and Multi-frequency Interferometric Results (MMIRs). This paper proposes a fusion method based on an Extended Kalman Filter (EKF), which takes the InSAR-derived DEMs as states in the prediction step and the flattened interferograms as observations in the update step to generate the final fused DEM. Before the fusion, layover and shadow regions, low-coherence regions, and regions with large height error are detected, because MMIRs in these regions are believed to be unreliable and are therefore excluded. The whole processing flow is tested with TerraSAR-X and Envisat ASAR datasets. Finally, the fused DEM is validated against ASTER GDEM and the national standard DEM of China. The results demonstrate that the proposed method is effective even in low-coherence areas.
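In the scalar case, the Kalman fusion step reduces to a variance-weighted update of a prior height estimate by a new observation. A minimal illustration with hypothetical heights and variances (not the paper's TerraSAR-X/Envisat data):

```python
def kalman_update(prior_mean, prior_var, meas, meas_var):
    # Scalar Kalman update: fuse a prior height estimate with a new
    # measurement, weighting each by the inverse of its variance.
    gain = prior_var / (prior_var + meas_var)
    post_mean = prior_mean + gain * (meas - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

# Fuse an X-band height (100 m, variance 4) with a C-band height (104 m, variance 4):
mean, var = kalman_update(100.0, 4.0, 104.0, 4.0)
print(mean, var)  # -> 102.0 2.0
```

With equal variances the fused height is the average, and the posterior variance is halved; unequal variances pull the estimate toward the more reliable DEM.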
A New Lagrangian Relaxation Method Considering Previous Hour Scheduling for Unit Commitment Problem
NASA Astrophysics Data System (ADS)
Khorasani, H.; Rashidinejad, M.; Purakbari-Kasmaie, M.; Abdollahi, A.
2009-08-01
Generation scheduling is a crucial challenge in power systems, especially in the new environment of a liberalized electricity industry. A new Lagrangian relaxation method for unit commitment (UC) is presented for solving the generation scheduling problem. This paper focuses on the economic aspect of the UC problem while studying previous-hour scheduling as a very important issue: the generation schedule for the present hour is determined by taking the previous hour's schedule into account. The impacts of hot and cold start-up costs are also considered. Case studies and numerical analysis present significant outcomes and demonstrate the effectiveness of the proposed method.
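The economic dispatch underlying such scheduling methods follows the equal-incremental-cost condition: each committed unit runs where its marginal cost equals a common multiplier λ. A lossless sketch with hypothetical quadratic cost coefficients, ignoring unit limits and the commitment decisions themselves; λ is found by bisection:

```python
def dispatch(units, demand, tol=1e-6):
    """Equal-incremental-cost dispatch for units with cost b*P + c*P^2.
    Marginal cost b + 2*c*P = lam gives P = (lam - b) / (2*c); bisect lam
    until the outputs sum to the demand (losses and limits ignored)."""
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        outputs = [(lam - b) / (2.0 * c) for b, c in units]
        if sum(outputs) < demand:
            lo = lam
        else:
            hi = lam
    return outputs

# Two units: marginal costs 10 + 0.1*P and 12 + 0.1*P; demand 100 MW.
P = dispatch([(10.0, 0.05), (12.0, 0.05)], 100.0)
print([round(p, 2) for p in P])  # -> [60.0, 40.0]
```

At the solution both units see λ = 16 $/MWh: the cheaper unit carries 60 MW and the dearer one 40 MW.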
Bosompem, Christian; Stemn, Eric; Fei-Baffoe, Bernard
2016-10-01
The increase in the quantity of municipal solid waste generated as a result of population growth in most urban areas has resulted in the difficulty of locating suitable land areas to be used as landfills. To curb this, waste transfer stations are used. The Kumasi Metropolitan Area, even though it has an engineered landfill, is faced with the problem of waste collection from the generation centres to the final disposal site. Thus in this study, multi-criteria decision analysis incorporated into a geographic information system was used to determine potential waste transfer station sites. The key result established 11 sites located within six different sub-metros. This result can be used by decision makers for site selection of the waste transfer stations after taking into account other relevant ecological and economic factors. © The Author(s) 2016.
Generating description with multi-feature fusion and saliency maps of image
NASA Astrophysics Data System (ADS)
Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo
2018-04-01
Generating a description for an image can be regarded as visual understanding; the task spans artificial intelligence, machine learning, natural language processing, and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features, but such features cannot adequately capture the content of images, as they may focus only on the object area of an image. We therefore add scene information to the image features using a CNN trained on Places205. Experiments show that the model with multiple features extracted by the two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that it performs better than several state-of-the-art methods.
A novel multi-item joint replenishment problem considering multiple type discounts.
Cui, Ligang; Zhang, Yajun; Deng, Jie; Xu, Maozeng
2018-01-01
In business replenishment, discount offers for multiple items may provide either different discount schedules of a single discount type, or schedules with multiple discount types. This paper investigates the joint effects of multiple discount schemes on multi-item joint replenishment decisions. A joint replenishment problem (JRP) model considering three discount types simultaneously (all-units discount, incremental discount, and total volume discount) is constructed to determine the basic cycle time and the joint replenishment frequency of each item. To solve the proposed problem, a heuristic algorithm is developed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiments are performed to test the algorithm, and the computational results for JRPs under different discount combinations show differing degrees of significance in replenishment cost reduction.
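The all-units and incremental discount types the model combines differ in how the unit price applies: the former applies the deepest bracket's price to every unit, the latter only to the units inside each bracket. A minimal sketch with a hypothetical two-bracket schedule:

```python
def all_units_cost(q, breaks):
    # breaks: [(min_qty, unit_price), ...] ascending, first min_qty = 0.
    # The price of the deepest bracket reached applies to all q units.
    price = [p for m, p in breaks if q >= m][-1]
    return q * price

def incremental_cost(q, breaks):
    # Each bracket's price applies only to the units falling inside it.
    total = 0.0
    for i, (m, p) in enumerate(breaks):
        upper = breaks[i + 1][0] if i + 1 < len(breaks) else float("inf")
        if q > m:
            total += (min(q, upper) - m) * p
    return total

breaks = [(0, 5.0), (100, 4.0)]  # $5/unit below 100 units, $4/unit at 100+
print(all_units_cost(150, breaks))    # -> 600.0
print(incremental_cost(150, breaks))  # -> 700.0
```

For the same order quantity the two schemes yield different costs, which is why a JRP that mixes discount types must evaluate each item's applicable scheme inside the cycle-time search.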
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE)]. In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provide (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that takes into account the specific maturities of each system's (sensor and algorithm) technology to provide for a program that achieves continuous improvement while retaining its manageability.
User-Assisted Store Recycling for Dynamic Task Graph Schedulers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt, Mehmet Can; Krishnamoorthy, Sriram; Agrawal, Gagan
The emergence of the multi-core era has led to increased interest in designing effective yet practical parallel programming models. Models based on task graphs that operate on single-assignment data are attractive in several ways: they can support dynamic applications and precisely represent the available concurrency. However, they also require nuanced algorithms for scheduling and memory management for efficient execution. In this paper, we consider memory-efficient dynamic scheduling of task graphs. Specifically, we present a novel approach for dynamically recycling the memory locations assigned to data items as they are produced by tasks. We develop algorithms to identify memory-efficient store recycling functions by systematically evaluating the validity of a set of (user-provided or automatically generated) alternatives. Because a recycling function can be input-data-dependent, we have also developed support for continued correct execution of a task graph in the presence of a potentially incorrect store recycling function. Experimental evaluation demonstrates that our approach to automatic store recycling incurs little to no overhead, achieves memory usage comparable to the best manually derived solutions, often produces recycling functions valid across problem sizes and input parameters, and efficiently recovers from an incorrect choice of store recycling functions.
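The core idea of store recycling, releasing a data item's slot once its last consumer has executed, can be sketched with a simple free-list allocator over a topologically ordered task graph (a toy model, not the paper's algorithm):

```python
def assign_slots(tasks, consumers):
    """Assign a storage slot to each task's output, recycling a slot once
    every consumer of the value stored there has executed.
    tasks: tasks in topological (execution) order;
    consumers[t]: list of tasks that read t's output."""
    slot_of, free, next_slot = {}, [], 0
    remaining = {t: len(consumers.get(t, [])) for t in tasks}
    for t in tasks:
        # allocate a slot for t's output, reusing a freed one if possible
        slot_of[t] = free.pop() if free else next_slot
        if slot_of[t] == next_slot:
            next_slot += 1
        # t has now run: release slots whose last reader is t
        for producer, readers in consumers.items():
            if t in readers:
                remaining[producer] -= 1
                if remaining[producer] == 0:
                    free.append(slot_of[producer])
    return slot_of, next_slot

# Chain a -> b -> c: a's slot becomes free after b runs, so c reuses it.
slots, slots_used = assign_slots(["a", "b", "c"], {"a": ["b"], "b": ["c"]})
print(slots_used)  # -> 2
```

Three values fit in two slots here; without recycling, the single-assignment model would need one slot per task output.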
Design and Analysis of Self-Adapted Task Scheduling Strategies in Wireless Sensor Networks
Guo, Wenzhong; Xiong, Naixue; Chao, Han-Chieh; Hussain, Sajid; Chen, Guolong
2011-01-01
In a wireless sensor network (WSN), resource usage is closely tied to the execution of tasks that consume a certain amount of computing capacity and communication bandwidth. Parallel processing among sensors is a promising way to provide the demanded computation capacity in WSNs. Task allocation and scheduling is a typical problem in the area of high-performance computing. Although task allocation and scheduling in wired processor networks has been well studied, its counterpart for WSNs remains largely unexplored. Existing high-performance computing solutions cannot be directly implemented in WSNs due to limitations such as constrained resource availability and the shared communication medium. In this paper, a self-adapted task scheduling strategy for WSNs is presented. First, a multi-agent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for the task allocation problem. Then an effective discrete particle swarm optimization algorithm for the dynamic alliance (DPSO-DA), with a well-designed particle position code and fitness function, is proposed. A mutation operator that can effectively improve the algorithm's global search ability and population diversity is also introduced. Finally, simulation results show that the proposed solution achieves significantly better performance than other algorithms. PMID:22163971
Code of Federal Regulations, 2012 CFR
2012-01-01
... supervisory schedules for leader and supervisory wage employees in the Puerto Rico wage area. 532.261 Section... schedules for leader and supervisory wage employees in the Puerto Rico wage area. (a) The Department of... the Puerto Rico wage area. (c) The step 2 rate for the supervisory wage schedule shall be: (1) For...
NASA Astrophysics Data System (ADS)
Ghonima, M. S.; Yang, H.; Zhong, X.; Ozge, B.; Sahu, D. K.; Kim, C. K.; Babacan, O.; Hanna, R.; Kurtz, B.; Mejia, F. A.; Nguyen, A.; Urquhart, B.; Chow, C. W.; Mathiesen, P.; Bosch, J.; Wang, G.
2015-12-01
One of the main obstacles to high penetration of solar power is the variable nature of solar power generation. To mitigate variability, grid operators have to schedule additional reliability resources, at considerable expense, to ensure that load requirements are met by generation. Thus, despite the decreasing cost of solar PV, the cost of integrating solar power will increase as the penetration of solar resources on the electric grid grows. There are three principal tools currently available to mitigate variability impacts: (i) flexible generation, (ii) storage, either virtual (demand response) or physical devices, and (iii) solar forecasting. Storage devices are a powerful tool capable of ensuring smooth power output from renewable resources. However, the high cost of storage is prohibitive, and markets are still being designed to leverage their full potential and mitigate their limitations (e.g., empty storage). Solar forecasting provides valuable information on the daily net load profile and upcoming ramps (increasing or decreasing solar power output), giving the grid advance warning to schedule ancillary generation more accurately or to curtail solar power output. In order to develop solar forecasting as a tool that can be used by grid operators, we identified two focus areas: (i) developing solar forecast technology and improving solar forecast accuracy, and (ii) developing forecasts that can be incorporated within existing grid planning and operation infrastructure. The first area requires atmospheric science and engineering research, while the second requires detailed knowledge of energy markets and power engineering. Motivated by this background, we emphasize area (i) in this talk and provide an overview of recent advancements in solar forecasting, especially in two areas: (a) numerical modeling tools for coastal stratocumulus to improve scheduling in the day-ahead California energy market.
(b) Development of a sky imager to provide short term forecasts (0-20 min ahead) to improve optimization and control of equipment on distribution feeders with high penetration of solar. Leveraging such tools that have seen extensive use in the atmospheric sciences supports the development of accurate physics-based solar forecast models. Directions for future research are also provided.
NASA Astrophysics Data System (ADS)
Bieniek, Andrzej
2017-10-01
The paper describes the possibilities of energy generation using various rotor types, especially a multi-blade wind engine operating in areas with unfavourable wind conditions. It also presents wind energy conversion estimates for a proposed multi-blade wind turbine with an outer diameter of 4 m. Based on the wind distribution histogram for a zone with disadvantageous wind conditions (the city of Basel), and taking into account the design and estimated operating indexes of the considered rotor, the annual energy generation was estimated. Theoretical energy generation using various types of wind turbines operating in such zones was also estimated and compared. The analysis shows that introducing a multi-blade rotor instead of the most popular three-blade or vertical-axis rotors yields about 5% more energy, and that energy is still produced under very disadvantageous conditions at wind speeds below 4 m/s. For the considered multi-blade wind engine, raising the rotor mounting height from 10 to 30 m improves electric energy generation by more than 300%.
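The annual-energy estimate from a wind-speed histogram amounts to summing hours-at-speed times the turbine's power at that speed. A sketch with a hypothetical histogram and power curve (not the Basel data or the paper's rotor):

```python
def annual_energy_kwh(hist, power_kw):
    # hist maps wind speed (m/s) to hours per year at that speed;
    # energy is the sum over bins of hours * power at that speed.
    return sum(hours * power_kw(v) for v, hours in hist.items())

# Hypothetical power curve: cubic above a 3 m/s cut-in, rated 5 kW at 10 m/s.
def power_kw(v):
    if v < 3.0:
        return 0.0
    return min(5.0, 5.0 * (v / 10.0) ** 3)

# Hypothetical hours-per-year at each speed bin (remaining hours are calm).
hist = {2: 2000, 4: 3000, 6: 2000, 8: 1000}
print(round(annual_energy_kwh(hist, power_kw)))  # -> 5680
```

Lowering the cut-in speed of the power curve, as a multi-blade rotor effectively does, adds the energy of the otherwise wasted low-speed bins, which is the paper's central argument.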
An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system
NASA Astrophysics Data System (ADS)
Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran
2017-04-01
Channel estimation and optimisation are the main challenges in Multiple Input Multiple Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm is proposed for antenna selection in a transmission area. The main advantage of this method is that it effectively reduces bandwidth loss during data transmission. We consider channel estimation and optimisation to improve transmission speed and reduce unused bandwidth. Initially, the message is given as input to the communication system, and symbol mapping converts the message into signals, which are encoded with a space-time encoding technique: the single signal is divided into multiple signals fed to the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distributed channel type, is selected based on the bandwidth range. Demultiplexing, the inverse of multiplexing, is then applied to split the combined signal arriving from the medium back into the original information signals. Furthermore, the Long-Term Evolution technique is used to schedule time to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity, and signal-to-interference-plus-noise ratio.
A COTS-Based Attitude Dependent Contact Scheduling System
NASA Technical Reports Server (NTRS)
DeGumbia, Jonathan D.; Stezelberger, Shane T.; Woodard, Mark
2006-01-01
The mission architecture of the Gamma-ray Large Area Space Telescope (GLAST) requires a sophisticated ground system component for scheduling the downlink of science data. Contacts between the satellite and the Tracking and Data Relay Satellite System (TDRSS) are restricted by the limited field-of-view of the science data downlink antenna. In addition, contacts must be scheduled when permitted by the satellite's complex and non-repeating attitude profile. Complicating the matter further, the long lead-time required to schedule TDRSS services, combined with the short duration of the downlink contact opportunities, mandates accurate GLAST orbit and attitude modeling. These circumstances require the development of a scheduling system that is capable of predictively and accurately modeling not only the orbital position of GLAST but also its attitude. This paper details the methods used in the design of a Commercial Off The Shelf (COTS)-based attitude-dependent TDRSS contact scheduling system that meets the unique scheduling requirements of the GLAST mission, and it suggests a COTS-based scheduling approach to support future missions. The scheduling system applies filtering and smoothing algorithms to telemetered GPS data to produce high-accuracy predictive GLAST orbit ephemerides. Next, bus pointing commands from the GLAST Science Support Center are used to model the complexities of the two dynamic science-gathering attitude modes. Attitude-dependent view periods are then generated between GLAST and each of the supporting TDRSs. Numerous scheduling constraints are then applied to account for various mission-specific resource limitations. Next, an optimization engine is used to produce an optimized TDRSS contact schedule request, which is sent to TDRSS scheduling for confirmation. Lastly, the confirmed TDRSS contact schedule is rectified with an updated ephemeris and adjusted bus pointing commands to produce a final science downlink contact schedule.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartmell, D.B.
1995-09-01
Based on US Department of Energy (DOE), Richland Operations Office (RL) review, specific areas of the Westinghouse Hanford Company (WHC) Transition Projects "Draft" Multi-Year Program Plan (MYPP) were revised in preparation for the RL approval ceremony on September 26, 1995. These changes were reviewed with the appropriate RL Project Manager. The changes have been incorporated into the MYPP electronic file, and hard copies replacing the "Draft" MYPP will be distributed after the formal signing. In addition to the comments received, a summary-level schedule and outyear estimates for the K Basin deactivation beginning in FY 2001 have been included. The K Basin outyear waste data is nearing completion this week and will be incorporated. This exclusion was discussed with Mr. N.D. Moorer, RL, Facility Transition Program Support/Integration. The attached MYPP scope/schedule reflects the Integrated Target Case submitted in the April 1995 Activity Data Sheets (ADS), with the exception of B Plant and the Plutonium Finishing Plant (PFP). The B Plant assumption in FY 1997 reflects the planning case in the FY 1997 ADS with a shortfall of $5 million. PFP assumptions have been revised from the FY 1997 ADS based on direction provided this past summer by DOE Headquarters, including the acceleration of polycube stabilization back to its originally planned completion date. Although the overall program repricing in FY 1996 allowed the scheduled acceleration to fall within the funding allocation, the FY 1997 total reflects a shortfall of $6 million.
Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
Finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is considered; the problem is made more complex by the multiple business criteria that must be satisfied. To solve the problem efficiently and obtain a set of non-dominated solutions, a meta-heuristic non-dominated sorting genetic algorithm is presented. Task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase population diversity and to perform an intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of the schedules obtained.
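The non-dominated sorting at the heart of such algorithms partitions candidate solutions into Pareto fronts: front 0 contains solutions dominated by nobody, front 1 those dominated only by front 0, and so on. A minimal sketch for two minimization objectives such as (makespan, tardiness), with hypothetical objective values:

```python
def non_dominated_sort(points):
    """Return Pareto fronts (lists of indices) for minimization objectives."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    n = len(points)
    dominated_by = [0] * n                  # how many points dominate i
    dominating = [[] for _ in range(n)]     # points that i dominates
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominating[i].append(j)
            elif dominates(points[j], points[i]):
                dominated_by[i] += 1
    fronts = [[i for i in range(n) if dominated_by[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominating[i]:
                dominated_by[j] -= 1
                if dominated_by[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# (makespan, tardiness) pairs for four candidate schedules:
pts = [(10, 5), (9, 7), (12, 4), (11, 6)]
print(non_dominated_sort(pts))  # -> [[0, 1, 2], [3]]
```

Schedule 3 is dominated by schedule 0 (worse on both objectives), so it falls to the second front; the other three are mutually non-dominated and form the Pareto set.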
MDTM: Optimizing Data Transfer using Multicore-Aware I/O Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Liang; Demar, Phil; Wu, Wenji
2017-05-09
Bulk data transfer is facing significant challenges in the coming era of big data. There are multiple performance bottlenecks along the end-to-end path from the source to the destination storage system. The limitations of current-generation data transfer tools themselves can have a significant impact on end-to-end data transfer rates. In this paper, we identify the issues that lead to underperformance of these tools, and present a new data transfer tool with an innovative I/O scheduler called MDTM. The MDTM scheduler exploits underlying multicore layouts to optimize throughput by reducing delay and contention for I/O reading and writing operations. With our evaluations, we show how MDTM successfully avoids NUMA-based congestion and significantly improves end-to-end data transfer rates across high-speed wide area networks.
Hybrid PV/diesel solar power system design using multi-level factor analysis optimization
NASA Astrophysics Data System (ADS)
Drake, Joshua P.
Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state of the art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software work flow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.
Aeon: Synthesizing Scheduling Algorithms from High-Level Models
NASA Astrophysics Data System (ADS)
Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal
This paper describes the Aeon system, whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model of a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.
A Mixed Integer Linear Program for Solving a Multiple Route Taxi Scheduling Problem
NASA Technical Reports Server (NTRS)
Montoya, Justin Vincent; Wood, Zachary Paul; Rathinam, Sivakumar; Malik, Waqar Ahmad
2010-01-01
Aircraft movements on taxiways at busy airports often create bottlenecks. This paper introduces a mixed integer linear program to solve a Multiple Route Aircraft Taxi Scheduling Problem. The outputs of the model are in the form of optimal taxi schedules, which include routing decisions for taxiing aircraft. The model extends an existing single route formulation to include routing decisions. An efficient comparison framework compares the multi-route formulation and the single route formulation. The multi-route model is exercised for east side airport surface traffic at Dallas/Fort Worth International Airport to determine if any arrival taxi time savings can be achieved by allowing arrivals to have two taxi routes: a route that crosses an active departure runway and a perimeter route that avoids the crossing. Results indicate that the multi-route formulation yields reduced arrival taxi times over the single route formulation only when a perimeter taxiway is used. In conditions where the departure aircraft are given an optimal and fixed takeoff sequence, accumulative arrival taxi time savings in the multi-route formulation can be as high as 3.6 hours more than the single route formulation. If the departure sequence is not optimal, the multi-route formulation results in less taxi time savings made over the single route formulation, but the average arrival taxi time is significantly decreased.
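The two-route trade-off for an arrival, a short route blocked by an active departure runway versus a longer unimpeded perimeter route, can be illustrated with a toy time comparison (hypothetical taxi times and departure schedule, not the DFW data or the paper's MILP):

```python
def best_route(arrival_time, cross_taxi, perimeter_taxi, departures, gap=1.0):
    """Pick the cheaper of two taxi routes for one arrival (times in minutes).
    The crossing route must wait until no departure occupies the runway
    within `gap` minutes; the perimeter route is longer but never blocked."""
    t = arrival_time + cross_taxi            # earliest time at the crossing
    while any(abs(t - d) < gap for d in departures):
        t += gap                             # hold short until a gap opens
    crossing_total = t - arrival_time
    return min(crossing_total, perimeter_taxi)

# Arrival ready at t=0; crossing route takes 5 min but departures roll at
# t=5 and t=6, so the crossing costs 7 min; the perimeter route takes 8 min.
print(best_route(0.0, 5.0, 8.0, [5.0, 6.0]))  # -> 7.0
```

The MILP generalizes this pairwise comparison to all aircraft at once, choosing routes and crossing times jointly so that the whole surface schedule, not each arrival in isolation, is optimized.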
Chung, Kwangzoo; Han, Youngyih; Kim, Jinsung; Ahn, Sung Hwan; Ju, Sang Gyu; Jung, Sang Hoon; Chung, Yoonsun; Cho, Sungkoo; Jo, Kwanghyun; Shin, Eun Hyuk; Hong, Chae-Seon; Shin, Jung Suk; Park, Seyjoon; Kim, Dae-Hyun; Kim, Hye Young; Lee, Boram; Shibagaki, Gantaro; Nonaka, Hideki; Sasai, Kenzo; Koyabu, Yukio; Choi, Changhoon; Huh, Seung Jae; Ahn, Yong Chan; Pyo, Hong Ryull; Lim, Do Hoon; Park, Hee Chul; Park, Won; Oh, Dong Ryul; Noh, Jae Myung; Yu, Jeong Il; Song, Sanghyuk; Lee, Ji Eun; Lee, Bomi; Choi, Doo Ho
2015-12-01
The purpose of this report is to describe the proton therapy system at Samsung Medical Center (SMC-PTS), including the proton beam generator, irradiation system, patient positioning system, patient position verification system, respiratory gating system, and operating and safety control system, and to review the current status of the SMC-PTS. The SMC-PTS has a cyclotron (230 MeV) and two treatment rooms: one treatment room is equipped with a multi-purpose nozzle and the other with a dedicated pencil beam scanning nozzle. The proton beam generator, including the cyclotron and the energy selection system, can lower the energy of protons from the maximum 230 MeV down to 70 MeV. The multi-purpose nozzle can deliver both wobbling and actively scanned proton beams, and a multi-leaf collimator has been installed downstream of the nozzle. The dedicated scanning nozzle can deliver an actively scanned proton beam through a helium-filled pipe, minimizing unnecessary interactions with the air in the beam path. The equipment was provided by Sumitomo Heavy Industries Ltd., RayStation from RaySearch Laboratories AB is the selected treatment planning system, and data management will be handled by the MOSAIQ system from Elekta AB. The SMC-PTS, located in Seoul, Korea, is scheduled to begin treating cancer patients in 2015.
Stochastic online appointment scheduling of multi-step sequential procedures in nuclear medicine.
Pérez, Eduardo; Ntaimo, Lewis; Malavé, César O; Bailey, Carla; McCormack, Peter
2013-12-01
The increased demand for medical diagnosis procedures has been recognized as one of the contributors to the rise of health care costs in the U.S. in the last few years. Nuclear medicine is a subspecialty of radiology that uses advanced technology and radiopharmaceuticals for the diagnosis and treatment of medical conditions. Procedures in nuclear medicine require the use of radiopharmaceuticals, are multi-step, and have to be performed under strict time window constraints. These characteristics make the scheduling of patients and resources in nuclear medicine challenging. In this work, we derive a stochastic online scheduling algorithm for patient and resource scheduling in nuclear medicine departments that takes into account the time constraints imposed by the decay of the radiopharmaceuticals and the stochastic nature of the system when scheduling patients. We report on a computational study of the new methodology applied to a real clinic, using both patient and clinic performance measures. The results show that the new method schedules about 600 more patients per year on average than a scheduling policy that was used in practice, by improving the way limited resources are managed at the clinic. The new methodology finds the best start time and resources to be used for each appointment. Furthermore, the new method decreases patient waiting time for an appointment by about two days on average.
Energy efficient mechanisms for high-performance Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alsaify, Baha'adnan
2009-12-01
Due to recent advances in microelectronics, the development of low-cost, small, and energy-efficient devices became possible. Those advances led to the birth of Wireless Sensor Networks (WSNs). WSNs consist of a large set of sensor nodes equipped with communication capabilities, scattered in the area to monitor. Researchers focus on several aspects of WSNs, including the quality of service they provide (data delivery delay, accuracy of data, etc.), the scalability of the network to thousands of sensor nodes (the terms node and sensor node are used interchangeably), the robustness of the network (allowing it to work even if a certain percentage of nodes fails), and making the energy consumption in the network as low as possible to prolong the network's lifetime. In this thesis, we present an approach that can be applied to the sensing devices scattered in an area for sensor networks. This work uses the well-known approach of wake-up scheduling to extend the network's lifespan. We designed a scheduling algorithm that reduces the upper bound on the delay the reported data experience, while at the same time keeping the advantage offered by wake-up scheduling -- the reduction in energy consumption, which leads to an increase in the network's lifetime. The wake-up schedule is based on the location of a node relative to its neighbors and its distance from the Base Station (the terms Base Station and sink are used interchangeably). We apply the proposed method to a set of simulated nodes using the "ONE Simulator". We compare the performance of this approach with three other approaches -- the Direct Routing technique, the well-known LEACH algorithm, and a multi-parent scheduling algorithm. We demonstrate a good improvement in the network's quality of service and a reduction in the consumed energy.
Multi-Objective Scheduling for the Cluster II Constellation
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Giuliano, Mark
2011-01-01
This paper describes the application of the MUSE multi-objective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.
Production scheduling with ant colony optimization
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu
2017-10-01
The optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is one of the well-known techniques for solving global multi-objective optimization problems. In the article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against several other production scheduling algorithms is presented. Advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are described.
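As a rough illustration of the technique named above (not the authors' formulation), a minimal ant colony optimizer for sequencing jobs on a single machine might look like this; the instance data are made up, and the objective is total flow time.

```python
import random

random.seed(7)
proc = [4, 2, 7, 3, 5]                 # processing times of 5 jobs

def total_flow_time(seq):
    """Sum of job completion times: the classic single-machine objective."""
    t = cost = 0
    for j in seq:
        t += proc[j]
        cost += t
    return cost

def ant_colony(n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.1):
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]         # pheromone tau[pos][job]
    best_seq, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            remaining, seq = set(range(n)), []
            for pos in range(n):
                jobs = list(remaining)
                # Probability ~ pheromone^alpha * heuristic^beta (short jobs first)
                w = [tau[pos][j] ** alpha * (1.0 / proc[j]) ** beta for j in jobs]
                j = random.choices(jobs, weights=w)[0]
                seq.append(j)
                remaining.remove(j)
            c = total_flow_time(seq)
            if c < best_cost:
                best_seq, best_cost = seq, c
        for pos in range(n):                    # evaporation
            for j in range(n):
                tau[pos][j] *= 1 - rho
        for pos, j in enumerate(best_seq):      # reinforce the best-so-far
            tau[pos][j] += 1.0 / best_cost
    return best_seq, best_cost

seq, cost = ant_colony()
```

For this instance the shortest-processing-time order is provably optimal (flow time 51), so the colony's result can be checked against it.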
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
NASA Astrophysics Data System (ADS)
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterizes real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved to optimality using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, B.; Kahabka, J.
1995-06-01
This paper discusses the use of a mechanical brush cleaning technology recently used to remove biofouling from the Circulating Water (CW) System at the New York Power Authority's James A. FitzPatrick Nuclear Power Plant. The FitzPatrick plant had previously used chemical molluscicide to treat zebra mussels in the CW system. Full system treatment was performed in 1992 with limited forebay/screenwell treatment in 1993. The New York Power Authority (NYPA) decided to conduct a mechanical cleaning of the intake system in 1994. Specific project objectives included: (1) achieve a level of surface cleanliness greater than 98%; (2) remove 100% of debris, both existing sediment and debris generated as a result of cleaning; (3) inspect all surfaces and components, identifying any problem areas; (4) complete the task within the 1994-95 refueling outage schedule window; and (5) determine whether underwater mechanical cleaning is a cost-effective zebra mussel control method suitable for future application at FitzPatrick. A pre-cleaning inspection, including underwater video photography, was conducted of each area. Cleaning was accomplished using diver-controlled, multi-brush equipment, including the electro-hydraulically powered Submersible Cleaning and Maintenance Platform (SCAMP) and several designs of hand-held machines. The brushes swept all zebra mussels off surfaces, restoring concrete and metal substrates to their original condition. Sensitive areas, including pump housings, standpipes, sensor piping and chlorine injection tubing, were cleaned without degradation. Submersible vortex vacuum pumps were used to remove debris from the cavity. More than 46,000 ft² of surface area was cleaned and over 460 cubic yards of dewatered debris were removed. As each area was completed, a post-clean inspection with photos and video was performed.
A multi-group and preemptable scheduling of cloud resource based on HTCondor
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan
2017-10-01
Due to the features of virtual machines (flexibility, easy control, and varied system environments), more and more fields, including high energy physics, use virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and efficient and makes resource scheduling independent of job scheduling. First, resources belong to different experiment groups, and user-groups map to resource-groups (the same as experiment-groups) one-to-one or many-to-one. To keep this grouping manageable, we designed a permission-control component that ensures each resource-group receives suitable jobs. Second, to elastically allocate resources to the appropriate resource-group, resources must be scheduled much like jobs, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource-group. Third, because resources can be occupied for a long time, they sometimes need to be preempted; we therefore added a preemption function to the resource scheduler that preempts resources based on group priority. The preemption is soft: when virtual resources are preempted, jobs are not killed but held and rematched later. This is implemented with the help of HTCondor, by storing the held job's information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack.
This paper also presents cases from the JUNO and LHAASO experiments. The results indicate that multi-group, preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission-control component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its scale will be expanded to more experiments, including DYW and BES, in the first half of the year. This is evidence that the permission control is efficient.
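The group-priority soft preemption described above can be sketched as follows; the group names come from the text, but the data structures and numbers are hypothetical, and real HTCondor preemption is configured through negotiator policy rather than code like this.

```python
# Running VMs per group, and group priorities (higher number wins).
priority = {"JUNO": 3, "LHAASO": 2, "CMS": 1}
running = {"JUNO": ["vm1"], "CMS": ["vm2", "vm3"]}
held_jobs = []                        # soft preemption: jobs are held, not killed

def preempt_for(group, n_needed):
    """Free n_needed VMs by preempting the lowest-priority groups first."""
    freed = []
    for g in sorted(running, key=lambda g: priority[g]):
        if priority[g] >= priority[group]:
            break                     # never preempt an equal/higher group
        while running[g] and len(freed) < n_needed:
            vm = running[g].pop()
            held_jobs.append((g, vm)) # the job goes back to idle for rematching
            freed.append(vm)
        if len(freed) == n_needed:
            break
    return freed

freed = preempt_for("LHAASO", 1)      # LHAASO requests one VM from a full pool
```

Here CMS, as the lowest-priority group, loses one VM; its job is recorded in `held_jobs` so it can be released to idle and rematched later, mirroring the "hold and second match" behavior the abstract describes.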
A System for Automatically Generating Scheduling Heuristics
NASA Technical Reports Server (NTRS)
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application chosen to apply this method solves the problem of scheduling telescope observations and is called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints, expressed as an objective function established by an astronomer-user.
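A greedy heuristic search of the kind the abstract describes can be sketched as follows (request names, windows, and scores are invented): requests are placed in priority order at the earliest slot that respects their hard time-window constraints.

```python
# Each request: (name, earliest start, latest end, duration, priority score).
requests = [("M31", 0, 10, 4, 5), ("M42", 2, 8, 3, 8), ("M13", 0, 12, 5, 4)]

def greedy_schedule(reqs):
    """Place requests in descending priority at the earliest feasible slot."""
    busy, plan = [], {}
    for name, est, let, dur, _score in sorted(reqs, key=lambda r: -r[4]):
        t = est
        for s, e in sorted(busy):          # slide past existing bookings
            if t + dur <= s:
                break
            t = max(t, e)
        if t + dur <= let:                 # hard constraint: fits the window
            busy.append((t, t + dur))
            plan[name] = (t, t + dur)
    return plan

plan = greedy_schedule(requests)
```

M42 wins its preferred slot by priority, M31 slides later, and M13 is dropped because no feasible slot remains in its window: the greedy trade-off between hard constraints and soft scores.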
NASA Astrophysics Data System (ADS)
Afzal, Peyman; Mirzaei, Misagh; Yousefi, Mahyar; Adib, Ahmad; Khalajmasoumi, Masoumeh; Zarifi, Afshar Zia; Foster, Patrick; Yasrebi, Amir Bijan
2016-07-01
Recognition of significant geochemical signatures and separation of geochemical anomalies from background are critical issues in interpretation of stream sediment data to define exploration targets. In this paper, we used staged factor analysis in conjunction with the concentration-number (C-N) fractal model to generate exploration targets for prospecting Cr and Fe mineralization in Balvard area, SE Iran. The results show coexistence of derived multi-element geochemical signatures of the deposit-type sought and ultramafic-mafic rocks in the NE and northern parts of the study area indicating significant chromite and iron ore prospects. In this regard, application of staged factor analysis and fractal modeling resulted in recognition of significant multi-element signatures that have a high spatial association with host lithological units of the deposit-type sought, and therefore, the generated targets are reliable for further prospecting of the deposit in the study area.
Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm
NASA Astrophysics Data System (ADS)
Sun, Haisheng; Xu, Rui; Chen, Huaping
2018-04-01
To minimize makespan when scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Because the problem is a multi-dimensional discrete optimization problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. To improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving these two individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only obtains better solutions but also converges faster.
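The PBIL core of such an approach can be sketched on a toy task-to-machine instance (all data hypothetical); following the abstract's idea of heuristic seeding, the best-so-far individual is initialized from a greedy Min-Min-style assignment and the probability model is then shifted toward the best sample found.

```python
import random

random.seed(1)
proc = [[3, 5], [4, 2], [6, 4], [2, 3]]      # proc[task][machine]

def makespan(assign):
    loads = [0] * len(proc[0])
    for task, m in enumerate(assign):
        loads[m] += proc[task][m]
    return max(loads)

def pbil(n_samples=30, n_iter=60, lr=0.2):
    n_tasks, n_mach = len(proc), len(proc[0])
    # Integer encoding: one independent probability vector per task.
    p = [[1.0 / n_mach] * n_mach for _ in range(n_tasks)]
    # Seed the best-so-far with a greedy (Min-Min-style) assignment.
    best = [min(range(n_mach), key=lambda m: proc[t][m]) for t in range(n_tasks)]
    best_cost = makespan(best)
    for _ in range(n_iter):
        for _ in range(n_samples):
            a = [random.choices(range(n_mach), weights=p[t])[0]
                 for t in range(n_tasks)]
            c = makespan(a)
            if c < best_cost:
                best, best_cost = a, c
        for t in range(n_tasks):             # shift the model toward the best
            for m in range(n_mach):
                target = 1.0 if best[t] == m else 0.0
                p[t][m] += lr * (target - p[t][m])
    return best, best_cost

assign, cost = pbil()
```

A fixed learning rate is used here; the paper's adaptive learning-rate schedule and IGA-based sampling are omitted for brevity.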
An Energy Efficient MAC Protocol for Multi-Hop Swallowable Body Sensor Networks
Lin, Lin; Yang, Chengfeng; Wong, Kai Juan; Yan, Hao; Shen, Junwen; Phee, Soo Jay
2014-01-01
Swallowable body sensor networks (BSNs) are composed of sensors that are swallowed by patients and send the collected data to an outside coordinator. These sensors are energy-constrained and their batteries are difficult to replace. The medium access control (MAC) protocol plays an important role in energy management. This paper investigates an energy-efficient MAC protocol design for swallowable BSNs. Multi-hop communication is analyzed and shown to be more energy efficient than single-hop communication within the human body when the circuitry power is low. Based on this result, a centrally controlled time-slotting schedule is proposed. The major workload is shifted from the sensors to the coordinator. The coordinator collects the path-loss map and calculates the schedules, including routing, slot assignment and transmission power. Sensor nodes follow the schedules to send data in a multi-hop manner. The proposed protocol is compared with the IEEE 802.15.6 protocol in terms of energy consumption. The results show that it is more energy efficient than IEEE 802.15.6 for swallowable BSN scenarios. PMID:25330049
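The claim that multi-hop beats single-hop only when circuitry power is low follows from a first-order radio energy model; here is a sketch with illustrative constants (not the paper's measured values): amplifier energy grows like d^n, so splitting a path into shorter hops saves amplifier energy but pays the fixed circuitry cost once per hop.

```python
def tx_energy_per_bit(d, e_circ, e_amp=1e-9, n=3):
    """First-order radio model: fixed circuitry cost plus amplifier cost ~ d^n."""
    return e_circ + e_amp * d ** n

def route_energy(d, hops, e_circ):
    """Total energy to cover distance d in `hops` equal hops."""
    return hops * tx_energy_per_bit(d / hops, e_circ)

d = 0.6                          # metres through tissue (illustrative)
cheap, costly = 1e-12, 1e-9      # hypothetical circuitry energies per bit (J)

# With cheap circuitry, relaying over 3 hops beats one long hop;
# with costly circuitry, the per-hop fixed cost makes single-hop win.
multi_wins_low = route_energy(d, 3, cheap) < route_energy(d, 1, cheap)
multi_wins_high = route_energy(d, 3, costly) < route_energy(d, 1, costly)
```

Because d^n is convex, the crossover point depends only on the ratio of circuitry energy to amplifier energy, which is the condition the abstract states.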
Mega-Scale Simulation of Multi-Layer Devices-- Formulation, Kinetics, and Visualization
1994-07-28
prototype code STRIDE, also initially developed under ARO support. The focus of the ARO-supported research activities has been in the areas of multi ... FORTRAN-77. During its fifteen-year life-span several generations of researchers have modified the code. Due to this continual development, the ... behavior. The replacement of the linear solver had no effect on the remainder of the code. We replaced the existing solver with a distributed multi-frontal
Dynamic Appliances Scheduling in Collaborative MicroGrids System
Bilil, Hasnae; Aniba, Ghassane; Gharavi, Hamid
2017-01-01
In this paper a new approach which is based on a collaborative system of MicroGrids (MG’s), is proposed to enable household appliance scheduling. To achieve this, appliances are categorized into flexible and non-flexible Deferrable Loads (DL’s), according to their electrical components. We propose a dynamic scheduling algorithm where users can systematically manage the operation of their electric appliances. The main challenge is to develop a flattening function calculus (reshaping) for both flexible and non-flexible DL’s. In addition, implementation of the proposed algorithm would require dynamically analyzing two successive multi-objective optimization (MOO) problems. The first targets the activation schedule of non-flexible DL’s and the second deals with the power profiles of flexible DL’s. The MOO problems are resolved by using a fast and elitist multi-objective genetic algorithm (NSGA-II). Finally, in order to show the efficiency of the proposed approach, a case study of a collaborative system that consists of 40 MG’s registered in the load curve for the flattening program has been developed. The results verify that the load curve can indeed become very flat by applying the proposed scheduling approach. PMID:28824226
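The fast non-dominated sorting step at the heart of NSGA-II can be sketched as follows; the objective pairs are invented stand-ins for, say, (peak load, user discomfort) of candidate appliance schedules.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts (index lists), best front first."""
    fronts, assigned = [], set()
    while len(assigned) < len(points):
        front = [i for i, p in enumerate(points)
                 if i not in assigned and
                 not any(j not in assigned and dominates(points[j], p)
                         for j in range(len(points)))]
        fronts.append(front)
        assigned.update(front)
    return fronts

# Hypothetical (peak load, discomfort) values for five candidate schedules.
pts = [(3, 7), (4, 4), (6, 2), (5, 5), (7, 8)]
fronts = non_dominated_sort(pts)
```

The first front contains the trade-off solutions no schedule improves on in both objectives; NSGA-II additionally uses crowding distance within fronts, which this sketch omits.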
Stochastic Modeling of Airlines' Scheduled Services Revenue
NASA Technical Reports Server (NTRS)
Hamed, M. M.
1999-01-01
Airlines' revenue generated from scheduled services accounts for the major share of total revenue. As such, predicting airlines' total scheduled services revenue is of great importance both to governments (in the case of national airlines) and to private airlines. This importance stems from the need to formulate future airline strategic management policies, determine government subsidy levels, and formulate governmental air transportation policies. The prediction of airlines' total scheduled services revenue is dealt with in this paper. Four key components of an airline's scheduled services are considered: revenues generated from passengers, cargo, mail, and excess baggage. By addressing the revenue generated from each scheduled service separately, air transportation planners and designers are able to enhance their ability to formulate specific strategies for each component. Estimation results clearly indicate that the four stochastic processes (scheduled services components) are represented by different Box-Jenkins ARIMA models. The results demonstrate the appropriateness of the developed models and their ability to provide air transportation planners with future information vital to the planning and design processes.
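A Box-Jenkins model of the kind mentioned can be illustrated with a hand-rolled ARIMA(1,1,0) fit on made-up revenue figures: difference the series once to remove the trend, then estimate the AR(1) coefficient of the differenced series by least squares and forecast one step ahead.

```python
# Quarterly scheduled-services revenue (hypothetical figures, $M).
rev = [100, 104, 109, 115, 118, 124, 131, 135, 142]

diff = [b - a for a, b in zip(rev, rev[1:])]        # d = 1: difference once
mu = sum(diff) / len(diff)                          # mean of the differences
z = [x - mu for x in diff]                          # centered differences

# AR(1) least-squares estimate: phi = sum z_t z_{t-1} / sum z_{t-1}^2
num = sum(a * b for a, b in zip(z, z[1:]))
den = sum(a * a for a in z[:-1])
phi = num / den

next_diff = mu + phi * z[-1]          # one-step forecast of the difference
forecast = rev[-1] + next_diff        # integrate back to the revenue level
```

Production Box-Jenkins modeling would add identification (ACF/PACF), diagnostic checking, and possibly MA terms; this sketch shows only the mechanical core of the fit.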
Tank waste remediation system multi-year work plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
The Tank Waste Remediation System (TWRS) Multi-Year Work Plan (MYWP) documents the detailed total Program baseline and was constructed to guide Program execution. The TWRS MYWP is one of two elements that comprise the TWRS Program Management Plan. The TWRS MYWP fulfills the Hanford Site Management System requirement for a Multi-Year Program Plan and a Fiscal-Year Work Plan. The MYWP addresses program vision, mission, objectives, strategy, functions and requirements, risks, decisions, assumptions, constraints, structure, logic, schedule, resource requirements, and waste generation and disposition. Sections 1 through 6, Section 8, and the appendixes provide program-wide information. Section 7 includes a subsection for each of the nine program elements that comprise the TWRS Program. The foundation of any program baseline is base planning data (e.g., defendable product definition, logic, schedules, cost estimates, and bases of estimates). The TWRS Program continues to improve base data. As data improve, so will program element planning, integration between program elements, integration outside of the TWRS Program, and the overall quality of the TWRS MYWP. The MYWP establishes the TWRS baseline objectives to store, treat, and immobilize highly radioactive Hanford waste in an environmentally sound, safe, and cost-effective manner. The TWRS Program will complete the baseline mission in 2040 and will incur costs totalling approximately 40 billion dollars. The summary strategy is to meet the above objectives by using a robust systems engineering effort, placing the highest possible priority on safety and environmental protection; encouraging "out sourcing" of the work to the extent practical; and managing significant but limited resources to move toward final disposition of tank wastes, while openly communicating with all interested stakeholders.
2012-09-01
scheduler to adapt its uplink and downlink assignments to channel conditions. Sleep mode is used by the MS to minimize power drain and radio ... is addressed in one resource unit, while for multi-user (MU) schemes, multiple users can be scheduled in one resource unit. Open-loop techniques ...
Escalator: An Autonomous Scheduling Scheme for Convergecast in TSCH
Oh, Sukho; Hwang, DongYeop; Kim, Ki-Hyung; Kim, Kangseok
2018-01-01
Time Slotted Channel Hopping (TSCH) is widely used in the industrial wireless sensor networks due to its high reliability and energy efficiency. Various timeslot and channel scheduling schemes have been proposed for achieving high reliability and energy efficiency for TSCH networks. Recently proposed autonomous scheduling schemes provide flexible timeslot scheduling based on the routing topology, but do not take into account the network traffic and packet forwarding delays. In this paper, we propose an autonomous scheduling scheme for convergecast in TSCH networks with RPL as a routing protocol, named Escalator. Escalator generates a consecutive timeslot schedule along the packet forwarding path to minimize the packet transmission delay. The schedule is generated autonomously by utilizing only the local routing topology information without any additional signaling with other nodes. The generated schedule is guaranteed to be conflict-free, in that all nodes in the network could transmit packets to the sink in every slotframe cycle. We implement Escalator and evaluate its performance with existing autonomous scheduling schemes through a testbed and simulation. Experimental results show that the proposed Escalator has lower end-to-end delay and higher packet delivery ratio compared to the existing schemes regardless of the network topology. PMID:29659508
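The core idea, deriving a schedule from purely local routing information so packets ride consecutive timeslots toward the sink, can be sketched as follows; this is a simplification of Escalator, not the published algorithm, and the topology is invented.

```python
# Routing tree toward the sink (node -> RPL parent); node 0 is the sink.
parent = {1: 0, 2: 1, 3: 1, 4: 2}

def depth(n):
    return 0 if n == 0 else 1 + depth(parent[n])

def escalator_cells(nodes, slotframe=8, n_channels=4):
    """Deeper nodes transmit earlier, so a packet 'escalates' hop by hop
    toward the sink within one slotframe cycle. Siblings that share a
    timeslot are separated by channel offsets (TSCH is multi-channel)."""
    d_max = max(depth(n) for n in nodes)
    return {n: ((d_max - depth(n)) % slotframe, n % n_channels)
            for n in nodes if n != 0}

slots = escalator_cells(parent.keys() | {0})
```

Each node computes its (timeslot, channel offset) cell from only its own depth and identifier, with no signaling to other nodes, which is the autonomy property the abstract emphasizes; the consecutive-slot ordering is what minimizes per-hop queuing delay.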
Project Physics Teacher Guide 3, The Triumph of Mechanics.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 3 are presented to help teachers make effective use of learning materials. Unit contents are discussed in connection with teaching aid perspective, multi-media schedules, schedule blocks, and resource charts. Brief analyses are made for transparencies, 16mm films, and reader articles. Included is…
DOT National Transportation Integrated Search
2012-06-01
The mobility allowance shuttle transit (MAST) system is a hybrid transit system in which vehicles are : allowed to deviate from a fixed route to serve flexible demand. A mixed integer programming (MIP) : formulation for the static scheduling problem ...
Project Physics Teacher Guide 2, Motion in the Heavens.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 2 are presented to help teachers make effective use of learning materials. The unit contents are discussed in connection with teaching aid perspectives, multi-media schedules, schedule blocks, and resource charts. Analyses are made for transparencies, 16mm films, and reader articles. Included is…
Project Physics Teacher Guide 6, The Nucleus.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 6 are presented to help teachers make effective use of learning materials. Unit contents are discussed in connection with teaching aid lists, multi-media schedules, schedule blocks, and resource charts. Brief summaries are made for transparencies, 16mm films, and reader articles. Included is information…
Project Physics Teacher Guide 4, Light and Electromagnetism.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 4 are presented to help teachers make effective use of learning materials. Unit contents are discussed in connection with teaching aid lists, multi-media schedules, schedule blocks, and resources charts. Brief summaries are made for transparencies, 16mm films, and reader articles. Included is information…
Kaewkungwal, Jaranit; Singhasivanon, Pratap; Khamsiriwatchara, Amnat; Sawang, Surasak; Meankaew, Pongthep; Wechsart, Apisit
2010-11-03
To assess whether integrating cell phones into the healthcare system can improve antenatal care (ANC) and Expanded Programme on Immunization (EPI) services for an under-served border-area population. A module combining web-based and mobile technology was developed to generate ANC/EPI visit schedule dates, allowing healthcare personnel to cross-check, identify, and update a mother's ANC status and a child's EPI status at the healthcare facility or at the household during home visits, with an additional feature that sends appointment reminders directly to scheduled mothers in the community. The module improved ANC/EPI coverage in the study area along the country border for both Thai and non-Thai mothers and children, whether permanent residents or migrants; the number of on-time ANC and EPI visits increased significantly, and there were fewer delayed antenatal visits and immunizations. The module integrated and functioned successfully as part of the healthcare system, demonstrating its feasibility and the extent to which community healthcare personnel in a low-resource setting can use it efficiently to perform their duties.
An investigation on impacts of scheduling configurations on Mississippi biology subject area testing
NASA Astrophysics Data System (ADS)
Marchette, Frances Lenora
The purpose of this mixed-methods study was to compare Biology Subject Area mean scores of students on a 4 x 4 block schedule, an A/B block schedule, and a traditional year-long schedule for 1A to 5A size schools. This study also reviewed the data to determine whether minority or gender issues might influence the test results. Interviews were conducted with administrators and teachers about the type of schedule configuration they use and the influence the schedule has on student academic performance on the Biology Subject Area Test. Additionally, this research explored whether schedule configurations allow sufficient time for students to construct knowledge. This study is important to schools, teachers, and administrators because it can assist them in considering the impacts that different types of class schedules have on student performance and whether ethnic or gender issues influence testing results. The study used the causal-comparative method for the quantitative portion and the constant comparative method for the qualitative portion to explore the effect of school schedules on student academic achievement on the Mississippi Biology Subject Area Test. The aggregate means of selected student scores indicate that the Mississippi Biology Subject Area Test, as a measure of student performance, reveals no significant difference in student achievement across the three school schedule configurations. The data were adjusted for initial differences of gender, minority, and school size across the three schedule configurations. The results suggest that schools may employ various schedule configurations and expect student performance on the Mississippi Biology Subject Area Test to be unaffected. However, many areas of concern were identified in the interviews that might affect school learning environments. These concerns relate to effective classroom management, the active involvement of students in learning, the adequacy of teacher education programs, and the stress of high-stakes testing on everyone involved.
Adaptive Control for Uncertain Nonlinear Multi-Input Multi-Output Systems
NASA Technical Reports Server (NTRS)
Cao, Chengyu (Inventor); Hovakimyan, Naira (Inventor); Xargay, Enric (Inventor)
2014-01-01
Systems and methods of adaptive control for uncertain nonlinear multi-input multi-output systems in the presence of significant unmatched uncertainty with assured performance are provided. The need for gain-scheduling is eliminated through the use of bandwidth-limited (low-pass) filtering in the control channel, which appropriately attenuates the high frequencies typically appearing in fast adaptation situations and preserves the robustness margins in the presence of fast adaptation.
7 CFR 29.9404 - Marketing area opening dates and marketing schedules.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Marketing area opening dates and marketing schedules... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY....9404 Marketing area opening dates and marketing schedules. (a) The Flue-Cured Tobacco Advisory...
7 CFR 29.9404 - Marketing area opening dates and marketing schedules.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Marketing area opening dates and marketing schedules... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY....9404 Marketing area opening dates and marketing schedules. (a) The Flue-Cured Tobacco Advisory...
7 CFR 29.9404 - Marketing area opening dates and marketing schedules.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Marketing area opening dates and marketing schedules... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY....9404 Marketing area opening dates and marketing schedules. (a) The Flue-Cured Tobacco Advisory...
7 CFR 29.9404 - Marketing area opening dates and marketing schedules.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Marketing area opening dates and marketing schedules... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY....9404 Marketing area opening dates and marketing schedules. (a) The Flue-Cured Tobacco Advisory...
7 CFR 29.9404 - Marketing area opening dates and marketing schedules.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Marketing area opening dates and marketing schedules... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY....9404 Marketing area opening dates and marketing schedules. (a) The Flue-Cured Tobacco Advisory...
Technology for planning and scheduling under complex constraints
NASA Astrophysics Data System (ADS)
Alguire, Karen M.; Gomes, Carla P.
1997-02-01
Within the context of law enforcement, several problems fall into the category of planning and scheduling under constraints. Examples include resource and personnel scheduling, and court scheduling. In the case of court scheduling, a schedule must be generated considering available resources, e.g., court rooms and personnel. Additionally, there are constraints on individual court cases, e.g., temporal and spatial, and between different cases, e.g., precedence. Finally, there are overall objectives that the schedule should satisfy, such as timely processing of cases and optimal use of court facilities. Manually generating a schedule that satisfies all of the constraints is a very time-consuming task. As the number of court cases and constraints increases, the task becomes increasingly hard to handle without the assistance of automatic scheduling techniques. This paper describes artificial intelligence (AI) technology that has been used to develop several high performance scheduling applications, including a military transportation scheduler, a military in-theater airlift scheduler, and a nuclear power plant outage scheduler. We discuss possible law enforcement applications where we feel the same technology could provide long-term benefits to law enforcement agencies and their operations personnel.
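The court-scheduling constraints listed above (shared room resources, temporal precedence between cases) fit a standard constraint-satisfaction pattern. The sketch below solves a toy instance by plain backtracking; the case names, rooms, and constraints are invented, and the AI technology the paper describes is far more sophisticated than this.

```python
from itertools import product

def schedule_cases(cases, rooms, slots, precedes):
    """Assign each case a (room, slot) with no double-booked room and
    with every (earlier, later) pair in `precedes` respected."""
    assignment = {}

    def ok(case, room, slot):
        for _, (r, s) in assignment.items():
            if (r, s) == (room, slot):          # room already booked
                return False
        for a, b in precedes:                    # temporal precedence
            if b == case and a in assignment and assignment[a][1] >= slot:
                return False
            if a == case and b in assignment and assignment[b][1] <= slot:
                return False
        return True

    def backtrack(i):
        if i == len(cases):
            return True
        for room, slot in product(rooms, slots):
            if ok(cases[i], room, slot):
                assignment[cases[i]] = (room, slot)
                if backtrack(i + 1):
                    return True
                del assignment[cases[i]]
        return False

    return assignment if backtrack(0) else None

plan = schedule_cases(["arraignment", "trial"], ["room1"], [1, 2],
                      precedes=[("arraignment", "trial")])
```

With one room and two slots, the only feasible plan places the arraignment before the trial in the same room.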
Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1989-01-01
A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
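The two-stage idea above, generating alternative sequences by interchanging pairs of schedule elements and then keeping the best child by comparing objective performance, can be sketched as one generation of pairwise-swap search. The interference metric below (conflict weights between adjacent jobs) is an invented stand-in for the paper's job-to-job interference objective.

```python
from itertools import combinations

def interference(seq, conflict):
    # sum of conflict weights between adjacent jobs in the sequence
    return sum(conflict.get((a, b), 0) for a, b in zip(seq, seq[1:]))

def improve(seq, conflict):
    """One generation: enumerate all pairwise interchanges of `seq`
    (the children of this decision-tree node) and return the child
    with the lowest interference."""
    best = list(seq)
    for i, j in combinations(range(len(seq)), 2):
        child = list(seq)
        child[i], child[j] = child[j], child[i]
        if interference(child, conflict) < interference(best, conflict):
            best = child
    return best

# invented conflict weights between job pairs
conflict = {("A", "B"): 5, ("B", "A"): 5, ("B", "C"): 1,
            ("C", "B"): 1, ("A", "C"): 0, ("C", "A"): 0}
better = improve(["A", "B", "C"], conflict)
```

Iterating `improve` from each selected node reproduces the parent-selection loop of the second stage.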
ERIC Educational Resources Information Center
Okedeyi, Abiodun S.; Oginni, Aderonke M.; Adegorite, Solomon O.; Saibu, Sakibu O.
2015-01-01
This study investigated the relevance of multimedia skills in the teaching and learning of scientific concepts in secondary schools. A self-constructed questionnaire was administered to 120 students randomly selected in four secondary schools in the Ojo Local Government Area of Lagos state. Data generated were analyzed using chi-square statistical…
Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes
NASA Astrophysics Data System (ADS)
Huang, Shaoming
2003-06-01
An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) in two-step processes is reported. Controllable generation of different lengths and selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated at large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied in developing novel nanotube-based devices.
A Bayesian multi-stage cost-effectiveness design for animal studies in stroke research
Cai, Chunyan; Ning, Jing; Huang, Xuelin
2017-01-01
Much progress has been made in the area of adaptive designs for clinical trials. However, little has been done regarding adaptive designs to identify optimal treatment strategies in animal studies. Motivated by an animal study of a novel strategy for treating strokes, we propose a Bayesian multi-stage cost-effectiveness design to simultaneously identify the optimal dose and determine the therapeutic treatment window for administrating the experimental agent. We consider a non-monotonic pattern for the dose-schedule-efficacy relationship and develop an adaptive shrinkage algorithm to assign more cohorts to admissible strategies. We conduct simulation studies to evaluate the performance of the proposed design by comparing it with two standard designs. These simulation studies show that the proposed design yields a significantly higher probability of selecting the optimal strategy, while it is generally more efficient and practical in terms of resource usage. PMID:27405325
Evolutionarily stable learning schedules and cumulative culture in discrete generation models.
Aoki, Kenichi; Wakano, Joe Yuichiro; Lehmann, Laurent
2012-06-01
Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium. Copyright © 2012 Elsevier Inc. All rights reserved.
Space communications scheduler: A rule-based approach to adaptive deadline scheduling
NASA Technical Reports Server (NTRS)
Straguzzi, Nicholas
1990-01-01
Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually adjust the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communications Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high-density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
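SCS itself is rule-based, so as a minimal illustrative stand-in for the deadline-scheduling idea, here is an earliest-deadline-first (EDF) pass over job requests on a single resource, rejecting requests that would miss their deadlines. The job names and numbers are invented.

```python
def edf_schedule(jobs):
    """jobs: list of (name, duration, deadline) on one shared resource.
    Greedily admits jobs in deadline order; returns the accepted
    schedule as (name, start, end) triples plus the rejected names."""
    t, accepted, rejected = 0, [], []
    for name, dur, deadline in sorted(jobs, key=lambda j: j[2]):
        if t + dur <= deadline:          # job can still finish in time
            accepted.append((name, t, t + dur))
            t += dur
        else:
            rejected.append(name)
    return accepted, rejected

ok, dropped = edf_schedule([("uplink", 2, 3),
                            ("downlink", 3, 4),
                            ("telemetry", 1, 6)])
```

Here the downlink request cannot meet its deadline once the uplink is admitted, which is exactly the kind of conflict an adaptive scheduler must detect and resolve.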
Challenges in early clinical development of adjuvanted vaccines.
Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David
2015-06-08
A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis generating plans, including system biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for the dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.
Project Physics Teacher Guide 5, Models of the Atom.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Teaching procedures of Project Physics Unit 5 are presented to help teachers make effective use of learning materials. Unit contents are discussed in connection with teaching aid lists, multi-media schedules, schedule blocks, and resource charts. Brief summaries are made for transparencies, 16mm films, and reader articles. Included is information…
Avoiding Biased-Feeding in the Scheduling of Collaborative Multipath TCP.
Tsai, Meng-Hsun; Chou, Chien-Ming; Lan, Kun-Chan
2016-01-01
Smartphones have become the major communication and portable computing devices that access the Internet through Wi-Fi or mobile networks. Unfortunately, users without a mobile data subscription can only access the Internet at limited locations, such as hotspots. In this paper, we propose a collaborative bandwidth sharing protocol (CBSP) built on top of MultiPath TCP (MPTCP). CBSP enables users to buy bandwidth on demand from neighbors (called Helpers) and uses virtual interfaces to bind the subflows of MPTCP to avoid modifying the implementation of MPTCP. However, although MPTCP provides the required multi-homing functionality for bandwidth sharing, the current packet scheduling in collaborative MPTCP (e.g., Co-MPTCP) leads to the so-called biased-feeding problem. In this problem, the fastest link might always be selected to send packets whenever it has available cwnd, which results in other links not being fully utilized. In this work, we set out to design an algorithm, called Scheduled Window-based Transmission Control (SWTC), to improve the performance of packet scheduling in MPTCP, and we perform extensive simulations to evaluate its performance.
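The biased-feeding problem described above arises when the scheduler greedily picks the fastest subflow whenever it has available cwnd. As a hedged sketch (inspired by, not reproducing, SWTC), the round below hands each subflow a quota proportional to its cwnd, so slower links are still utilized; the subflow names and cwnd values are invented.

```python
def schedule_round(subflows, packets):
    """subflows: {name: cwnd}. Assign packet index -> subflow for one
    round, filling per-subflow quotas round-robin instead of always
    feeding the fastest link."""
    total = sum(subflows.values())
    quota = {name: cwnd for name, cwnd in subflows.items()}
    names = sorted(subflows)             # deterministic rotation order
    assignment, i = {}, 0
    for p in range(min(packets, total)):
        while quota[names[i % len(names)]] == 0:
            i += 1                       # skip exhausted subflows
        name = names[i % len(names)]
        assignment[p] = name
        quota[name] -= 1
        i += 1
    return assignment

a = schedule_round({"wifi": 2, "cellular": 1}, packets=3)
```

Even with the Wi-Fi subflow twice as wide, the cellular subflow still carries its share of the round rather than idling.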
A cross-domain communication resource scheduling method for grid-enabled communication networks
NASA Astrophysics Data System (ADS)
Zheng, Xiangquan; Wen, Xiang; Zhang, Yongding
2011-10-01
To support a wide range of grid applications in environments where heterogeneous communication networks coexist, it is important to enable on-demand, dynamic integration and efficient sharing of cross-domain heterogeneous communication resources, thereby providing communication services that no single communication resource could deliver alone. Based on plug-and-play sharing and soft integration of communication resources, a grid-enabled communication network is built flexibly to provide on-demand communication services for grid applications with varying quality-of-service requirements. Starting from an analysis of joint job and communication resource scheduling in grid-enabled communication networks (GECNs), this paper presents a cooperative cross-domain communication resource scheduling method and describes its main processes: traffic requirement resolution for communication services, cross-domain negotiation over communication resources, and on-demand communication resource scheduling. The method provides communication service capability for cross-domain traffic delivery in GECNs. Future work toward validating and implementing the method is outlined.
Intelligent scheduling of execution for customized physical fitness and healthcare system.
Huang, Chung-Chi; Liu, Hsiao-Man; Huang, Chung-Lin
2015-01-01
The physical fitness and health of white-collar workers have worsened in recent years, so it is necessary to develop a system that can enhance people's physical fitness and health. Although an exercise prescription can be generated after diagnosis in a customized physical fitness and healthcare system, generic scheduling is hard to match to individual execution needs. The main purpose of this research is therefore to develop intelligent scheduling of execution for a customized physical fitness and healthcare system. The diagnosis and prescription results of the system are generated by fuzzy logic inference, and are then scheduled and executed by intelligent computing; the execution schedule is generated using a genetic algorithm. This improves on traditional scheduling of exercise prescriptions for physical fitness and healthcare. Finally, we demonstrate the advantages of the intelligent scheduling of execution for the customized physical fitness and healthcare system.
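The genetic-algorithm scheduling step described above can be sketched on a toy stand-in problem: order exercise sessions to minimize "clash" between consecutive sessions. The fitness function, session names, and clash weights below are invented for illustration and are not the paper's model.

```python
import random

def fitness(order, clash):
    # higher is better: penalize clash weight between adjacent sessions
    return -sum(clash.get((a, b), 0) for a, b in zip(order, order[1:]))

def mutate(order):
    # swap two random positions (a simple permutation mutation)
    child = list(order)
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(sessions, clash, generations=200, pop_size=10, seed=0):
    random.seed(seed)
    pop = [random.sample(sessions, len(sessions)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: fitness(o, clash), reverse=True)
        pop = pop[: pop_size // 2]                       # selection
        pop += [mutate(random.choice(pop))               # reproduction
                for _ in range(pop_size - len(pop))]
    return max(pop, key=lambda o: fitness(o, clash))

clash = {("run", "squat"): 3, ("squat", "run"): 3}
best = evolve(["run", "squat", "stretch"], clash)
```

With these weights, good schedules keep "run" and "squat" non-adjacent, which is the kind of individual constraint the paper's GA-based scheduler is meant to respect.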
High energy, high average power solid state green or UV laser
Hackel, Lloyd A.; Norton, Mary; Dane, C. Brent
2004-03-02
A system for producing a green or UV output beam for illuminating a large area with relatively high beam fluence. A Nd:glass laser produces a near-infrared output by means of an oscillator that generates a high-quality but low-power beam, followed by multi-pass amplification in a zig-zag slab amplifier with wavefront correction in a phase conjugator at the midway point of the multi-pass amplification. The green or UV output is generated by conversion crystals that follow the final pass through the zig-zag slab amplifier.
Space power system scheduling using an expert system
NASA Technical Reports Server (NTRS)
Bahrami, K. A.; Biefeld, E.; Costello, L.; Klein, J. W.
1986-01-01
One of the most pressing problems in space exploration is timely spacecraft power system sequence generation, which requires scheduling a set of loads given a set of resource constraints. This is particularly important after an anomaly or failure. This paper discusses the power scheduling problem and how the software program Plan-It can be used as a consultant for scheduling power system activities. Modeling of power activities, the human interface, and two of the many strategies used by Plan-It are discussed. Preliminary results showing the development of a conflict-free sequence from an initial sequence with conflicts are presented. They show that a 4-day schedule can be generated in a matter of a few minutes, which in many cases provides sufficient time to aid the crew in replanning loads and generation use following a failure or anomaly.
NASA Technical Reports Server (NTRS)
Richards, Stephen F.
1991-01-01
Although automation has realized significant gains in many areas, one area, scheduling, has enjoyed few of its benefits. The traditional methods of industrial engineering and operations research have not proven robust enough to handle the complexities associated with scheduling realistic problems. To address this need, NASA has developed the computer-aided scheduling system (COMPASS), a sophisticated, interactive scheduling tool that is in widespread use within NASA and the contractor community. However, COMPASS provides no explicit support for the large class of problems in which several people, perhaps at various locations, build separate schedules that share a common pool of resources. This research examines the issue of distributed scheduling, as applied to application domains characterized by the partial ordering of tasks, limited resources, and time restrictions. The focus of this research is on identifying issues related to distributed scheduling, locating applicable problem domains within NASA, and suggesting areas for ongoing research. The issues this research identifies are goals, rescheduling requirements, database support, the need for communication and coordination among individual schedulers, the potential for expert system support for scheduling, and the possibility of integrating artificially intelligent schedulers into a network of human schedulers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flory, John Andrew; Padilla, Denise D.; Gauthier, John H.
Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving ASIC production productivity and efficiency, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products and unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need for external supplier task synchronization, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.
5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...
5 CFR 532.255 - Regular appropriated fund wage schedules in foreign areas.
Code of Federal Regulations, 2011 CFR
2011-01-01
... schedules shall provide rates of pay for nonsupervisory, leader, supervisory, and production facilitating employees. (b) Schedules shall be— (1) Computed on the basis of a simple average of all regular appropriated fund wage area schedules in effect on December 31; and (2) Effective on the first day of the first pay...
5 CFR 532.271 - Special wage schedules for National Park Service positions in overlap areas.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Special wage schedules for National Park... wage schedules for National Park Service positions in overlap areas. (a)(1) The Department of the Interior shall establish special schedules for wage employees of the National Park Service whose duty...
5 CFR 532.271 - Special wage schedules for National Park Service positions in overlap areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Special wage schedules for National Park... wage schedules for National Park Service positions in overlap areas. (a)(1) The Department of the Interior shall establish special schedules for wage employees of the National Park Service whose duty...
5 CFR 532.271 - Special wage schedules for National Park Service positions in overlap areas.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Special wage schedules for National Park... wage schedules for National Park Service positions in overlap areas. (a)(1) The Department of the Interior shall establish special schedules for wage employees of the National Park Service whose duty...
5 CFR 532.271 - Special wage schedules for National Park Service positions in overlap areas.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Special wage schedules for National Park... wage schedules for National Park Service positions in overlap areas. (a)(1) The Department of the Interior shall establish special schedules for wage employees of the National Park Service whose duty...
5 CFR 532.271 - Special wage schedules for National Park Service positions in overlap areas.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Special wage schedules for National Park... wage schedules for National Park Service positions in overlap areas. (a)(1) The Department of the Interior shall establish special schedules for wage employees of the National Park Service whose duty...
Chung, Kwangzoo; Kim, Jinsung; Ahn, Sung Hwan; Ju, Sang Gyu; Jung, Sang Hoon; Chung, Yoonsun; Cho, Sungkoo; Jo, Kwanghyun; Shin, Eun Hyuk; Hong, Chae-Seon; Shin, Jung Suk; Park, Seyjoon; Kim, Dae-Hyun; Kim, Hye Young; Lee, Boram; Shibagaki, Gantaro; Nonaka, Hideki; Sasai, Kenzo; Koyabu, Yukio; Choi, Changhoon; Huh, Seung Jae; Ahn, Yong Chan; Pyo, Hong Ryull; Lim, Do Hoon; Park, Hee Chul; Park, Won; Oh, Dong Ryul; Noh, Jae Myung; Yu, Jeong Il; Song, Sanghyuk; Lee, Ji Eun; Lee, Bomi; Choi, Doo Ho
2015-01-01
Purpose The purpose of this report is to describe the proton therapy system at Samsung Medical Center (SMC-PTS) including the proton beam generator, irradiation system, patient positioning system, patient position verification system, respiratory gating system, and operating and safety control system, and review the current status of the SMC-PTS. Materials and Methods The SMC-PTS has a cyclotron (230 MeV) and two treatment rooms: one treatment room is equipped with a multi-purpose nozzle and the other treatment room is equipped with a dedicated pencil beam scanning nozzle. The proton beam generator including the cyclotron and the energy selection system can lower the energy of protons down to 70 MeV from the maximum 230 MeV. Results The multi-purpose nozzle can deliver both wobbling proton beam and active scanning proton beam, and a multi-leaf collimator has been installed in the downstream of the nozzle. The dedicated scanning nozzle can deliver active scanning proton beam with a helium gas filled pipe minimizing unnecessary interactions with the air in the beam path. The equipment was provided by Sumitomo Heavy Industries Ltd., RayStation from RaySearch Laboratories AB is the selected treatment planning system, and data management will be handled by the MOSAIQ system from Elekta AB. Conclusion The SMC-PTS located in Seoul, Korea, is scheduled to begin treating cancer patients in 2015. PMID:26756034
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, J.; Du, Y.; Wen, J.; Zhong, B.; Wang, K.
2011-12-01
As remote sensing data accumulate, generating highly accurate and consistent land surface parameter products from multi-source remote observations is a significant challenge, for which radiative transfer modeling and inversion methodology are the theoretical bases. In this paper, recent research advances and unresolved issues are presented. First, after a general overview, recent advances in multi-scale remote sensing radiative transfer modeling are presented, including leaf spectrum models, vegetation canopy BRDF models, directional thermal infrared emission models, rugged mountain area radiation models, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is suggested, and the software system prototype is demonstrated. Finally, multi-scale field experiment campaigns, such as those in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure directional reflectance, emission, and scattering characteristics from visible, near-infrared, thermal infrared, and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain ground truth for LST, albedo, LAI, soil moisture, ET, etc. at the 1-km² scale for remote sensing product validation.
Artificial intelligence techniques for scheduling Space Shuttle missions
NASA Technical Reports Server (NTRS)
Henke, Andrea L.; Stottler, Richard H.
1994-01-01
Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.
Li, Xiangyu; Xie, Nijie; Tian, Xinyue
2017-01-01
This paper proposes a scheduling and power management solution for an energy-harvesting heterogeneous multi-core WSN node SoC such that the system operates perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous-multi-core-oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for lightweight platforms. Moreover, considering that the power consumption of most WSN applications exhibits data-dependent behavior, we introduce a branch-handling mechanism into the solution as well. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it enables a system to perform more valuable work while using more than 99.9% of the power budget. PMID:28208730
Assessment of Delivery Accuracy in an Operational-Like Environment
NASA Technical Reports Server (NTRS)
Sharma, Shivanjli; Wynnyk, Mitch
2016-01-01
In order to enable arrival management concepts and solutions in a Next Generation Air Transportation System (NextGen) environment, ground-based sequencing and scheduling functions were developed to support metering operations in the National Airspace System. These sequencing and scheduling tools are designed to assist air traffic controllers in developing an overall arrival strategy, from en route airspace down to the terminal area boundary. NASA developed a ground system concept and prototype capability called Terminal Sequencing and Spacing (TSAS) to extend metering operations into the terminal area down to the runway. To demonstrate the use of these scheduling and spacing tools in an operational-like environment, the FAA, NASA, and MITRE conducted an Operational Integration Assessment (OIA) of a prototype TSAS system at the FAA's William J. Hughes Technical Center (WJHTC). This paper presents an analysis of the arrival management strategies utilized and the delivery accuracy achieved during the OIA. The analysis demonstrates how en route preconditioning, in various forms, and schedule disruptions impact delivery accuracy. As the simulation spanned both en route and terminal airspace, the use of Ground Interval Management - Spacing (GIM-S) en route speed advisories was investigated. Delivery accuracy was measured as the difference between the Scheduled Time of Arrival (STA) and the Actual Time of Arrival (ATA). The delivery accuracy was computed across all runs conducted during the OIA, which included deviations from nominal operations that commonly occur in real operations, such as schedule changes and missed approaches. Overall, 83% of all flights were delivered into the terminal airspace within +/- 30 seconds of their STA, and 94% of flights were delivered within +/- 60 seconds. The meter fix delivery accuracy standard deviation was found to be between 36 and 55 seconds across all arrival procedures.
The data also showed that, when schedule disruptions were excluded, the percentage of aircraft delivered within +/- 30 seconds at the meter fix was between 85 and 90% across the various arrival procedures. This paper illustrates the ability to meet new delivery accuracy requirements in an operational-like environment using operational systems and NATCA controller participants, while also including common events that might cause disruptions to the schedule and overall system.
Multi-Attribute Task Battery - Applications in pilot workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Arnegard, Ruth J.; Comstock, J. R., Jr.
1991-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer-based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
The multi-attribute task battery for human operator workload and strategic behavior research
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Arnegard, Ruth J.
1992-01-01
The Multi-Attribute Task (MAT) Battery provides a benchmark set of tasks for use in a wide range of lab studies of operator performance and workload. The battery incorporates tasks analogous to activities that aircraft crewmembers perform in flight, while providing a high degree of experimenter control, performance data on each subtask, and freedom to use nonpilot test subjects. Features not found in existing computer based tasks include an auditory communication task (to simulate Air Traffic Control communication), a resource management task permitting many avenues or strategies of maintaining target performance, a scheduling window which gives the operator information about future task demands, and the option of manual or automated control of tasks. Performance data are generated for each subtask. In addition, the task battery may be paused and onscreen workload rating scales presented to the subject. The MAT Battery requires a desktop computer with color graphics. The communication task requires a serial link to a second desktop computer with a voice synthesizer or digitizer card.
Multi-objective group scheduling optimization integrated with preventive maintenance
NASA Astrophysics Data System (ADS)
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production for identical/similar and different jobs, this integrated model considers the learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in this integrated model. The multi-objective of minimizing total completion time and maintenance cost is taken to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computation results demonstrate that this integrated model is effective and reliable.
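To make the completion-time objective concrete, the sketch below uses a common position-based learning-effect model in which the job in position r takes p · r^a time units (a < 0). This is an illustrative assumption; the paper's exact learning, forgetting, and deterioration formulation is not specified in the abstract.

```python
# Hypothetical sketch: total completion time on a single machine under a
# position-based learning effect p[r] = p * r**a (a common model; the
# paper's exact learning/forgetting formulation is assumed here).

def total_completion_time(proc_times, a=-0.3):
    """Schedule jobs in the given order; the job in position r (1-based)
    takes p * r**a time units. Returns the sum of completion times."""
    t = 0.0
    total = 0.0
    for r, p in enumerate(proc_times, start=1):
        t += p * r ** a
        total += t
    return total

# Processing shorter jobs first helps both the learning effect and the
# total-completion-time objective:
print(total_completion_time([2.0, 4.0, 6.0]) < total_completion_time([6.0, 4.0, 2.0]))  # prints True
```

With a = 0 the model reduces to the standard total completion time, which makes the learning term easy to sanity-check.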
Thread scheduling for GPU-based OPC simulation on multi-thread
NASA Astrophysics Data System (ADS)
Lee, Heejun; Kim, Sangwook; Hong, Jisuk; Lee, Sooryong; Han, Hwansoo
2018-03-01
As semiconductor product development based on feature shrinkage continues, the accuracy and difficulty required of model-based optical proximity correction (MBOPC) are increasing. OPC simulation time, the most time-consuming part of MBOPC, is rapidly increasing due to high pattern density in a layout and complex OPC models. To reduce OPC simulation time, we apply a graphics processing unit (GPU) to MBOPC, because the OPC process parallelizes well. We address issues that typically arise during GPU-based OPC simulation in a multi-threaded system, such as out-of-memory errors and GPU idle time. To overcome these problems, we propose a thread scheduling method that manages OPC jobs in multiple threads such that simulation jobs from multiple threads are executed alternately on the GPU while correction jobs execute at the same time on each CPU core. We observed that peak GPU memory usage decreases by up to 35%, and MBOPC runtime also decreases by 4%. In cases where out-of-memory issues occur in a multi-threaded environment, the thread scheduler improved MBOPC runtime by up to 23%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weekes, B.; Ewins, D.; Acciavatti, F.
2014-05-27
To date, differing implementations of continuous-scan laser Doppler vibrometry have been demonstrated by various academic institutions, but since the scan paths were defined using step or sine functions from function generators, the paths were typically limited to 1D line scans or 2D areas such as raster paths or Lissajous trajectories. The excitation was previously often limited to a single frequency due to the specific signal processing performed to convert the scan data into an ODS. In this paper, a configuration of continuous-scan laser Doppler vibrometry is demonstrated which permits scanning of arbitrary areas, with the benefit of allowing multi-frequency/broadband excitation. Various means of generating scan paths to inspect arbitrary areas are discussed and demonstrated. Further, full 3D vibration capture is demonstrated by the addition of a range-finding facility to the described configuration, and iteratively relocating a single scanning laser head. Here, the range-finding facility was provided by a Microsoft Kinect, an inexpensive piece of consumer electronics.
De-Inventory Plan for Transuranic Waste Stored at Area G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargis, Kenneth Marshall; Christensen, Davis V.; Shepard, Mark D.
This report describes the strategy and detailed work plan developed by Los Alamos National Laboratory (LANL) to disposition transuranic (TRU) waste stored at its Area G radioactive waste storage site. The focus at this time is on disposition of 3,706 m³ of TRU waste stored above grade by June 30, 2014, which is one of the commitments within the Framework Agreement: Realignment of Environmental Priorities between the Department of Energy (DOE) National Nuclear Security Administration (NNSA) and the State of New Mexico Environment Department (NMED), Reference 1. A detailed project management schedule has been developed to manage this work and better ensure that all required activities are aligned and integrated. The schedule was developed in conjunction with personnel from the NNSA Los Alamos Site Office (LASO), the DOE Carlsbad Field Office (CBFO), the Central Characterization Project (CCP), and Los Alamos National Security, LLC (LANS). A detailed project management schedule for the remainder of the above grade inventory and the below grade inventory will be developed and incorporated into the De-Inventory Plan by December 31, 2012. This schedule will also include all newly-generated TRU waste received at Area G in FYs 2012 and 2013, which must be removed by no later than December 31, 2014, under the Framework Agreement. The TRU waste stored above grade at Area G is considered to be one of the highest nuclear safety risks at LANL, and the Defense Nuclear Facility Safety Board has expressed concern for the radioactive material at risk (MAR) contained within the above grade TRU waste inventory and has formally requested that DOE reduce the MAR. A large wildfire called the Las Conchas Fire burned extensive areas west of LANL in late June and July 2011. Although there was minimal to no impact by the fire to LANL, the fire heightened public concern and news media attention on TRU waste storage at Area G.
After the fire, New Mexico Governor Susana Martinez also requested that LANL accelerate disposition of TRU waste stored above grade at Area G. The 3,706 m³ volume of TRU waste stored above grade consists of 4,495 containers that include all above grade non-cemented waste as well as above grade cemented waste that was ready for characterization on October 1, 2011. This volume includes all newly-generated TRU waste currently stored at Area G as of October 1, 2011. This volume does not include the Bolas Grandes spheres, mixed low level waste (MLLW) containers, empty containers, cemented waste that requires remediation, projected newly generated TRU waste from FY 2012 and later, or TRU waste stored below grade. The 3,706 m³ volume represents about 86 percent of the total volume of TRU waste stored above grade on October 1, 2011. The De-Inventory Plan supports the DOE Office of Environmental Management (EM) goal to disposition 90% of the Legacy TRU waste within the DOE complex by the end of 2015 as stated in its Roadmap for EM's Journey to Excellence (Reference 2). The plan also addresses precursor actions for disposition of TRU waste that are necessary for compliance with the Compliance Order on Consent issued by the NMED in 2005 (Reference 3).
Dedicated heterogeneous node scheduling including backfill scheduling
Wood, Robert R [Livermore, CA; Eckert, Philip D [Livermore, CA; Hommes, Gregg [Pleasanton, CA
2006-07-25
A method and system for job backfill scheduling of dedicated heterogeneous nodes in a multi-node computing environment. Heterogeneous nodes are grouped into homogeneous node sub-pools. For each sub-pool, a free node schedule (FNS) is created to chart the number of free nodes over time. For each prioritized job, the FNS of the sub-pools having nodes usable by that job is used to determine the earliest time range (ETR) capable of running the job, and the job is then scheduled to run in that ETR. If the ETR determined for a lower-priority job (LPJ) has a start time earlier than that of a higher-priority job (HPJ), the LPJ is scheduled in that ETR only if it would not disturb the anticipated start times of any HPJ previously scheduled for a future time. Thus, efficient utilization and throughput of such computing environments may be increased by utilizing resources that would otherwise remain idle.
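A minimal sketch of the backfill idea described above, with assumed details (discrete time steps, a single homogeneous sub-pool): each job is placed at its earliest feasible time given the reservations already made, so a lower-priority job can only start early by using capacity that no previously placed higher-priority reservation needs.

```python
# Illustrative backfill sketch (assumed details): jobs arrive in priority
# order as (job_id, nodes, duration); free[t] tracks nodes available at
# each discrete time step within a fixed horizon.

def earliest_start(free, nodes, duration, horizon):
    """Return the first time t at which `nodes` nodes are free for
    `duration` consecutive steps, or None if nothing fits."""
    for t in range(horizon - duration + 1):
        if all(free[t + d] >= nodes for d in range(duration)):
            return t
    return None

def backfill_schedule(jobs, total_nodes, horizon):
    """jobs sorted by descending priority; returns {job_id: start_time}.
    Placing each job at its earliest feasible time never moves an
    already-made reservation, so higher-priority start times are safe."""
    free = [total_nodes] * horizon
    starts = {}
    for job_id, nodes, duration in jobs:
        t = earliest_start(free, nodes, duration, horizon)
        if t is None:
            continue  # job does not fit within the horizon
        starts[job_id] = t
        for d in range(duration):
            free[t + d] -= nodes  # reserve; later jobs may backfill gaps
    return starts

# A small job slips in alongside "wide" without delaying any reservation:
jobs = [("big", 4, 3), ("wide", 3, 2), ("small", 1, 2)]
print(backfill_schedule(jobs, total_nodes=4, horizon=10))
# prints {'big': 0, 'wide': 3, 'small': 3}
```

In the example, "small" starts at t=3 in the one node left over by "wide", rather than waiting for its turn in strict priority order.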
An Optimization Model for Scheduling Problems with Two-Dimensional Spatial Resource Constraint
NASA Technical Reports Server (NTRS)
Garcia, Christopher; Rabadi, Ghaith
2010-01-01
Traditional scheduling problems involve determining temporal assignments for a set of jobs in order to optimize some objective. Some scheduling problems also require the use of limited resources, which adds another dimension of complexity. In this paper we introduce a spatial resource-constrained scheduling problem that can arise in assembly, warehousing, cross-docking, inventory management, and other areas of logistics and supply chain management. This scheduling problem involves a two-dimensional rectangular area as a limited resource. Each job, in addition to having temporal requirements, has a width and a height and utilizes a certain amount of space inside the area. We propose an optimization model for scheduling the jobs while respecting all temporal and spatial constraints.
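As a small illustration of the spatial constraint (not the paper's optimization model), a necessary condition for any feasible schedule is that, at every instant, the total footprint of concurrently active jobs fits within the W × H area; actual feasibility additionally requires a non-overlapping rectangle packing.

```python
# Illustrative area relaxation (assumed, simplified check): jobs are
# (start, end, width, height) on a shared W x H rectangular area.

def area_feasible(jobs, W, H):
    """Check the area relaxation at every event time: the summed
    footprint of active jobs must never exceed W * H. This is
    necessary but not sufficient for a valid 2D packing."""
    times = sorted({t for s, e, _, _ in jobs for t in (s, e)})
    for t in times:
        used = sum(w * h for s, e, w, h in jobs if s <= t < e)
        if used > W * H:
            return False
    return True

print(area_feasible([(0, 4, 2, 2), (2, 6, 3, 2)], W=5, H=2))  # prints True
print(area_feasible([(0, 4, 4, 2), (2, 6, 3, 2)], W=5, H=2))  # prints False
```

The second call fails because at t=2 the two active jobs need 8 + 6 = 14 area units in a 10-unit region.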
Flockhart, D. T. Tyler; Wassenaar, Leonard I.; Martin, Tara G.; Hobson, Keith A.; Wunder, Michael B.; Norris, D. Ryan
2013-01-01
Insect migration may involve movements over multiple breeding generations at continental scales, resulting in formidable challenges to their conservation and management. Using distribution models generated from citizen scientist occurrence data and stable-carbon and -hydrogen isotope measurements, we tracked multi-generational colonization of the breeding grounds of monarch butterflies (Danaus plexippus) in eastern North America. We found that monarch breeding occurrence was best modelled with geographical and climatic variables, resulting in an annual breeding distribution of greater than 12 million km² that encompassed 99% occurrence probability. Combining occurrence models with stable isotope measurements to estimate natal origin, we show that butterflies which overwintered in Mexico came from a wide breeding distribution, including southern portions of the range. There was a clear northward progression of monarchs over successive generations from May until August, when reproductive butterflies began to change direction and moved south. Fifth-generation individuals breeding in Texas in the late summer/autumn tended to originate from northern breeding areas rather than regions further south. Although the Midwest was the most productive area during the breeding season, monarchs that re-colonized the Midwest were produced largely in Texas, suggesting that conserving breeding habitat in the Midwest alone is insufficient to ensure long-term persistence of the monarch butterfly population in eastern North America. PMID:23926146
2001-02-14
The STS-102 crew watches a slidewire basket speed down the line to the landing area. At left (backs to camera, back to front) are Commander James Wetherbee, Mission Specialists Susan Helms and Paul Richards. At right are (left to right) Mission Specialists Andrew Thomas and James Voss and Pilot James Kelly. Not seen is Mission Specialist Yury Usachev. The crew is taking part in Terminal Countdown Demonstration Test activities, which include the emergency exit training and a simulated launch countdown. STS-102 is the eighth construction flight to the International Space Station, with Space Shuttle Discovery carrying the Multi-Purpose Logistics Module Leonardo. Launch on mission STS-102 is scheduled for March 8
Tsiourlis, Georgios; Andreadakis, Stamatis; Konstantinidis, Pavlos
2009-01-01
The SITHON system, a fully wireless optical imaging system, integrating a network of in-situ optical cameras linking to a multi-layer GIS database operated by Control Operating Centres, has been developed in response to the need for early detection, notification and monitoring of forest fires. This article presents in detail the architecture and the components of SITHON, and demonstrates the first encouraging results of an experimental test with small controlled fires over Sithonia Peninsula in Northern Greece. The system has already been scheduled to be installed in some fire prone areas of Greece. PMID:22408536
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
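The core idea of a task graph scheduler can be sketched generically: tasks declare their dependencies, and a thread pool runs each task as soon as its inputs are complete. This is a minimal illustration of the concept only, not the HTGS API (which additionally manages GPU memory, I/O overlap, and streaming between tasks).

```python
# Generic dependency-driven task graph executor (illustrative sketch,
# not the HTGS API). Assumes the dependency graph is acyclic.
from concurrent.futures import ThreadPoolExecutor

def run_graph(tasks, deps):
    """tasks: {name: fn(list_of_dep_results) -> value};
    deps: {name: [dependency names]}. Returns {name: result}."""
    results = {}
    remaining = dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # A task is ready once all of its dependencies have results.
            ready = [n for n, d in remaining.items()
                     if all(x in results for x in d)]
            futures = {n: pool.submit(tasks[n],
                                      [results[x] for x in remaining[n]])
                       for n in ready}
            for n, f in futures.items():
                results[n] = f.result()
                del remaining[n]
    return results

# Tiny pipeline: two independent "load" tasks feed a "combine" task.
tasks = {"a": lambda _: 3, "b": lambda _: 4, "mul": lambda xs: xs[0] * xs[1]}
deps = {"a": [], "b": [], "mul": ["a", "b"]}
print(run_graph(tasks, deps)["mul"])  # prints 12
```

Here "a" and "b" run concurrently on the pool, and "mul" is dispatched only after both results are available, which is the same readiness rule a task graph scheduler applies at scale.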
NASA Technical Reports Server (NTRS)
2000-01-01
In the Space Station Processing Facility sits Raffaello, one of two Multi-Purpose Logistics Modules (MPLMs) built by Italy for the International Space Station. Raffaello is scheduled on mission STS-100, the 9th flight to the Space Station in 2001. The other MPLM is Leonardo, scheduled on an earlier mission, STS-102, the 8th flight early in 2001.
NASA Technical Reports Server (NTRS)
2000-01-01
In the Space Station Processing Facility sit Raffaello (left) and Leonardo (right), two Multi-Purpose Logistics Modules (MPLMs) built by Italy for the International Space Station. Leonardo is scheduled on mission STS-102, the 8th flight to the Space Station early in 2001. Raffaello is scheduled on mission STS-100, the 9th flight to the Space Station in 2001.
NASA Technical Reports Server (NTRS)
2000-01-01
In the Space Station Processing Facility sit Leonardo (left) and Raffaello (right), two Multi-Purpose Logistics Modules (MPLMs) built by Italy for the International Space Station. Raffaello is scheduled on mission STS-100, the 9th flight to the Space Station in 2001. The other MPLM is Leonardo, scheduled on an earlier mission, STS-102, the 8th flight early in 2001.
Scheduling viability tests for seeds in long-term storage based on a Bayesian Multi-Level Model
USDA-ARS?s Scientific Manuscript database
Genebank managers conduct viability tests on stored seeds so they can replace lots that have viability near a critical threshold, such as 50 or 85% germination. Currently, these tests are typically scheduled at uniform intervals; testing every 5 years is common. A manager needs to balance the cost...
The Multi-energy High precision Data Processor Based on AD7606
NASA Astrophysics Data System (ADS)
Zhao, Chen; Zhang, Yanchi; Xie, Da
2017-11-01
This paper designs an information collector based on the AD7606 to realize high-precision simultaneous acquisition of multi-source information in multi-energy systems, forming the information platform of the energy Internet at Laogang, where electricity is the major energy source. Combined with information fusion technologies, this paper analyzes the data to improve the overall energy system's scheduling capability and reliability.
Cooperative path planning for multi-USV based on improved artificial bee colony algorithm
NASA Astrophysics Data System (ADS)
Cao, Lu; Chen, Qiwei
2018-03-01
Due to the complex constraints, numerous uncertain factors, and critical real-time demands of path planning for multiple unmanned surface vehicles (multi-USV), an improved artificial bee colony (I-ABC) algorithm is proposed to solve the cooperative path planning model for multi-USV. First, a Voronoi diagram of the battlefield space is constructed to generate the optimal area for USV paths. Then a chaotic search algorithm is used to initialize the collection of paths, which is regarded as the food sources of the ABC algorithm. With limited data, the initial collection can search the optimal path area effectively. Finally, simulations of multi-USV path planning under various threats have been carried out. Simulation results verify that the I-ABC algorithm improves the diversity of the nectar sources and the convergence rate of the algorithm, and increases adaptability to the dynamic battlefield and unexpected threats for USVs.
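For readers unfamiliar with the underlying metaheuristic, a bare-bones artificial bee colony loop is sketched below on a continuous test function. This shows only the standard ABC mechanics (neighbor search around food sources, scout resets of exhausted sources); the paper's I-ABC additions, such as chaotic initialization and Voronoi-based path encoding, are not reproduced here.

```python
# Bare-bones ABC minimization on [lo, hi]^dim (illustrative only; not the
# paper's I-ABC). Food sources are candidate solutions; a source that
# fails to improve `limit` times is abandoned by a scout.
import random

def abc_minimize(f, dim=2, lo=-5.0, hi=5.0, n_sources=10, limit=20, iters=200):
    random.seed(1)  # deterministic for the example
    sources = [[random.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_sources)]
    fits = [f(s) for s in sources]
    trials = [0] * n_sources
    best = min(fits)
    for _ in range(iters):
        for i in range(n_sources):
            # Neighbor search: perturb one coordinate toward/away from
            # a randomly chosen other source.
            k = random.randrange(n_sources)
            j = random.randrange(dim)
            cand = sources[i][:]
            cand[j] += random.uniform(-1, 1) * (cand[j] - sources[k][j])
            cand[j] = min(max(cand[j], lo), hi)
            fc = f(cand)
            if fc < fits[i]:
                sources[i], fits[i], trials[i] = cand, fc, 0
                best = min(best, fc)
            else:
                trials[i] += 1
                if trials[i] > limit:  # scout: abandon exhausted source
                    sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                    fits[i], trials[i] = f(sources[i]), 0
    return best

# Sphere function: the best fitness found should approach the optimum 0.
print(abc_minimize(lambda x: sum(v * v for v in x)))
```

The greedy replacement plus scout reset is what gives ABC its balance of exploitation and exploration; the I-ABC paper targets exactly these two phases (source diversity and convergence rate).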
Novel high-fidelity realistic explosion damage simulation for urban environments
NASA Astrophysics Data System (ADS)
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact in modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems sufficiently realizes the criteria of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity, and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system can realistically simulate rubble generation, rubble flyout, and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles, and pedestrians in clusters of sequential and parallel damage events.
Automatic generation of efficient orderings of events for scheduling applications
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1994-01-01
In scheduling a set of tasks, it is often not known with certainty how long a given event will take; we call this duration uncertainty. Duration uncertainty is a primary obstacle to the successful completion of a schedule. If the duration of one task is longer than expected, the remaining tasks are delayed. The delay may result in the abandonment of the schedule itself, a phenomenon known as schedule breakage. One response to schedule breakage is on-line, dynamic rescheduling. A more recent alternative is called proactive rescheduling. This method uses statistical data about the durations of events in order to anticipate the locations in the schedule where breakage is likely, prior to the execution of the schedule. It generates alternative schedules at such sensitive points, which can then be applied by the scheduler at execution time without the delay incurred by dynamic rescheduling. This paper proposes a technique for making proactive error management more effective, based on applying a similarity-based clustering method to the problem of identifying similar events in a set of events.
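One simple way to realize the similarity-based clustering step is sketched below, with details assumed: each event is characterized by the mean and standard deviation of its observed durations, and a greedy pass groups events whose statistics are close, so pooled cluster statistics can flag schedule points where breakage is likely. The distance threshold and features are illustrative choices, not the paper's specification.

```python
# Greedy similarity clustering of events by duration statistics
# (illustrative sketch; the paper's exact similarity measure is assumed).

def cluster_events(events, tol=1.0):
    """events: {name: (mean, std)}. Join an event to the first cluster
    whose centroid lies within `tol` (Euclidean distance in the
    (mean, std) plane); otherwise open a new cluster."""
    clusters = []  # each entry: [centroid, [member names]]
    for name, (m, s) in events.items():
        for c in clusters:
            (cm, cs), names = c
            if ((m - cm) ** 2 + (s - cs) ** 2) ** 0.5 <= tol:
                names.append(name)
                n = len(names)
                # Running-mean centroid update over cluster members.
                c[0] = (cm + (m - cm) / n, cs + (s - cs) / n)
                break
        else:
            clusters.append([(m, s), [name]])
    return [names for _, names in clusters]

# Two similar short events group together; the long, variable one stands alone.
events = {"fuel": (10.0, 2.0), "vent": (10.5, 2.2), "deploy": (30.0, 8.0)}
print(cluster_events(events))  # prints [['fuel', 'vent'], ['deploy']]
```

A cluster with high pooled variance marks the schedule region where alternative schedules are worth generating in advance.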
NASA Astrophysics Data System (ADS)
Breinl, Korbinian; Di Baldassarre, Giuliano; Girons Lopez, Marc
2017-04-01
We assess the uncertainties of multi-site rainfall generation across spatial scales and different climatic conditions. Many research subjects in the earth sciences, such as floods, droughts, or water balance simulations, require the generation of long rainfall time series. In large study areas, simulation at multiple sites becomes indispensable to account for spatial rainfall variability, but is more complex than at a single site due to the intermittent nature of rainfall. Weather generators can be used to extrapolate rainfall time series, and various models have been presented in the literature. Even though the large majority of multi-site rainfall generators are based on similar methods, such as resampling techniques or Markovian processes, they often become too complex; we think this complexity has limited the application of such tools. Furthermore, the majority of multi-site rainfall generators in the literature are either not publicly available or intended for small geographical scales, often only in temperate climates. Here we present a revised, and now publicly available, version of a multi-site rainfall generation code first applied in 2014 in Austria and France, which we call TripleM (Multisite Markov Model). We test this fast and robust code with daily rainfall observations from the United States, in subtropical, tropical, and temperate climates, using rain gauge networks with a maximum site distance above 1,000 km, thereby generating one million years of synthetic time series. Modelling these one million years takes one night on a recent desktop computer.
In this research, we first start the simulations with a small station network of three sites and progressively increase the number of sites and the spatial extent, and analyze the changing uncertainties for multiple statistical metrics such as dry and wet spells, rainfall autocorrelation, lagged cross correlations and the inter-annual rainfall variability. Our study contributes to the scientific community of earth sciences and the ongoing debate on extreme precipitation in a changing climate by making a stable, and very easily applicable, multi-site rainfall generation code available to the research community and providing a better understanding of the performance of multi-site rainfall generation depending on spatial scales and climatic conditions.
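The Markovian core of such generators can be sketched for a single site: a two-state wet/dry Markov chain drives rainfall occurrence, and wet-day amounts are drawn from a simple distribution (exponential here, as an assumption). TripleM's multi-site coupling of these chains across gauges is the harder part and is omitted from this illustration.

```python
# Single-site sketch of a Markov-chain daily rainfall generator
# (illustrative; TripleM's multi-site coupling is not shown, and the
# exponential amount model is an assumption).
import random

def generate_rainfall(days, p_wd=0.3, p_ww=0.6, mean_wet_mm=8.0, seed=42):
    """p_wd: P(wet | previous day dry); p_ww: P(wet | previous day wet).
    Returns a list of daily rainfall amounts in mm (0.0 on dry days)."""
    random.seed(seed)
    series, wet = [], False
    for _ in range(days):
        wet = random.random() < (p_ww if wet else p_wd)
        series.append(random.expovariate(1.0 / mean_wet_mm) if wet else 0.0)
    return series

rain = generate_rainfall(10000)
wet_frac = sum(r > 0 for r in rain) / len(rain)
# Stationary wet-day fraction of the chain is p_wd / (1 + p_wd - p_ww),
# i.e. 0.3 / 0.7 ≈ 0.43 for these parameters.
print(wet_frac)
```

The transition probabilities control exactly the dry- and wet-spell statistics that the study evaluates, which is why Markovian occurrence models remain the standard building block for such generators.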
Mousavi, Maryam; Yap, Hwa Jen; Musa, Siti Nurmaya; Tahriri, Farzad; Md Dawal, Siti Zawiah
2017-01-01
A flexible manufacturing system (FMS) enhances a firm's flexibility and responsiveness to ever-changing customer demand by providing a fast product diversification capability. Performance of an FMS is highly dependent upon the accuracy of the scheduling policy for the components of the system, such as automated guided vehicles (AGVs). An AGV, as a mobile robot, provides remarkable industrial capabilities for material and goods transportation within a manufacturing facility or a warehouse. Allocating AGVs to tasks, while considering the cost and time of operations, defines the AGV scheduling process. Multi-objective scheduling of AGVs, unlike single-objective practices, is a complex and combinatorial process. In the main body of the research, a mathematical model was developed and integrated with evolutionary algorithms (genetic algorithm (GA), particle swarm optimization (PSO), and hybrid GA-PSO) to optimize the task scheduling of AGVs with the objectives of minimizing makespan and the number of AGVs while considering the AGVs' battery charge. Assessment of the numerical examples' scheduling before and after the optimization proved the applicability of all three algorithms in decreasing the makespan and the number of AGVs. The hybrid GA-PSO produced the optimum result and outperformed the other two algorithms, with mean AGV operation efficiencies of 69.4, 74, and 79.8 percent for PSO, GA, and hybrid GA-PSO, respectively. Evaluation and validation of the model were performed by simulation via Flexsim software.
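As a rough illustration of the kind of search such evolutionary algorithms perform, the sketch below uses a plain genetic algorithm to assign tasks to AGVs so as to minimize makespan. All names, operators and parameters are simplified stand-ins; the paper's model additionally accounts for battery charge and the number of AGVs:

```python
import random

def makespan(assignment, task_times, n_agvs):
    # total busy time per AGV; the makespan is the largest load
    loads = [0.0] * n_agvs
    for task, agv in enumerate(assignment):
        loads[agv] += task_times[task]
    return max(loads)

def ga_schedule(task_times, n_agvs, pop=30, gens=100, seed=1):
    """Toy GA: a chromosome maps each task index to an AGV index."""
    rng = random.Random(seed)
    n = len(task_times)
    population = [[rng.randrange(n_agvs) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        # elitist selection: keep the better half
        population.sort(key=lambda a: makespan(a, task_times, n_agvs))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.randrange(n_agvs)  # point mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: makespan(a, task_times, n_agvs))

best = ga_schedule([4, 3, 2, 2, 1], n_agvs=2)
```

A PSO variant would replace the crossover/mutation step with velocity-based updates over a continuous relaxation of the same encoding.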
Su, Xianli; Wei, Ping; Li, Han; Liu, Wei; Yan, Yonggao; Li, Peng; Su, Chuqi; Xie, Changjun; Zhao, Wenyu; Zhai, Pengcheng; Zhang, Qingjie; Tang, Xinfeng; Uher, Ctirad
2017-05-01
Considering that only about one third of the world's energy consumption is effectively utilized for functional uses while the remainder is dissipated as waste heat, thermoelectric (TE) materials, which offer a direct and clean thermal-to-electric conversion pathway, have generated tremendous worldwide interest. The last two decades have witnessed a remarkable development in TE materials. This Review summarizes the efforts devoted to the study of non-equilibrium synthesis of TE materials with multi-scale structures, their transport behavior, and areas of applications. Studies that work towards the ultimate goal of developing highly efficient TE materials possessing multi-scale architectures are highlighted, encompassing the optimization of TE performance via engineering of structures with different dimensional aspects, spanning from the atomic and molecular scales, to nanometer sizes, and to the mesoscale. In consideration of the practical applications of high-performance TE materials, the non-equilibrium approaches offer fast and controllable fabrication of multi-scale microstructures, and their scale-up to industrial-size manufacturing is emphasized here. Finally, the designs of two integrated power-generating TE systems are described, a solar thermoelectric-photovoltaic hybrid system and a vehicle waste heat harvesting system, which represent perhaps the most important applications of thermoelectricity in the energy conversion area. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Pro-active Real-time Forecasting and Decision Support System for Daily Management of Marine Works
NASA Astrophysics Data System (ADS)
Bollen, Mark; Leyssen, Gert; Smets, Steven; De Wachter, Tom
2016-04-01
Marine works involving turbidity-generating activities (e.g., dredging, dredge spoil placement) can generate environmental stress in and around a project area in the form of sediment plumes causing light reduction and sedimentation. If these works are situated near sensitive habitats like sea-grass beds or coral reefs, near sensitive human activities such as aquaculture farms or water intakes, or if contaminants are present in the water or soil, environmental scrutiny is advised. Environmental regulations can impose limitations on these activities in the form of turbidity thresholds, spill budgets, or contaminant levels. Breaching environmental regulations can result in increased monitoring, adaptation of the works planning and production rates, and ultimately in a (temporary) stop of activities, all of which entail time and cost impacts for a contractor and/or client. Sediment plume behaviour is governed by the dredging process, soil properties and ambient conditions (currents, water depth) and can be modelled. Usually this is done during the preparatory EIA phase of a project, for estimation of environmental impact based on climatic scenarios. An operational forecasting tool has been developed to adapt marine work schedules to real-time circumstances and thus avoid exceedance of critical threshold levels at sensitive areas. The forecasting system is based on a Python-based workflow manager with a MySQL database and a Django frontend web tool for user interaction and visualisation of the model results. The core consists of a numerical hydrodynamic model with a sediment transport module (Mike21 from DHI). This model is driven by space- and time-varying wind fields and wave boundary conditions, and by turbidity inputs (suspended sediment source terms) based on marine works production rates and soil properties. The resulting threshold analysis allows the operator to identify potential impact at the sensitive areas and instigate an adaptation of the marine work schedule if needed.
In order to use this toolbox in real-time situations and facilitate forecasting of impacts of planned dredge works, the following operational online functionalities are implemented: • Automated fetching and preparation of the input data, including 7-day forecast wind and wave fields, real-time measurements, and user-defined turbidity inputs based on scheduled marine works. • Generation of automated forecasts while running user-configurable scenarios in parallel. • Export and conversion of the model results, time series and maps, into a standardized format (netCDF). • Automatic analysis and processing of model results, including the calculation of indicator turbidity values and the exceedance analysis of threshold levels at the different sensitive areas. Data assimilation with the real-time on-site turbidity measurements is implemented in this threshold analysis. • Pre-programmed generation of animated sediment plumes, specific charts and PDF reports to allow rapid interpretation of the model results by the operators and facilitate decision making in operational planning. The performed marine works, resulting from the marine work schedule proposed by the forecasting system, are evaluated by a threshold analysis on the validated turbidity measurements at the sensitive sites. This feedback loop allows the system to be checked in order to evaluate forecast and model uncertainties.
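The threshold-exceedance analysis at the core of such a system can be sketched as a simple scan for sustained exceedances in a turbidity time series. The function and its parameters are hypothetical, not the tool's actual API:

```python
def exceedance_alarms(series, threshold, max_consecutive):
    """Return the start indices of sustained exceedance events: an alarm
    is raised when more than `max_consecutive` consecutive values in
    `series` (one turbidity value per interval) exceed `threshold`."""
    alarms = []
    run = 0          # length of the current exceedance run
    start = None     # index where the current run began
    for i, value in enumerate(series):
        if value > threshold:
            if run == 0:
                start = i
            run += 1
            if run == max_consecutive + 1:
                alarms.append(start)   # report each event once
        else:
            run = 0
    return alarms
```

In an operational setting each alarm would trigger a proposed adaptation of the marine work schedule rather than just a report.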
Scheduling Results for the THEMIS Observation Scheduling Tool
NASA Technical Reports Server (NTRS)
Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip
2011-01-01
We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potentially thousands of candidate observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
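The grid-based mapping from polygon overlap to set covering can be illustrated with a minimal greedy set-cover sketch; the identifiers are illustrative, and the real tool additionally enforces timing and resource constraints when picking each observation:

```python
def greedy_cover(universe, candidates):
    """Greedy set covering: repeatedly pick the candidate observation
    that covers the most still-uncovered grid cells.
    `candidates` maps an observation id to the set of cells it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # remaining cells cannot be covered by any candidate
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered
```

Greedy set cover is a natural fit here because it gives a logarithmic approximation guarantee while remaining fast enough to re-run as operations constraints prune the candidate pool.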
Implementation of hospital examination reservation system using data mining technique.
Cha, Hyo Soung; Yoon, Tae Sik; Ryu, Ki Chung; Shin, Il Won; Choe, Yang Hyo; Lee, Kyoung Yong; Lee, Jae Dong; Ryu, Keun Ho; Chung, Seung Hyun
2015-04-01
New methods for providing appropriate information to users have been attempted with the development of information technology and the Internet. Among such methods, the demand for systems and services that can improve patient satisfaction has increased in hospital care environments. In this paper, we propose the Hospital Exam Reservation System (HERS), which uses a data mining method. First, we focused on clinical exam data and on finding the optimal schedule, generating rules with a multi-examination pattern-mining algorithm. Then, HERS was implemented as a rule master and recommendation system with an exam log. Finally, HERS was designed with a user-friendly interface. HERS has been in use at the National Cancer Center in Korea since June 2014. As the number of scheduled exams increased, the time required to schedule more than a single condition decreased (from 398.67% to 168.67% and from 448.49% to 188.49%; p < 0.0001). As the number of tests increased, the difference between HERS and non-HERS increased (from 0.18 days to 0.81 days). It was possible to extend the efficiency gains of HERS using mining technology not only to exam reservations, but also to the wider medical environment. The proposed system, based on doctor prescriptions, removes exams that were not executed in order to improve recommendation accuracy. In addition, we expect HERS to become an effective system in various medical environments.
Development of an Open Rotor Cycle Model in NPSS Using a Multi-Design Point Approach
NASA Technical Reports Server (NTRS)
Hendricks, Eric S.
2011-01-01
NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft (Refs. 1 and 2). The open rotor concept (also referred to as the Unducted Fan or advanced turboprop) may allow the achievement of this objective by reducing engine emissions and fuel consumption. To evaluate its potential impact, an open rotor cycle modeling capability is needed. This paper presents the initial development of an open rotor cycle model in the Numerical Propulsion System Simulation (NPSS) computer program which can then be used to evaluate the potential benefit of this engine. The development of this open rotor model necessitated addressing two modeling needs within NPSS. First, a method for evaluating the performance of counter-rotating propellers was needed. Therefore, a new counter-rotating propeller NPSS component was created. This component uses propeller performance maps developed from historic counter-rotating propeller experiments to determine the thrust delivered and power required. Second, several methods for modeling a counter-rotating power turbine within NPSS were explored. These techniques used several combinations of turbine components within NPSS to provide the necessary power to the propellers. Ultimately, a single turbine component with a conventional turbine map was selected. Using these modeling enhancements, an open rotor cycle model was developed in NPSS using a multi-design point approach. The multi-design point (MDP) approach improves the engine cycle analysis process by making it easier to properly size the engine to meet a variety of thrust targets throughout the flight envelope. A number of design points are considered including an aerodynamic design point, sea-level static, takeoff and top of climb. 
The development of this MDP model was also enabled by the selection of a simple power management scheme which schedules propeller blade angles with the freestream Mach number. Finally, sample open rotor performance results and areas for further model improvements are presented.
Automated Scheduling Via Artificial Intelligence
NASA Technical Reports Server (NTRS)
Biefeld, Eric W.; Cooper, Lynne P.
1991-01-01
Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.
An Interactive Decision Support System for Scheduling Fighter Pilot Training
2002-03-26
Deitel, H.M. and Deitel, P.J. C: How to Program, 2nd ed., Prentice Hall, 1994. 8. Deitel, H.M. and Deitel, P.J. Java: How to Program... Using the Visual Basic programming language, the Excel tool was modified in several ways. Scheduling dispatch rules are implemented to automatically generate...
Development of a decentralized multi-axis synchronous control approach for real-time networks.
Xu, Xiong; Gu, Guo-Ying; Xiong, Zhenhua; Sheng, Xinjun; Zhu, Xiangyang
2017-05-01
The message scheduling and the network-induced delays of real-time networks, together with the different inertias and disturbances of different axes, make the synchronous control of real-time network-based systems quite challenging. To address this challenge, a decentralized multi-axis synchronous control approach is developed in this paper. Due to the limitations of message scheduling and network bandwidth, the position synchronization error is first defined in the proposed control approach over a subset of preceding-axis pairs. Then, a motion message estimator is designed to reduce the effect of network delays. It is proven that the position and synchronization errors asymptotically converge to zero under the proposed controller with delay compensation. Finally, simulation and experimental results show that the developed control approach achieves good position synchronization performance for multi-axis motion over a real-time network. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
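A simplified stand-in for the preceding-axis-pair synchronization error might look as follows; the paper's actual definition also involves the motion message estimator and delay compensation, which are omitted in this sketch:

```python
def sync_errors(positions, references):
    """For each axis i > 0, the synchronization error is the difference
    between that axis's tracking error and the preceding axis's tracking
    error (a simplified preceding-axis-pair coupling)."""
    track = [p - r for p, r in zip(positions, references)]
    return [track[i] - track[i - 1] for i in range(1, len(track))]
```

Driving these pairwise errors to zero, in addition to the individual tracking errors, is what couples the axes together despite their different inertias.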
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named the Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into the multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search spaces, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP-complete problem of test scheduling for Network-on-Chip (NoC). Experimental results show that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
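Heavy-tailed Levy steps of the kind LFMVO injects are commonly drawn with Mantegna's algorithm; the abstract does not specify the sampling scheme, so the following is a generic sketch rather than the paper's implementation:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-flight step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2), v ~ N(0, 1),
    giving a heavy-tailed distribution with exponent beta in (1, 2]."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

Most steps are small (local refinement around the best universe) while occasional very large jumps provide the escape from stagnation described above.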
Flexible quality of service model for wireless body area sensor networks.
Liao, Yangzhe; Leeson, Mark S; Higgins, Matthew D
2016-03-01
Wireless body area sensor networks (WBASNs) are becoming an increasingly significant breakthrough technology for smart healthcare systems, enabling improved clinical decision-making in daily medical care. Recently, radio frequency ultra-wideband technology has developed substantially for physiological signal monitoring due to its advantages such as low power consumption, high transmission data rate, and miniature antenna size. Applications of future ubiquitous healthcare systems offer the prospect of collecting human vital signs, early detection of abnormal medical conditions, real-time healthcare data transmission and remote telemedicine support. However, due to the technical constraints of sensor batteries, the supply of power is a major bottleneck for healthcare system design. Moreover, medium access control (MAC) needs to support reliable transmission links that allow sensors to transmit data safely and stably. In this Letter, the authors provide a flexible quality of service model for ad hoc networks that can support fast data transmission, adaptive MAC schedule control, and energy-efficient ubiquitous WBASNs. Results show that the proposed multi-hop communication ad hoc network model can balance information packet collisions and power consumption. Additionally, the wireless communication links in WBASNs can effectively overcome multi-user interference and offer high transmission data rates for healthcare systems.
NASA Technical Reports Server (NTRS)
Craft, R.; Dunn, C.; Mccord, J.; Simeone, L.
1980-01-01
A user guide and programmer documentation are provided for a system of PRIME 400 minicomputer programs. The system was designed to support loading analyses on the Tracking and Data Relay Satellite System (TDRSS). The system is a scheduler for various types of data relays (including tape recorder dumps and real-time relays) from orbiting payloads to the TDRSS. Several model options are available to statistically generate data relay requirements. TDRSS time lines (representing resources available for scheduling) and payload/TDRSS acquisition and loss of sight time lines are input to the scheduler from disk. Tabulated output from the interactive system includes a summary of the scheduler activities over time intervals specified by the user and an overall summary of scheduler input and output information. A history file, which records every event generated by the scheduler, is written to disk to allow further scheduling on remaining resources and to provide data for graphic displays or additional statistical analysis.
Creative employee scheduling in the health information management department.
Hyde, C S
1998-02-01
What effect do schedules have on employees and department activities? Negative effects such as backlogs, poor employee morale, and absenteeism may be due to scheduling practices currently in place. The value of effective employee scheduling practices may be seen in areas of improved productivity. The process of developing schedules should include assessing department areas, understanding operational needs, choosing an option, and implementation. Finding a schedule that meets the needs of managers as well as those of the employees is rewarding. It is a win-win situation, and the benefits can yield increased productivity, decreased turnover, and higher morale.
State-of-the-Art: DTM Generation Using Airborne LIDAR Data
Chen, Ziyue; Gao, Bingbo; Devereux, Bernard
2017-01-01
Digital terrain model (DTM) generation is a fundamental application of airborne Lidar data. In past decades, a large body of studies has been conducted to propose and test a variety of DTM generation methods. Although great progress has been made, DTM generation, especially in specific terrain situations, remains challenging. This research introduces the general principles of DTM generation and reviews diverse mainstream DTM generation methods. According to the filtering strategy, these methods are classified into six categories: surface-based adjustment, morphology-based filtering, triangulated irregular network (TIN)-based refinement, segmentation and classification, statistical analysis, and multi-scale comparison. Typical methods for each category are briefly introduced and the merits and limitations of each category are discussed accordingly. Despite their different filtering strategies, these DTM generation methods present similar difficulties when implemented in sharply changing terrain, areas with dense non-ground features, and complicated landscapes. This paper suggests that the fusion of multiple sources and the integration of different methods can be effective ways to improve the performance of DTM generation. PMID:28098810
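As a toy example of the morphology-based filtering category, a 1-D elevation profile can be morphologically opened (erosion then dilation) and points rising above the opened surface rejected as non-ground. The window size and tolerance are illustrative; real filters operate on 2-D point clouds with adaptive windows:

```python
def morphological_ground_filter(elevations, window, slope_tol):
    """Classify profile points as ground (True) / non-ground (False).
    Opening = erosion (local min) followed by dilation (local max)
    within `window`; points more than `slope_tol` above the opened
    surface are treated as non-ground features (e.g. buildings)."""
    n = len(elevations)
    half = window // 2
    eroded = [min(elevations[max(0, i - half):i + half + 1]) for i in range(n)]
    opened = [max(eroded[max(0, i - half):i + half + 1]) for i in range(n)]
    return [z - opened[i] <= slope_tol for i, z in enumerate(elevations)]
```

The familiar trade-off of this category shows up directly in the parameters: a window smaller than a building leaves it in the DTM, while a window wider than a terrain ridge flattens the ridge away.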
Dypas: A dynamic payload scheduler for shuttle missions
NASA Technical Reports Server (NTRS)
Davis, Stephen
1988-01-01
Decision and analysis systems have had broad and very practical application areas in the human decision-making process. These software systems range from the help sections in simple accounting packages to the more complex computer configuration programs. Dypas is a decision and analysis system that aids prelaunch shuttle scheduling, and has added functionality to aid the rescheduling done in flight. Dypas is written in Common Lisp on a Symbolics Lisp machine. Dypas differs from other scheduling programs in that it can draw its knowledge from different rule bases and apply them to different rule interpretation schemes. The system has been coded with Flavors, an object-oriented extension to Common Lisp on the Symbolics hardware. This allows implementation of objects (experiments) to better match the problem definition, and allows a more coherent solution space to be developed. Dypas was originally developed to test a programmer's aptitude for Common Lisp and the Symbolics software environment. Since then the system has grown into a large software effort involving several programmers and researchers. Dypas currently uses two expert systems and three inferencing procedures to generate a many-object schedule. The paper reviews the abilities of Dypas and comments on its functionality.
NASA Technical Reports Server (NTRS)
Malik, Waqar
2016-01-01
Provide an overview of algorithms used in SARDA (Spot and Runway Departure Advisor) HITL (Human-in-the-Loop) simulation for Dallas Fort-Worth International Airport and Charlotte Douglas International airport. Outline a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the single runway scheduling (SRS) problem, and discuss heuristics to restrict the search space for the DP based algorithm and provide improvements.
NASA Astrophysics Data System (ADS)
Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang
2017-06-01
The study of reservoir deterministic optimal operation can improve the utilization rate of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output errors and hinder the implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, different assurance rates of schedules can be derived by the model, presenting more flexible options for decision makers, and the highest assurance rate can reach 99%, much higher than the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
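The VaR of a sample of output-error losses can be sketched as a simple order-statistic estimate; this is a generic textbook estimator, not necessarily the paper's exact formulation (which couples VaR with extreme value theory):

```python
import math

def empirical_var(losses, alpha=0.99):
    """Empirical Value at Risk at confidence level `alpha`: the smallest
    sample loss that is exceeded with probability at most (1 - alpha),
    i.e. the alpha-quantile of the sorted losses (rounded up)."""
    ordered = sorted(losses)
    k = max(0, min(len(ordered) - 1, math.ceil(alpha * len(ordered)) - 1))
    return ordered[k]
```

An EVT refinement would instead fit a generalized Pareto distribution to the tail of the sorted losses, which matters precisely in the 99%-assurance regime reported above, where few sample points remain.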
In-use catalyst surface area and its relation to HC conversion efficiency and FTP emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donahue, K.S.; Sabourin, M.A.; Larson, R.E.
1986-01-01
Surface area data, steady-state hydrocarbon conversion efficiency data, and hydrocarbon emissions results have been determined for catalysts collected by the U.S. Environmental Protection Agency from properly maintained 1981 and 1982 model year vehicles. Catalysts covered in this study were limited to those with three-way-plus-oxidation monolith technologies. Catalyst surface areas were measured using the BET method, conversion efficiencies were measured on an exhaust gas generator, and emissions results were determined using the Urban Driving Schedule of the Federal Test Procedure. Results indicate that the correlation of catalyst surface area data with hydrocarbon conversion efficiency data and hydrocarbon emissions results is significant for the sample studied.
Centralized Routing and Scheduling Using Multi-Channel System Single Transceiver in 802.16d
NASA Astrophysics Data System (ADS)
Al-Hemyari, A.; Noordin, N. K.; Ng, Chee Kyun; Ismail, A.; Khatun, S.
This paper proposes a cross-layer optimized strategy that reduces the effect of interference from neighboring nodes within mesh networks. This cross-layer design relies on the routing information in the network layer and the scheduling table in the medium access control (MAC) layer. A proposed routing algorithm in the network layer is exploited to find the best route for all subscriber stations (SS). Also, a proposed centralized scheduling algorithm in the MAC layer is exploited to assign a time slot for each possible node transmission. The cross-layer optimized strategy uses multi-channel single-transceiver and single-channel single-transceiver systems for WiMAX mesh networks (WMNs). Each node in a WMN has a transceiver that can be tuned to any available channel, eliminating secondary interference. Among the parameters considered in the performance analysis are interference from neighboring nodes, hop count to the base station (BS), number of children per node, slot reuse, load balancing, quality of service (QoS), and node identifier (ID). Results show that the proposed algorithms significantly improve the system performance in terms of length of scheduling, channel utilization ratio (CUR), system throughput, and average end-to-end transmission delay.
Multi-layer service function chaining scheduling based on auxiliary graph in IP over optical network
NASA Astrophysics Data System (ADS)
Li, Yixuan; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Software Defined Optical Network (SDON) can be considered an extension of Software Defined Network (SDN) to optical networks. SDON offers a unified control plane and makes the optical network an intelligent transport network with dynamic flexibility and service adaptability. For this reason, a comprehensive optical transmission service, able to achieve service differentiation all the way down to the optical transport layer, can be provided to service function chaining (SFC). IP over optical network, a promising networking architecture for interconnecting data centers, is the most widely used scenario for SFC. In this paper, we offer a flexible and dynamic resource allocation method for diverse SFC service requests in the IP over optical network. To do so, we first propose the concept of the optical service function (OSF) and a multi-layer SFC model. The OSF represents the comprehensive optical transmission service (e.g., multicast, low latency, quality of service, etc.), which can be achieved in the multi-layer SFC model; an OSF can also be considered a special SF. Secondly, we design a resource allocation algorithm, which we call the OSF-oriented optical service scheduling algorithm. It is able to address multi-layer SFC optical service scheduling and provide comprehensive optical transmission service, while meeting multiple optical transmission requirements (e.g., bandwidth, latency, availability). Moreover, the algorithm exploits the concept of an auxiliary graph. Finally, we compare our algorithm with a Baseline algorithm in simulation, and the results show that our algorithm outperforms the Baseline algorithm under low traffic load conditions.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Soichi Noguchi arrives at KSC aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Stephen Robinson arrives at KSC aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Charles Camarda arrives at KSC aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
1980-02-01
automatic data exchange ... 56 There are currently 12 Data Systems available: 1. Integrated Disbursing and Accounting (IDA) 2. Integrated Program Management...construction project progress through the use of a CPM scheduling and progress reporting system. It automatically generates invoices for payment and payment...posted on the project. Water will be drained daily from tanks of vehicle air brake systems. Rigging, hooks, pendants and slings will be examined
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-30
... Continental Shelf (OCS), Western Planning Area (WPA) and Central Planning Area (CPA), Oil and Gas Lease Sales... prepared a Draft EIS on oil and gas lease sales tentatively scheduled in 2012-2017 in the WPA and CPA... scheduled for the WPA and five annual areawide lease sales are scheduled for the CPA. The proposed WPA lease...
NextGen Operations in a Simulated NY Area Airspace
NASA Technical Reports Server (NTRS)
Smith, Nancy M.; Parke, Bonny; Lee, Paul; Homola, Jeff; Brasil, Connie; Buckley, Nathan; Cabrall, Chris; Chevalley, Eric; Lin, Cindy; Morey, Susan;
2013-01-01
A human-in-the-loop simulation conducted in the Airspace Operations Laboratory (AOL) at NASA Ames Research Center explored the feasibility of a Next Generation Air Transportation System (NextGen) solution to address airspace and airport capacity limitations in and around the New York metropolitan area. A week-long study explored the feasibility of a new Optimal Profile Descent (OPD) arrival into the airspace as well as a novel application of a Terminal Area Precision Scheduling and Spacing (TAPSS) enhancement to the Traffic Management Advisor (TMA) arrival scheduling tool to coordinate high volume arrival traffic to intersecting runways. In the simulation, four en route sector controllers and four terminal radar approach control (TRACON) controllers managed traffic inbound to Newark International Airport's primary runway, 22L, and its intersecting overflow runway, 11. TAPSS was used to generate independent arrival schedules for each runway and a traffic management coordinator participant adjusted the arrival schedule for each runway 11 aircraft to follow one of the 22L aircraft. TAPSS also provided controller-managed spacing tools (slot markers with speed advisories and timelines) to assist the TRACON controllers in managing the arrivals that were descending on OPDs. Results showed that the tools significantly decreased the occurrence of runway violations (potential go-arounds) when compared with a Baseline condition with no tools. Further, the combined use of the tools with the new OPD produced a peak arrival rate of over 65 aircraft per hour using instrument flight rules (IFR), exceeding the current maximum arrival rate at Newark Liberty International Airport (EWR) of 52 per hour under visual flight rules (VFR). Although the participants rated the workload as relatively low and acceptable both with and without the tools, they rated the tools as reducing their workload further. 
Safety and coordination were rated by most participants as acceptable in both conditions, although the TRACON Runway Coordinator (TRC) rated neither as acceptable in the Baseline condition. Regarding the role of the TRC, the two TRACON controllers handling the 11 arrivals indicated that the TRC was very much needed in the Baseline condition without tools, but not needed in the condition with tools. This indicates that the tools were providing much of the sequencing and spacing information that the TRC had supplied in the Baseline condition.
Next-generation pushbroom filter radiometers for remote sensing
NASA Astrophysics Data System (ADS)
Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.
2012-09-01
Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine focal planes into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. The pushbroom architecture offers inherently better radiometric sensitivity and significantly lower payload mass, power, and volume than previous-generation scanning technologies. However, the architecture creates challenges in achieving the required radiometric accuracy. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high-quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the next-generation Operational Land Imager (OLI) payload for the Landsat Data Continuity Mission (LDCM). Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper Plus (ETM+) whiskbroom technology to the modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities enabled production of the innovative next-generation OLI pushbroom filter radiometer, which meets challenging radiometric accuracy and calibration requirements. OLI will extend the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).
Solid images generated from UAVs to analyze areas affected by rock falls
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Manconi, Andrea; Allasia, Paolo; Baldo, Marco
2015-04-01
The study of areas affected by rock falls is usually based on recognizing the principal joint families and localizing potentially unstable sectors. This requires the acquisition of field data, but the areas are often barely accessible and field inspections can be very dangerous. For this reason, remote sensing systems are a suitable alternative. Recently, Unmanned Aerial Vehicles (UAVs) have been proposed as platforms for acquiring the necessary information. Indeed, mini UAVs (in particular in the multi-rotor configuration) provide the versatility to acquire, from different points of view, a large number of high-resolution optical images, which can be used to generate high-resolution digital models of the study area. Given the recent development of powerful, user-friendly software and algorithms to process images acquired from UAVs, there is now a need to establish robust methodologies and best-practice guidelines for the correct use of 3D models generated in rock fall scenarios. In this work, we show how multi-rotor UAVs can be used to survey areas affected by rock falls during real emergencies. We present two examples of application in northwestern Italy: the San Germano rock fall (Piemonte region) and the Moneglia rock fall (Liguria region). We acquired data from both terrestrial LiDAR and UAVs in order to compare digital elevation models generated with different remote sensing approaches. We evaluated the volume of the rock falls, identified the potentially unstable areas, and recognized the main joint families. The use of UAVs for this purpose is not yet widespread, but the approach is probably the best solution for the structural investigation of large rock walls. We propose a methodology that jointly considers the Structure from Motion (SfM) approach for the generation of 3D solid images and a geotechnical analysis for the identification of joint families and potential failure planes.
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Baldwin, John
2007-01-01
TIGRAS is client-side software that provides tracking-station equipment planning, allocation, and scheduling services to the DSMS (Deep Space Mission System). TIGRAS provides functions for schedulers to coordinate DSN (Deep Space Network) antenna usage time and to resolve resource usage conflicts among tracking passes, antenna calibrations, maintenance, and system testing activities. TIGRAS provides a fully integrated multi-pane graphical user interface for all scheduling operations, a great improvement over the legacy VAX VMS command-line user interface. TIGRAS can handle all aspects of DSN resource scheduling, from long range to real time: it supports NASA mission operations' DSN tracking-station equipment request processes from long-range load forecasts (ten years or longer) through mid-range and short-range planning to real-time (less than one week) emergency tracking plan changes. TIGRAS can be operated by NASA mission operations worldwide to make schedule requests for DSN station equipment.
The LHCb software and computing upgrade for Run 3: opportunities and challenges
NASA Astrophysics Data System (ADS)
Bozzi, C.; Roiser, S.; LHCb Collaboration
2017-10-01
The LHCb detector will be upgraded for LHC Run 3 and will be read out at 30 MHz, corresponding to the full inelastic collision rate, with major implications for the full software trigger and offline computing. If the current computing model and software framework are kept, the data storage capacity and computing power required to process data at this rate, and to generate and reconstruct equivalent samples of simulated events, will exceed the current capacity by at least one order of magnitude. A redesign of the software framework, including scheduling, the event model, the detector description, and the conditions database, is needed to fully exploit the computing power of multi- and many-core architectures and coprocessors. Data processing and the analysis model will also change towards an early streaming of different data types, in order to limit storage resources, with further implications for data analysis workflows. Fast simulation options will make it possible to obtain a reasonable parameterization of the detector response in considerably less computing time. Finally, the upgrade of LHCb will be a good opportunity to review and implement changes in the domains of software design, testing and review, and analysis workflow and preservation. In this contribution, activities and recent results in all the above areas are presented.
Scheduling algorithms for automatic control systems for technological processes
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.
2017-01-01
The wide use of automatic process control systems, together with high-performance systems containing a number of computers (processors), creates opportunities for high-quality, fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and delivery of results. To achieve the best times, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. This paper considers some of the basic task scheduling methods for multi-machine process control systems, brings to light their advantages and disadvantages, and makes some recommendations for their use when developing software for automatic process control systems.
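The abstract surveys multiprocessor task-scheduling methods without naming a specific one; a representative baseline in this family is the classic Longest-Processing-Time-first (LPT) list-scheduling heuristic for minimizing makespan on identical machines. A minimal sketch (names ours):

```python
import heapq

def lpt_schedule(tasks, n_machines):
    """Longest-Processing-Time-first: sort tasks by decreasing duration,
    always assign the next task to the currently least-loaded machine."""
    loads = [(0, m) for m in range(n_machines)]  # (load, machine id)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for task, duration in sorted(tasks, key=lambda t: -t[1]):
        load, m = heapq.heappop(loads)
        assignment[m].append(task)
        heapq.heappush(loads, (load + duration, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

LPT is a simple greedy with a well-known worst-case guarantee (makespan at most 4/3 − 1/(3m) of optimal), which makes it a common yardstick when comparing the more elaborate schedulers the paper discusses.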
Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji
2017-01-01
High data transmission efficiency is a key requirement for an ultrasonic phased array with multiple groups of ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors and reads them out in a bandwidth-sharing manner; on this basis, an optimal length ratio across all the FIFOs is derived that allows read operations to be switched among the FIFOs without waiting for time slots. The algorithm therefore raises the utilization of the read bandwidth and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm were substantiated by implementing it in field-programmable gate array (FPGA) technology, enhancing the bandwidth utilization and the real-time performance of the ultrasonic phased array. PMID:29035345
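The abstract does not give the optimal length ratio itself; one plausible reading is that each FIFO's depth is sized in proportion to its fill rate, so a reader that shares the output bandwidth round-robin never stalls on an empty FIFO before the next switch. A minimal sketch under that assumption (the function name and the proportional rule are ours, not the paper's):

```python
def fifo_lengths(write_rates, total_depth):
    """Size each FIFO proportionally to its write (fill) rate so that a
    round-robin reader sharing the output bandwidth finds data ready in
    every FIFO and no FIFO overflows before it is serviced."""
    total = sum(write_rates)
    return [round(total_depth * rate / total) for rate in write_rates]
```

For example, three sensor groups filling at relative rates 1:1:2 would split a 1024-word buffer budget as 256:256:512.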
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-23
...--Open Teleconference and/or Web Conference Meetings AGENCY: Rural Housing Service, USDA. ACTION: Notice. SUMMARY: This Notice announces a series of teleconference and/or Web conference meetings regarding the USDA Multi-Family Housing Program. The teleconference and/or Web conference meetings will be scheduled...
Barriers to HIV Medication Adherence as a Function of Regimen Simplification.
Chen, Yiyun; Chen, Kun; Kalichman, Seth C
2017-02-01
Barriers to HIV medication adherence may differ by levels of dosing schedules. The current study examined adherence barriers associated with medication regimen complexity and simplification. A total of 755 people living with HIV currently taking anti-retroviral therapy were recruited from community services in Atlanta, Georgia. Participants completed audio-computer-assisted self-interviews that assessed demographic and behavioral characteristics, provided their HIV viral load obtained from their health care provider, and completed unannounced phone-based pill counts to monitor medication adherence over 1 month. Participants taking a single-tablet regimen (STR) were more likely to be adherent than those taking multi-tablets in a single-dose regimen (single-dose MTR) and those taking multi-tablets in a multi-dose regimen (multi-dose MTR), with no difference between the latter two. Regarding barriers to adherence, individuals taking STR were least likely to report scheduling issues and confusion as reasons for missing doses, but they were equally likely to report multiple lifestyle and logistical barriers to adherence. Adherence interventions may need tailoring to address barriers that are specific to dosing regimens.
Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P
2018-02-20
The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city under real-time traffic conditions. For 10/33 (30%) cities, the optimal IR site based on ETT differed from the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour (p < 0.001). Using a custom Google Maps application to schedule outpatients for IR procedures can effectively reduce patient travel time when more than one location providing IR procedures is available within the same hospital system.
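The core selection logic reduces to a minimization over candidate sites under a chosen metric. The sketch below uses a pluggable `metric` callable as a stand-in for the real-time routing API call; all names and the toy data are hypothetical, chosen to show how ETT and ETD can disagree, as they did for 30% of the cities in the study.

```python
def pick_site(sites, origin, metric):
    """Return the candidate site minimizing the given metric (ETT or ETD)
    from the patient's origin. `metric(origin, site)` stands in for a
    routing-API lookup returning minutes or miles."""
    return min(sites, key=lambda site: metric(origin, site))
```

With toy lookup tables, the nearest site by distance need not be the fastest by time: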
Optimal domain decomposition strategies
NASA Technical Reports Server (NTRS)
Yoon, Yonghyun; Soni, Bharat K.
1995-01-01
The primary interest of the authors is in the area of grid generation, in particular, optimal domain decomposition about realistic configurations. A grid generation procedure with optimal blocking strategies has been developed to generate multi-block grids for a circular-to-rectangular transition duct. The focus of this study is the domain decomposition which optimizes solution algorithm/block compatibility based on geometrical complexities as well as the physical characteristics of flow field. The progress realized in this study is summarized in this paper.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for an efficient and cost-effective production system. However, setup times exist between groups, which need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed; in practice, however, the actual processing time of jobs may be reduced by a "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of a genetic algorithm, is proposed for this problem to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGA-II), and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGA-II, and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
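HPABC's internals are not given in the abstract, but the Pareto-optimality criterion it returns solutions under is standard. A minimal sketch of non-dominated filtering over (makespan, total weighted tardiness) pairs, both minimized (names ours):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (makespan, tardiness) points."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]
```

Algorithms such as HPABC, SPEA2, and NSGA-II are then compared on how well their returned sets approximate this front in diversity and quality.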
Building large area CZT imaging detectors for a wide-field hard X-ray telescope—ProtoEXIST1
NASA Astrophysics Data System (ADS)
Hong, J.; Allen, B.; Grindlay, J.; Chammas, N.; Barthelemy, S.; Baker, R.; Gehrels, N.; Nelson, K. E.; Labov, S.; Collins, J.; Cook, W. R.; McLean, R.; Harrison, F.
2009-07-01
We have constructed a moderately large area (32 cm²), fine pixel (2.5 mm pixel, 5 mm thick) CZT imaging detector which constitutes the first section of a detector module (256 cm²) developed for a balloon-borne wide-field hard X-ray telescope, ProtoEXIST1. ProtoEXIST1 is a prototype for the High Energy Telescope (HET) in the Energetic X-ray Imaging Survey Telescope (EXIST), a next generation space-borne multi-wavelength telescope. We have constructed a large (nearly gapless) detector plane through a modularization scheme by tiling a large number of 2 cm × 2 cm CZT crystals. Our innovative packaging method is ideal for many applications such as coded-aperture imaging, where a large, continuous detector plane is desirable for optimal performance. Currently we have been able to achieve an energy resolution of 3.2 keV (FWHM) at 59.6 keV on average, which is exceptional considering the moderate pixel size and the number of detectors in simultaneous operation. We expect to complete two modules (512 cm²) within the next few months as more CZT becomes available. We plan to test the performance of these detectors in a near-space environment in a series of high-altitude balloon flights, the first of which is scheduled for Fall 2009. These detector modules are the first in a series of progressively more sophisticated detector units and packaging schemes planned for ProtoEXIST2 & 3, which will demonstrate the technology required for the advanced CZT imaging detectors (0.6 mm pixel, 4.5 m² area) required in EXIST/HET.
NASA Astrophysics Data System (ADS)
Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako
2017-09-01
Hyperspectral Imager Suite (HISUI) is a future Japanese space-borne hyperspectral instrument being developed by the Ministry of Economy, Trade, and Industry (METI). HISUI will be launched in 2019 or later onboard the International Space Station (ISS). HISUI has 185 spectral bands from 0.4 to 2.5 μm, a 20 m by 30 m spatial resolution, and a swath of 20 km. Although the swath is thus limited, observations over continental-scale areas are requested within HISUI's three-year mission lifetime. We are therefore developing a scheduling algorithm to generate effective observation plans. The HISUI scheduling algorithm generates observation plans automatically based on the platform orbit, observation area maps (termed DARs, "Data Acquisition Requests", in the HISUI project), their priorities, and the available resources and limitations of the HISUI system, such as instrument operation time per orbit and data transfer capability. We also need to set adequate DARs before HISUI observations start, because years of observations are needed to cover continental-scale areas, and the DARs are difficult to change after the mission has started. To address these issues, we have developed an observation simulator. The simulator's critical inputs are the DARs, the ISS orbit, HISUI's limitations on observation minutes per orbit and data storage, and past cloud coverage data for the term of HISUI observations (three years). The simulator outputs a coverage map for each day; areas with cloud-free images are accumulated over the observation term of up to three years. We have successfully tested the simulator with tentative DARs and found that it is possible to estimate the coverage of each request over the mission lifetime.
On-board emergent scheduling of autonomous spacecraft payload operations
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1994-01-01
This paper describes a behavioral competency level concerned with emergent scheduling of spacecraft payload operations. The level is part of a multi-level subsumption architecture model for autonomous spacecraft, and it functions as an action selection system for processing spacecraft commands that can be considered 'plans-as-communication'. Several versions of the selection mechanism are described, and their robustness is qualitatively compared.
NASA Technical Reports Server (NTRS)
2000-01-01
The Multi-Purpose Logistics Module (MPLM) Leonardo, seen here, is one of two in the Space Station Processing Facility. The other is named Raffaello. Both MPLMs are components built by Italy for the International Space Station. Leonardo is scheduled on mission STS-102, the 8th flight to the Space Station early in 2001. Raffaello is scheduled on mission STS-100, the 9th flight, later in 2001.
NASA Technical Reports Server (NTRS)
2000-01-01
The Multi-Purpose Logistics Module (MPLM) Raffaello, seen here, is one of two in the Space Station Processing Facility. The other is named Leonardo. Both MPLMs are components built by Italy for the International Space Station. Raffaello is scheduled on mission STS-100, the 9th flight to the Space Station in 2001. Leonardo is scheduled on an earlier mission, STS-102, the 8th flight early in 2001.
Efficient Synthesis of Graph Methods: a Dynamically Scheduled Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minutoli, Marco; Castellana, Vito G.; Tumeo, Antonino
RDF databases naturally map to a graph representation and employ languages, such as SPARQL, that implement queries as graph pattern matching routines. Graph methods exhibit irregular behavior: they present unpredictable, fine-grained data accesses and are synchronization intensive. Graph data structures expose large amounts of dynamic parallelism, but are difficult to partition without generating load unbalance. In this paper, we present a novel architecture to improve the synthesis of graph methods. Our design addresses the issues of these algorithms with two components: a Dynamic Task Scheduler (DTS), which reduces load unbalance and maximizes resource utilization, and a Hierarchical Memory Interface controller (HMI), which provides support for concurrent memory operations on multi-ported/multi-banked shared memories. We evaluate our approach by generating the accelerators for a set of SPARQL queries from the Lehigh University Benchmark (LUBM). We first analyze the load unbalance of these queries, showing that execution time among tasks can differ by orders of magnitude. We then synthesize the queries and compare the performance of the resulting accelerators against the current state of the art. Experimental results show that our solution provides a speedup over the serial implementation close to the theoretical maximum and a speedup of up to 3.45 over a baseline parallel implementation. We conclude our study by exploring the design space to achieve maximum memory channel utilization. The best design used at least three of the four memory channels for more than 90% of the execution time.
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, this paper proposes novel NC-based hybrid scheduling mechanisms for Next-Generation PONs (NG-PONs), covering energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling. First, we design an energy-saving scheme based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and the Optical Network Units (ONUs). Next, we propose intra-ONU and inter-ONU scheduling schemes, which take NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
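The specific NC-PON schemes are not detailed in the abstract, but the primitive they build on is the classic XOR network-coding exchange: instead of sending two ONUs' packets separately, the OLT broadcasts their XOR, and each ONU recovers the other's packet by XOR-ing with its own copy, halving the downstream transmissions for that exchange. A generic illustration, not the paper's scheme:

```python
def xor_code(pkt_a, pkt_b):
    """XOR-combine two equal-length packets. Broadcasting the combined
    packet lets each endpoint that already holds one original recover the
    other by XOR-ing again (XOR is its own inverse)."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))
```

This is the throughput/energy lever NC brings to a PON: one coded downstream slot replaces two uncoded ones whenever traffic is exchanged between ONUs.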
2012-01-01
Background Study-based global health interventions, especially those that are conducted on an international or multi-site basis, frequently require site-specific adaptations in order to (1) respond to socio-cultural differences in risk determinants, (2) make interventions more relevant to target population needs, and (3) recognize 'global health diplomacy' issues. We report on the adaptations development, approval and implementation process from the Project Accept voluntary counseling and testing, community mobilization and post-test support services intervention. Methods We reviewed all relevant documentation collected during the study intervention period (e.g. monthly progress reports; bi-annual steering committee presentations) and conducted a series of semi-structured interviews with project directors and between 12 and 23 field staff at each study site in South Africa, Zimbabwe, Thailand and Tanzania during 2009. Respondents were asked to describe (1) the adaptations development and approval process and (2) the most successful site-specific adaptations from the perspective of facilitating intervention implementation. Results Across sites, proposed adaptations were identified by field staff and submitted to project directors for review on a formally planned basis. The cross-site intervention sub-committee then ensured fidelity to the study protocol before approval. Successfully implemented adaptations included: intervention delivery adaptations (e.g. development of tailored counseling messages for immigrant labour groups in South Africa); political, environmental and infrastructural adaptations (e.g. use of local community centers as VCT venues in Zimbabwe); religious adaptations (e.g. dividing clients by gender in Muslim areas of Tanzania); economic adaptations (e.g. co-provision of income-generating skills classes in Zimbabwe); epidemiological adaptations (e.g. provision of 'youth-friendly' services in South Africa, Zimbabwe and Tanzania); and social adaptations (e.g. modification of terminology to local dialects in Thailand, and adjustment of service delivery schedules to suit seasonal and daily work schedules across sites). Conclusions Adaptation selection, development and approval during multi-site global health research studies should be a planned process that maintains fidelity to the study protocol. The successful implementation of appropriate site-specific adaptations may have important implications for intervention implementation, from both a service uptake and a global health diplomacy perspective. PMID:22716131
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
ERIC Educational Resources Information Center
Bancroft, Stacie L.; Bourret, Jason C.
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
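A random-ratio schedule of the kind described, where reinforcement is available at a constant probability per response, takes only a few lines to generate. This Python sketch (names ours) mirrors what the article implements as Excel macros:

```python
import random

def random_ratio_schedule(p, n_responses, seed=None):
    """Random-ratio (RR) schedule: each response is reinforced with
    constant probability p, so the number of responses per reinforcer is
    geometrically distributed with mean ratio 1/p (p=0.1 approximates an
    RR10 schedule)."""
    rng = random.Random(seed)  # seeded for reproducible schedules
    return [rng.random() < p for _ in range(n_responses)]
```

A variable-ratio schedule differs in that the required response counts are drawn from a fixed list around a target mean rather than emerging from a constant per-response probability.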
Scheduling multimedia services in cloud computing environment
NASA Astrophysics Data System (ADS)
Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing
2018-02-01
Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve the security level and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services across multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model, including both subjective trust and objective trust, to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained; according to the QoS attributes, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed, considering the deadline, cost and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
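The abstract outlines a Bayesian subjective trust estimate blended with a QoS-based objective score. A minimal sketch under common assumptions: the Beta-distribution expectation (s+1)/(s+f+2) for subjective trust and the linear mixing weight `alpha` are conventional choices in the trust literature, not necessarily the paper's exact formulation.

```python
def subjective_trust(successes, failures):
    """Bayesian (Beta-distribution) expectation of future good behaviour
    estimated from a provider's direct interaction history."""
    return (successes + 1) / (successes + failures + 2)

def objective_trust(qos, weights):
    """Weighted score over QoS attributes normalized to [0, 1]."""
    return sum(weights[k] * qos[k] for k in weights) / sum(weights.values())

def trust_degree(successes, failures, qos, weights, alpha=0.5):
    """Blend subjective and objective trust; alpha is an assumed mixing weight."""
    return (alpha * subjective_trust(successes, failures)
            + (1 - alpha) * objective_trust(qos, weights))
```

A scheduler can then filter candidate providers by `trust_degree(...) >= threshold` before applying its deadline and cost heuristics.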
Multi-Variant/Capability Next Generation Troop Seat (M-V/C NGTS)
2009-01-01
John Plaga, Work Unit Manager; Mark M. Hoffman, Deputy Chief, Biomechanics Branch, Biosciences and Protection Division, Human...
Short-term scheduling of an open-pit mine with multiple objectives
NASA Astrophysics Data System (ADS)
Blom, Michelle; Pearce, Adrian R.; Stuckey, Peter J.
2017-05-01
This article presents a novel algorithm for the generation of multiple short-term production schedules for an open-pit mine, in which several objectives, of varying priority, characterize the quality of each solution. A short-term schedule selects regions of a mine site, known as 'blocks', to be extracted in each week of a planning horizon (typically spanning 13 weeks). Existing tools for constructing these schedules use greedy heuristics, with little optimization. To construct a single schedule in which infrastructure is sufficiently utilized, with production grades consistently close to a desired target, a planner must often run these heuristics many times, adjusting parameters after each iteration. A planner's intuition and experience can evaluate the relative quality and mineability of different schedules in a way that is difficult to automate. Of interest to a short-term planner is the generation of multiple schedules, extracting available ore and waste in varying sequences, which can then be manually compared. This article presents a tool in which multiple, diverse, short-term schedules are constructed, meeting a range of common objectives without the need for iterative parameter adjustment.
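As a toy illustration of grade-targeted block selection (not the article's algorithm, which searches for multiple diverse schedules across several prioritized objectives), a greedy one-week scheduler might pick blocks whose grades lie closest to the target until weekly tonnage capacity is reached. All names and data shapes are hypothetical:

```python
def schedule_week(blocks, capacity, target_grade):
    """Greedily choose blocks (name, tonnage, grade) whose grades are
    closest to the target, without exceeding the weekly tonnage capacity;
    return the chosen block names and the blended grade."""
    chosen, tons, metal = [], 0.0, 0.0
    for name, tonnage, grade in sorted(blocks,
                                       key=lambda b: abs(b[2] - target_grade)):
        if tons + tonnage <= capacity:
            chosen.append(name)
            tons += tonnage
            metal += tonnage * grade
    blend = metal / tons if tons else 0.0
    return chosen, blend
```

This is exactly the kind of single-pass greedy heuristic the article argues is insufficient on its own, since it yields one schedule with no control over utilization trade-offs or diversity.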
Computer-aided resource planning and scheduling for radiological services
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.
1996-05-01
There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS, and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information-system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS, and RIS implementation is presented.
Projecting Future Scheduled Airline Demand, Schedules and NGATS Benefits Using TSAM
NASA Technical Reports Server (NTRS)
Dollyhigh, Samuel; Smith, Jeremy; Viken, Jeff; Trani, Antonio; Baik, Hojong; Hinze, Nickolas; Ashiabor, Senanu
2006-01-01
The Transportation Systems Analysis Model (TSAM), developed by Virginia Tech's Air Transportation Systems Lab and NASA Langley, can provide detailed analysis of the effects on the demand for air travel of a full range of NASA and FAA aviation projects. TSAM has been used to project the passenger demand for very light jet (VLJ) air taxi service, scheduled airline demand growth and future schedules, Next Generation Air Transportation System (NGATS) benefits, and future passenger revenues for the Airport and Airway Trust Fund. TSAM can project the resulting demand when new vehicles and/or technologies are inserted into the long-distance (100 or more miles one-way) transportation system, as well as changes in demand resulting from fare yield increases or decreases, airport transit times, scheduled flight times, ticket taxes, reductions or increases in flight delays, and so on. TSAM models all long-distance travel in the contiguous U.S. and determines the mode choice of the traveler based on detailed trip costs, travel time, schedule frequency, purpose of the trip (business or non-business), and household income level of the traveler. Demand is modeled at the county level, with an airport choice module providing up to three airports as part of the mode choice. Future enplanements at airports can be projected for different scenarios. A Fratar algorithm and a schedule generator are applied to generate future flight schedules. This paper presents the application of TSAM to modeling future scheduled air passenger demand and the resulting airline schedules, the impact of NGATS goals and objectives on passenger demand, along with projections of passenger fee receipts for several scenarios for the FAA Airport and Airway Trust Fund.
Exact and Heuristic Algorithms for Runway Scheduling
NASA Technical Reports Server (NTRS)
Malik, Waqar A.; Jung, Yoon C.
2016-01-01
This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations, and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable in a real-time environment due to large computation times for moderately sized problems. We next propose a second algorithm that uses heuristics to restrict the search space of the DP-based algorithm. A third algorithm, based on a combination of insertion and local search (ILS) heuristics, is then presented. Simulations conducted for the east side of Dallas/Fort Worth International Airport allow comparison of the three proposed algorithms and indicate that the ILS algorithm performs favorably in its ability to find efficient solutions and in its computation times.
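The insertion-plus-local-search idea can be sketched as follows; the weight classes, separation values, and makespan objective here are illustrative assumptions, not the paper's actual separation data or cost function.

```python
# hypothetical wake-vortex separations (seconds): (leader, follower)
# classes 'H' = heavy, 'S' = small
SEP = {('H', 'H'): 96, ('H', 'S'): 120, ('S', 'H'): 60, ('S', 'S'): 60}

def schedule_times(order, earliest, cls):
    """Assign runway times to a fixed sequence, honoring each
    aircraft's earliest time and leader/follower separations."""
    times, t, prev = [], None, None
    for i in order:
        t = earliest[i] if prev is None else max(earliest[i],
                                                 t + SEP[(cls[prev], cls[i])])
        times.append(t)
        prev = i
    return times

def ils_schedule(earliest, cls):
    """Earliest-time insertion order, improved by pairwise-swap
    local search on the makespan (time of the last operation)."""
    order = sorted(range(len(earliest)), key=lambda i: earliest[i])
    best = schedule_times(order, earliest, cls)[-1]
    improved = True
    while improved:
        improved = False
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                cand = order[:]
                cand[i], cand[j] = cand[j], cand[i]
                m = schedule_times(cand, earliest, cls)[-1]
                if m < best:
                    order, best, improved = cand, m, True
    return order, best
```

For three aircraft released at 0, 10, and 20 s with classes H, S, S, the local search learns to sequence the heavy aircraft last, avoiding the large heavy-to-small separation.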
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gali, Emmanuel; Eidenbenz, Stephan; Mniszewski, Sue
The United States Department of Homeland Security aims to model, simulate, and analyze critical infrastructures and their interdependencies across multiple sectors such as electric power, telecommunications, water distribution, and transportation. We introduce ActivitySim, an activity simulator for a population of millions of individual agents, each characterized by a set of demographic attributes based on US census data. ActivitySim generates a daily schedule for each agent consisting of a sequence of activities, such as sleeping, shopping, and working, each scheduled at a geographic location, such as a business or private residence, that is appropriate for the activity type and for the personal situation of the agent. ActivitySim has been developed as part of a larger effort to understand the interdependencies among national infrastructure networks and their demand profiles that emerge from the different activities of individuals in baseline scenarios as well as emergency scenarios, such as hurricane evacuations. We present the scalable software engineering principles underlying ActivitySim, the socio-technical modeling paradigms that drive the activity generation, and proof-of-principle results for a scenario covering 2.6 M agents in the Twin Cities, MN area.
Autonomously generating operations sequences for a Mars Rover using AI-based planning
NASA Technical Reports Server (NTRS)
Sherwood, Rob; Mishkin, Andrew; Estlin, Tara; Chien, Steve; Backes, Paul; Cooper, Brian; Maxwell, Scott; Rabideau, Gregg
2001-01-01
This paper discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This Artificial Intelligence (AI) based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Soichi Noguchi is happy to be back at KSC after arriving aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Commander Eileen Collins is pleased to be back at KSC after arriving aboard a T-38 jet aircraft. She and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver to the Space Station the external stowage platform and the Multi-Purpose Logistics Module with supplies and equipment.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Pilot Jim Kelly is pleased to be back at KSC after arriving aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Andrew Thomas is pleased to be back at KSC after arriving aboard a T-38 jet aircraft. He and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver to the Space Station the external stowage platform and the Multi-Purpose Logistics Module with supplies and equipment.
2004-03-05
KENNEDY SPACE CENTER, FLA. - STS-114 Mission Specialist Wendy Lawrence is pleased to be back at KSC after arriving aboard a T-38 jet aircraft. She and other crew members are at the Center for familiarization activities with equipment. The mission is Logistics Flight 1, scheduled to deliver the Multi-Purpose Logistics Module carrying supplies and equipment to the Space Station and the external stowage platform.
Propagating Resource Constraints Using Mutual Exclusion Reasoning
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Sanchez, Romeo; Do, Minh B.; Clancy, Daniel (Technical Monitor)
2001-01-01
One of the most recent techniques for propagating resource constraints in constraint-based scheduling is the Energy Constraint. This technique focuses on precedence-based scheduling, where precedence relations are taken into account rather than the absolute positions of activities. Although this technique has proved efficient on discrete unary resources, it provides only loose bounds for jobs using discrete multi-capacity resources. In this paper we show how mutual exclusion reasoning can be used to propagate time bounds for activities using discrete resources. We show, through both examples and empirical study, that our technique based on critical path analysis and mutex reasoning is just as effective on unary resources and more effective on multi-capacity resources.
Multiresource allocation and scheduling for periodic soft real-time applications
NASA Astrophysics Data System (ADS)
Gopalan, Kartik; Chiueh, Tzi-cker
2001-12-01
Real-time applications that utilize multiple system resources, such as CPU, disks, and network links, require coordinated scheduling of these resources in order to meet their end-to-end performance requirements. Most state-of-the-art operating systems support independent resource allocation and deadline-driven scheduling but lack coordination among multiple heterogeneous resources. This paper describes the design and implementation of an Integrated Real-time Resource Scheduler (IRS) that performs coordinated allocation and scheduling of multiple heterogeneous resources on the same machine for periodic soft real-time applications. The principal feature of IRS is a heuristic multi-resource allocation algorithm that reserves multiple resources for real-time applications in a manner that can maximize the number of applications admitted into the system in the long run. At run time, a global scheduler dispatches the tasks of a soft real-time application to individual resource schedulers according to the precedence constraints between tasks. The individual resource schedulers, which could be any deadline-based schedulers, can make scheduling decisions locally and yet collectively satisfy a real-time application's performance requirements. The tightness of the overall timing guarantees is ultimately determined by the properties of the individual resource schedulers. However, IRS maximizes overall system resource utilization efficiency by coordinating deadline assignment across the multiple tasks of a soft real-time application.
Online Optimization Method for Operation of Generators in a Micro Grid
NASA Astrophysics Data System (ADS)
Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi
Recently, many studies and developments on distributed generators, such as photovoltaic generation systems, wind turbine generation systems, and fuel cells, have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads, and a storage battery, is expected to become one of the new operational forms of distributed generation. However, since precipitous load fluctuation occurs in a micro grid because of its smaller capacity compared with the conventional power system, high-accuracy load forecasting and a control scheme to balance supply and demand are needed. Namely, it is necessary to improve the precision of micro grid operation by observing load fluctuation and correcting the start-stop schedule and output of generators online. But it is not easy to determine the operation schedule of each generator in a short time, because the problem of determining the start-up, shut-down, and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after picking up all unit commitment patterns of each generator satisfying the minimum up time and minimum down time constraints by the enumeration method, the optimal schedule and output of the generators are determined under the other operational constraints using PSO. Numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
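The second stage above, PSO over continuous generator outputs for a fixed commitment pattern, can be sketched as below; the inertia and acceleration coefficients, the penalty weight on the supply-demand mismatch, and the cost curves in the usage example are illustrative assumptions, not the authors' settings.

```python
import random

def pso_dispatch(cost_fns, p_min, p_max, demand,
                 n_particles=30, iters=200, seed=0):
    """Particle swarm search for generator outputs minimizing total
    fuel cost, with a quadratic penalty for supply-demand mismatch."""
    rng = random.Random(seed)
    dim = len(cost_fns)

    def fitness(p):
        cost = sum(f(x) for f, x in zip(cost_fns, p))
        return cost + 1e4 * (sum(p) - demand) ** 2  # balance penalty

    # initialize particles uniformly inside the output limits
    pos = [[rng.uniform(p_min[d], p_max[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # clip to the generator's output limits
                pos[i][d] = min(p_max[d], max(p_min[d], pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For two hypothetical quadratic cost curves and a 120 MW demand, the swarm converges to outputs whose sum closely matches the demand while balancing the two marginal costs.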
NASA Technical Reports Server (NTRS)
Gipson, John
2010-01-01
In this note I give an overview of the VLBI scheduling software sked. I describe some of the algorithms used in automatic scheduling and some sked commands which have been introduced at users' requests. I also give a cookbook for generating some schedules.
Low-level radwaste storage facility at Hope Creek and Salem Generating Stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyen, L.C.; Lee, K.; Bravo, R.
Following the January 1, 1993, closure of the radwaste disposal facilities at Beatty, Nevada, and Richland, Washington (to waste generators outside the compact), only Barnwell, South Carolina, is open to waste generators in most states. Barnwell is scheduled to stay open to waste generators outside the Southeast Compact until June 30, 1994. Continued delays in opening regional radwaste disposal facilities have forced most nuclear utilities to consider on-site storage of low-level radwaste. Public Service Electric and Gas Company (PSE&G) considered several different radwaste storage options before selecting the design based on the steel-frame and metal-siding building design described in the Electric Power Research Institute's (EPRI's) TR-100298 Vol. 2, Project 3800 report. The storage facility will accommodate waste generated by Salem units 1 and 2 and Hope Creek unit 1 for a 5-yr period and will be located within their common protected area.
Design and Scheduling of Microgrids using Benders Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Ayyanar, Raja
2016-11-21
The laterals in a distribution feeder with relatively high PV generation compared to the load can be operated as microgrids to achieve reliability, power quality, and economic benefits. However, renewable resources are intermittent and stochastic in nature. A novel approach for sizing and scheduling an energy storage system and a microturbine for reliable operation of microgrids is proposed. The size and schedule of the energy storage system and microturbine are determined using Benders decomposition, considering PV generation as a stochastic resource.
Wind-Friendly Flexible Ramping Product Design in Multi-Timescale Power System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Mingjian; Zhang, Jie; Wu, Hongyu
With increasing wind power penetration in the electricity grid, system operators are recognizing the need for additional flexibility, and some are implementing new ramping products as a type of ancillary service. However, wind is generally thought of as causing the need for ramping services, not as being a potential source for the service. In this paper, a multi-timescale unit commitment and economic dispatch model is developed to consider the wind power ramping product (WPRP). An optimized swinging door algorithm with dynamic programming is applied to identify and forecast wind power ramps (WPRs). Designed around the positive characteristics of WPRs, the WPRP is then integrated into the multi-timescale dispatch model, which considers new objective functions, ramping capacity limits, active power limits, and flexible ramping requirements. Numerical simulations on the modified IEEE 118-bus system show the potential effectiveness of WPRP in increasing the economic efficiency of power system operations with high levels of wind power penetration. It is found that WPRP not only reduces the production cost by using fewer ramping reserves scheduled by conventional generators, but also possibly enhances the reliability of power system operations. Moreover, wind power forecasts play an important role in providing high-quality WPRP service.
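The ramp-identification step can be illustrated with a simplified chord-deviation variant of swinging-door segmentation; the tolerance, the ramp-rate threshold, and the exact door test below are assumptions for illustration, not the optimized algorithm of the paper.

```python
def swinging_door_segments(series, epsilon):
    """Segment a time series: a segment grows while every interior
    point stays within +/- epsilon of the chord from the segment's
    start to the current point (a simplified swinging-door rule)."""
    segments, start = [], 0
    for end in range(2, len(series) + 1):
        i, j = start, end - 1
        if j - i < 2:
            continue
        slope = (series[j] - series[i]) / (j - i)
        ok = all(abs(series[i] + slope * (k - i) - series[k]) <= epsilon
                 for k in range(i + 1, j))
        if not ok:
            segments.append((start, j - 1))
            start = j - 1
    segments.append((start, len(series) - 1))
    return segments

def ramps(series, epsilon, min_rate):
    """Report segments whose average rate of change reaches min_rate
    (candidate wind power ramps), as (start, end, rate) tuples."""
    out = []
    for i, j in swinging_door_segments(series, epsilon):
        rate = (series[j] - series[i]) / max(j - i, 1)
        if abs(rate) >= min_rate:
            out.append((i, j, rate))
    return out
```

On a toy power trace that is flat, climbs steadily, then flattens again, only the climbing segment is reported as a ramp.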
2011-11-17
CAPE CANAVERAL, Fla. -- In the Vertical Integration Facility at Space Launch Complex-41 on Cape Canaveral Air Force Station, spacecraft technicians install the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on the Curiosity rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
NASA Astrophysics Data System (ADS)
Li, Pai; Huang, Yuehui; Jia, Yanbing; Liu, Jichun; Niu, Yi
2018-02-01
This article studies generation investment decisions in the context of global energy interconnection. A generation investment decision model considering multi-agent benefits is proposed. Under global energy interconnection, generation investors at different clean energy bases not only compete with other investors but also face being selected by the central load areas; therefore, a generation investment decision model that considers multi-agent benefits comes closer to meeting these interest demands. Using game theory, a complete-information game model is adopted to solve for the strategies of the different agents in the equilibrium state.
GLAST LAT Multiwavelength Planning
NASA Technical Reports Server (NTRS)
Thompson, D. J.
2004-01-01
Because gamma-ray astrophysics profits in powerful ways from multi-wavelength studies, the GLAST Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch. Many aspects of this program are of direct interest to observers using VERITAS and other atmospheric Cerenkov telescopes, whose capabilities complement those of GLAST. This talk will describe some of the current developmental concepts for GLAST LAT multiwavelength work, including release of data for transient sources, nearly-continuous monitoring of selected time-variable sources, pulsar timing, follow-on observations for source identification, coordinated blazar campaigns, and cross-calibration with other high-energy telescopes. Although few details are firm at this stage of preparation for GLAST, the LAT Collaboration looks forward to cooperation with a broad cross-section of the multiwavelength community. The GLAST Large Area Telescope is an international effort, with U.S. funding provided by the Department of Energy and NASA.
Scheduling techniques in the Request Oriented Scheduling Engine (ROSE)
NASA Technical Reports Server (NTRS)
Zoch, David R.
1991-01-01
Scheduling techniques in ROSE are presented in the form of viewgraphs. The following subject areas are covered: agenda; ROSE summary and history; NCC-ROSE task goals; accomplishments; ROSE timeline manager; scheduling concerns; current and ROSE approaches; initial scheduling; BFSSE overview and example; and summary.
Network Control Center User Planning System (NCC UPS)
NASA Astrophysics Data System (ADS)
Dealy, Brian
1991-09-01
NCC UPS is presented in the form of viewgraphs. The following subject areas are covered: UPS overview; NCC UPS role; major NCC UPS functional requirements; interactive user access levels; UPS interfaces; interactive user subsystem; interface navigation; scheduling screen hierarchy; interactive scheduling input panels; autogenerated schedule request panel; schedule data tabular display panel; schedule data graphic display panel; graphic scheduling aid design; and schedule data graphic display.
Network Control Center User Planning System (NCC UPS)
NASA Technical Reports Server (NTRS)
Dealy, Brian
1991-01-01
NCC UPS is presented in the form of viewgraphs. The following subject areas are covered: UPS overview; NCC UPS role; major NCC UPS functional requirements; interactive user access levels; UPS interfaces; interactive user subsystem; interface navigation; scheduling screen hierarchy; interactive scheduling input panels; autogenerated schedule request panel; schedule data tabular display panel; schedule data graphic display panel; graphic scheduling aid design; and schedule data graphic display.
MAG traffic generator study : survey data from Arizona State University
DOT National Transportation Integrated Search
1994-12-01
The Maricopa Association of Governments (MAG) is responsible for the travel demand models used to forecast multi-modal travel behavior in the Phoenix metropolitan area. The main campus of Arizona State University (ASU), located in Tempe, is one of th...
NASA Astrophysics Data System (ADS)
Alanis Pena, Antonio Alejandro
Major commercial electricity generation relies on burning fossil fuels, and coal-fired power plants produce a substantial share of electricity worldwide. The United States has large reserves of coal, and it is cheaply available, making it a good choice for large-scale electricity generation. However, one major problem associated with burning coal is that it produces a group of pollutants known as nitrogen oxides (NOx). NOx are strong oxidizers and contribute to ozone formation and respiratory illness. The Environmental Protection Agency (EPA) regulates the quantity of NOx emitted to the atmosphere in the United States. One technique coal-fired power plants use to reduce NOx emissions is Selective Catalytic Reduction (SCR). SCR uses layers of catalyst that must be added or changed to maintain the required performance. Power plants add or change catalyst layers during temporary shutdowns, but this is expensive. Moreover, many companies operate not a single power plant but a fleet of coal-fired power plants. A fleet of power plants can use EPA cap-and-trade programs to keep outlet NOx emissions below the allowances for the fleet. For that reason, the main aim of this research is to develop SCR management optimization methods that, given a set of scheduled outages for a fleet of power plants, minimize the total cost of the entire fleet while maintaining outlet NOx below the desired target for the fleet. We use a multi-commodity network flow problem (MCFP) that creates edges representing all the SCR catalyst layers for each plant. This MCFP is relaxed because it does not consider the average daily NOx constraint, and it is solved as a binary integer program. After that, we add the average daily NOx constraint to the model with a schedule elimination constraint (MCFPwSEC).
The MCFPwSEC eliminates, one by one, the solutions that do not satisfy the average daily NOx constraint and the worst NH3 slip until it finds a solution that satisfies those requirements. We introduce an algorithm called heuristic MCFPwSEC (HMCFPwSEC). When the HMCFPwSEC algorithm starts, we calculate the cost of the edges by estimating the average NH3 slip level; once we have a schedule that satisfies the average daily NOx constraint and the worst NH3 slip, we update the cost of the edges with the average NH3 slip for that schedule. We repeat this process until we have the solution. Because HMCFPwSEC does not guarantee optimality, we compare its results with SGO, which is optimal, using computational experiments. The results of the two models are very similar; the only important difference is the time required to solve each model. Then, a fleet HMCFPwSEC (FHMCFPwSEC) uses HMCFPwSEC to create the SCR management plan for each plant of the fleet, with a discrete NOx emissions value for each plant. FHMCFPwSEC repeats this process with different discrete levels of NOx emissions for each plant, in order to create a new problem with schedules of different cost and NOx emissions for each plant of the fleet. Finally, FHMCFPwSEC solves this new problem with a binary integer program so as to satisfy a NOx emission value for the fleet that also minimizes the total cost for the fleet, using each plant once. FHMCFPwSEC can work with single-cut and also with multi-cut methods. Because FHMCFPwSEC does not guarantee optimality, we compare its results with fleet SGO (FSGO) using computational experiments. The results of the two models are again very similar, with the time to solve each model as the only important difference. In the experiments, FHMCFPwSEC with multi-cut targeting a new layer always uses less time than FSGO.
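The schedule-elimination pattern described above (solve the relaxed model, test the side constraint, cut the failing solution, repeat) can be sketched generically; exhaustive enumeration stands in for the binary integer program here, and the feasibility and side-constraint checks are placeholders, not the thesis's actual NOx or NH3 slip model.

```python
from itertools import product

def solve_with_elimination(costs, feasible, side_ok):
    """Schedule-elimination loop: repeatedly take the cheapest binary
    schedule satisfying the relaxed model, and reject (cut) any whose
    side constraint (e.g. average daily NOx) fails.

    costs: per-decision costs; feasible/side_ok: predicates on a
    0/1 tuple. Returns (schedule, cost) or None if all are cut.
    """
    eliminated = set()
    while True:
        best = None
        for x in product((0, 1), repeat=len(costs)):
            if x in eliminated or not feasible(x):
                continue
            c = sum(ci * xi for ci, xi in zip(costs, x))
            if best is None or c < best[1]:
                best = (x, c)
        if best is None:
            return None          # every relaxed solution was cut
        if side_ok(best[0]):
            return best          # passes the side constraint
        eliminated.add(best[0])  # the "schedule elimination" cut
```

In the toy run below, the cheapest schedule violates the (placeholder) side constraint and is cut, so the loop returns the second-cheapest one.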
LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis
2016-05-23
LLMapReduce works with several schedulers such as SLURM, Grid Engine and LSF. Keywords—LLMapReduce; map-reduce; performance; scheduler; Grid Engine; ...SLURM; LSF. I. INTRODUCTION Large-scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1]; ...interconnects [6]), high-performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware, ...
Periodic, On-Demand, and User-Specified Information Reconciliation
NASA Technical Reports Server (NTRS)
Kolano, Paul
2007-01-01
Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.
Multi-time scale control of demand flexibility in smart distribution networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu; Myers, Kurt; Bak-Jensen, Birgitte
This study presents a multi-timescale control strategy to deploy the demand flexibility of electric vehicles (EVs) for providing system balancing and local congestion management while simultaneously ensuring economic benefits to participating actors. First, the EV charging problem is investigated from the consumer, aggregator, and grid operator's perspectives. A hierarchical control architecture (HCA) comprising scheduling, coordinative, and adaptive layers is then designed to realize their coordinative goal. This is achieved by integrating a multi-time scale control, which works from day-ahead scheduling up to real-time adaptive control. The performance of the developed method is investigated with high EV penetration in a typical distribution network. The simulation results demonstrate that HCA exploits EV flexibility to solve grid unbalancing and congestion with simultaneous maximization of economic benefits by ensuring EV participation in the day-ahead, balancing, and regulation markets. For the given network configuration and pricing structure, HCA ensures the EV owners get paid up to 5 times the cost they were paying without control.
Multi-time scale control of demand flexibility in smart distribution networks
Bhattarai, Bishnu; Myers, Kurt; Bak-Jensen, Birgitte; ...
2017-01-01
This study presents a multi-timescale control strategy to deploy the demand flexibility of electric vehicles (EVs) for providing system balancing and local congestion management while simultaneously ensuring economic benefits to participating actors. First, the EV charging problem is investigated from the consumer, aggregator, and grid operator's perspectives. A hierarchical control architecture (HCA) comprising scheduling, coordinative, and adaptive layers is then designed to realize their coordinative goal. This is achieved by integrating a multi-time scale control, which works from day-ahead scheduling up to real-time adaptive control. The performance of the developed method is investigated with high EV penetration in a typical distribution network. The simulation results demonstrate that HCA exploits EV flexibility to solve grid unbalancing and congestion with simultaneous maximization of economic benefits by ensuring EV participation in the day-ahead, balancing, and regulation markets. For the given network configuration and pricing structure, HCA ensures the EV owners get paid up to 5 times the cost they were paying without control.
Optimal Energy Management for Microgrids
NASA Astrophysics Data System (ADS)
Zhao, Zheng
The microgrid is a novel concept that has emerged as part of the development of the smart grid. A microgrid is a low-voltage, small-scale network containing both distributed energy resources (DERs) and load demands. Clean energy is encouraged in a microgrid for economic and sustainability reasons. A microgrid has two operational modes, the stand-alone mode and the grid-connected mode. In this research, day-ahead optimal energy management for a microgrid under both operational modes is studied. The objective of the optimization model is to minimize fuel cost, improve energy utilization efficiency, and reduce gas emissions by scheduling the generation of DERs for each hour of the next day. Considering the dynamic performance of the battery as an Energy Storage System (ESS), the model is a multi-objective, multi-parametric program constrained by dynamic programming, which is proposed to be solved using the Advanced Dynamic Programming (ADP) method. Then, factors influencing battery life are studied and included in the model in order to obtain an optimal usage pattern for the battery and reduce the associated cost. Moreover, since wind and solar generation are stochastic processes affected by weather changes, the proposed optimization model is performed hourly to track the weather changes. Simulation results are compared with the day-ahead energy management model. Finally, conclusions are presented and future research on microgrid energy management is discussed.
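The hour-by-hour scheduling with battery dynamics can be illustrated by a small dynamic program over discrete states of charge; the single-battery, purchase-only setting, integer energy units, and lossless charging are simplifying assumptions, not the ADP formulation of the study.

```python
def battery_dp(prices, load, cap=4, p_max=2):
    """Day-ahead cost minimization by dynamic programming.

    state = units of energy stored in the battery (0..cap);
    action a = units charged (+) or discharged (-) in an hour,
    bounded by p_max. Grid purchase each hour is load + a, and
    cannot be negative (no export in this sketch).
    Returns the minimum total purchase cost over the horizon."""
    INF = float('inf')
    # cost[s] = cheapest way to reach the current hour with SOC s
    cost = [0.0 if s == 0 else INF for s in range(cap + 1)]
    for t in range(len(prices)):
        nxt = [INF] * (cap + 1)
        for s in range(cap + 1):
            if cost[s] == INF:
                continue
            for a in range(-p_max, p_max + 1):
                s2 = s + a
                if not 0 <= s2 <= cap:
                    continue
                grid = load[t] + a  # charging adds to the purchase
                if grid < 0:
                    continue
                c = cost[s] + prices[t] * grid
                if c < nxt[s2]:
                    nxt[s2] = c
        cost = nxt
    return min(cost)
```

With a cheap hour followed by an expensive one, the DP charges early and discharges later: for prices [1, 5] and a flat load of 1 unit per hour, it buys 2 units in the cheap hour (cost 2) and nothing in the expensive hour, instead of paying 6 with no battery.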
Dall'Osso, F.; Dominey-Howes, D.; Moore, C.; Summerhayes, S.; Withycombe, G.
2014-01-01
Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment. Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney.
Vehicle Scheduling Schemes for Commercial and Emergency Logistics Integration
Li, Xiaohui; Tan, Qingmei
2013-01-01
In modern logistics operations, large-scale logistics companies, besides actively participating in profit-seeking commercial business, also play an essential role in emergency relief by dispatching urgently required materials to disaster-affected areas. A question widely raised by logistics practitioners, and one receiving growing attention from researchers, is how logistics companies can achieve maximum commercial profit while still performing emergency tasks effectively and satisfactorily. In this paper, two vehicle scheduling models are proposed to solve this problem. One is a prediction-based scheme, which predicts the amounts of disaster-relief materials and commercial business and then accepts the business that will generate maximum profit; the other is a priority-directed scheme, which first groups commercial and emergency business by priority grade and then schedules both types jointly and simultaneously to maximize total priority. Moreover, computer-based simulations are carried out to evaluate the performance of these two models by comparing them with two traditional disaster-relief tactics used in China. The results testify to the feasibility and effectiveness of the proposed models. PMID:24391724
Optimal Scheduling Method of Controllable Loads in DC Smart Apartment Building
NASA Astrophysics Data System (ADS)
Shimoji, Tsubasa; Tahara, Hayato; Matayoshi, Hidehito; Yona, Atsushi; Senjyu, Tomonobu
2015-12-01
From the perspective of suppressing global warming and the depletion of energy resources, renewable energy sources such as solar collectors (SC) and photovoltaic generation (PV) have been gaining attention worldwide. Houses and buildings with PV and heat pumps (HPs) are now widely used in residential areas, driven by time-of-use (TOU) electricity pricing, which is inexpensive overnight and expensive during the daytime. If fixed batteries and electric vehicles (EVs) are also introduced on the premises, the electricity cost can be reduced even further. However, if occupants operate these controllable loads arbitrarily, power demand in residential buildings may fluctuate. Thus, the operation of controllable loads such as HPs, batteries and EVs should be optimally scheduled in order to prevent the power flow from fluctuating rapidly. This paper proposes an optimal scheduling method for controllable loads whose purpose is not only to minimize the electricity cost for consumers but also to suppress fluctuations of the power flow on the supply side. Furthermore, a novel electricity pricing scheme is also suggested.
NASA Astrophysics Data System (ADS)
Issa, S. M.; Shehhi, B. Al
2012-07-01
Landfill sites receive 92% of the total annual solid waste produced by municipalities in the emirate of Abu Dhabi. In this study, candidate sites for an appropriate landfill location in the Abu Dhabi municipal area are determined by integrating geographic information systems (GIS) and multi-criteria evaluation (MCE) analysis. To identify appropriate landfill sites, eight input map layers are used in constraint mapping: proximity to urban areas, proximity to wells, water table depth, geology and topography, proximity to touristic and archeological sites, distance from the road network, distance from drainage networks, and land slope. A final map was generated identifying potential areas suitable for the landfill site. Results revealed that 30% of the study area is highly suitable, 25% suitable, and 45% unsuitable. The selection of the final landfill site, however, requires further field research.
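The constraint-mapping step can be sketched as a weighted linear combination of normalized criterion rasters with a boolean exclusion mask; the layer names, weights and class thresholds below are illustrative assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)
# Hypothetical criterion layers, already normalized to [0, 1] suitability.
layers = {
    "dist_urban": rng.random(shape),
    "dist_water": rng.random(shape),
    "dist_roads": rng.random(shape),
    "slope": rng.random(shape),
}
weights = {"dist_urban": 0.35, "dist_water": 0.30, "dist_roads": 0.20, "slope": 0.15}

# Constraint mask: cells excluded outright (e.g. archeological sites).
allowed = np.ones(shape, dtype=bool)
allowed[0, 0] = False

# Weighted linear combination, then zero out constrained cells.
suitability = sum(w * layers[name] for name, w in weights.items())
suitability = np.where(allowed, suitability, 0.0)

# 0 = unsuitable, 1 = suitable, 2 = highly suitable.
classes = np.digitize(suitability, [0.33, 0.66])
```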
A Comparison of Center/TRACON Automation System and Airline Time of Arrival Predictions
NASA Technical Reports Server (NTRS)
Heere, Karen R.; Zelenka, Richard E.
2000-01-01
Benefits from information sharing between an air traffic service provider and a major air carrier are evaluated. Aircraft arrival time schedules generated by the NASA/FAA Center/TRACON Automation System (CTAS) were provided to the American Airlines System Operations Control Center in Fort Worth, Texas, during a field trial of a specialized CTAS display. A statistical analysis indicates that the CTAS schedules, based on aircraft trajectories predicted from real-time radar and weather data, are substantially more accurate than the traditional airline arrival time estimates, constructed from flight plans and en route crew updates. The improvement offered by CTAS is especially advantageous during periods of heavy traffic and substantial terminal area delay, allowing the airline to avoid large predictive errors with serious impact on the efficiency and profitability of flight operations.
SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine
NASA Astrophysics Data System (ADS)
Henshaw, J. D.; Longmore, S. N.; Kruijssen, J. M. D.; Davies, B.; Bally, J.; Barnes, A.; Battersby, C.; Burton, M.; Cunningham, M. R.; Dale, J. E.; Ginsburg, A.; Immer, K.; Jones, P. A.; Kendrew, S.; Mills, E. A. C.; Molinari, S.; Moore, T. J. T.; Ott, J.; Pillai, T.; Rathborne, J.; Schilke, P.; Schmiedeke, A.; Testi, L.; Walker, D.; Walsh, A.; Zhang, Q.
2016-01-01
The Semi-automated multi-COmponent Universal Spectral-line fitting Engine (SCOUSE) is a spectral-line fitting algorithm that fits Gaussian profiles to spectral-line emission. It identifies the spatial area over which to fit the data and generates a grid of spectral averaging areas (SAAs). The spatially averaged spectra are fitted according to user-provided tolerance levels, and the best fit is selected using the Akaike Information Criterion, which weights the chi-squared value of a best-fitting solution according to the number of free parameters. A more detailed inspection of the spectra can be performed to improve the fit through an iterative process, after which SCOUSE integrates the new solutions into the solution file.
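The model-selection rule described here is the standard least-squares form of the AIC; the chi-squared values and component counts below are made up for illustration:

```python
import numpy as np

def aic(chisq, n_free):
    # Least-squares AIC (up to an additive constant): the chi-squared of
    # a fit is penalized by twice the number of free parameters.
    return chisq + 2 * n_free

# Hypothetical fits with 1, 2 and 3 Gaussian components
# (3 free parameters per component: amplitude, centroid, width).
fits = [(12.0, 3), (4.0, 6), (3.8, 9)]
scores = [aic(chisq, k) for chisq, k in fits]
best = int(np.argmin(scores))  # the extra 3rd component is not worth its penalty
```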
Nanowires from dirty multi-crystalline Si for hydrogen generation
NASA Astrophysics Data System (ADS)
Li, Xiaopeng; Schweizer, Stefan L.; Sprafke, Alexander; Wehrspohn, Ralf B.
2013-09-01
Silicon nanowires are considered a promising architecture for solar energy conversion systems. By metal-assisted chemical etching of multi-crystalline upgraded metallurgical silicon (UMG-Si), large areas of high-quality silicon nanowires (SiNWs) can be produced on the mother substrates. These areas show a low reflectance comparable to black silicon. More interestingly, elemental analysis shows that various metal impurities inside the UMG-Si are removed by the etching. A prototype cell was built to test the photoelectrochemical (PEC) properties of UMG-SiNWs for water splitting. The onset potential for hydrogen evolution was much reduced, and the photocurrent density showed a 35% increase in comparison with a `dirty' UMG-Si wafer.
Scheduling Future Water Supply Investments Under Uncertainty
NASA Astrophysics Data System (ADS)
Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.
2014-12-01
Uncertain hydrological impacts of climate change, population growth and institutional changes pose a major challenge to the planning of water supply systems. Planners seek not only optimal portfolios of supply and demand management schemes but also when to activate assets, whilst considering many system goals and plausible futures. Incorporating scheduling into the planning-under-uncertainty problem strongly increases its complexity. We investigate approaches to scheduling with many-objective heuristic search. We apply a multi-scenario many-objective scheduling approach to the Thames River basin water supply system planning problem in the UK. Decisions include which new supply and demand schemes to implement, at what capacity and when. The impact of different system uncertainties on scheme implementation schedules is explored, i.e. how the choice of future scenarios affects the search process and its outcomes. The activation of schemes is influenced by the occurrence of extreme hydrological events in the ensemble of plausible scenarios, among other factors. The approach and results are compared with a previous study in which only the portfolio problem was addressed (without scheduling).
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
NASA Astrophysics Data System (ADS)
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We show that our method performs at least as well as any static scheduling method. It also reduces the total number of dynamic pre-emptions compared with run-time methods such as deadline-monotonic scheduling.
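The compile-time guarantee for static priorities can be checked with classical fixed-priority response-time analysis; the sketch below is a uniprocessor simplification with made-up task sets, not the paper's multi-processor method:

```python
import math

def response_times(tasks):
    """Worst-case response times for fixed-priority pre-emptive tasks on
    one processor. tasks is a list of (C, T) pairs, highest priority
    first, where C is worst-case execution time and T is the period
    (deadline == period). Returns None if any task misses its deadline."""
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks released in [0, r).
            r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next > t_i:
                return None  # deadline miss: the task set is not schedulable
            if r_next == r:
                break  # fixed point reached
            r = r_next
        results.append(r)
    return results

wcrt = response_times([(1, 4), (2, 8), (3, 16)])  # schedulable example
```

The iteration converges because the interference term is monotone in `r`; returning `None` at the first deadline overrun is exactly the kind of compile-time schedulability verdict the paper requires.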
Zhang, Rui
2017-01-01
The traditional way of scheduling production processes often focuses on profit-driven goals (such as cycle time or material cost) while tending to overlook the negative impacts of manufacturing activities on the environment in the form of carbon emissions and other undesirable by-products. To bridge the gap, this paper investigates an environment-aware production scheduling problem that arises from a typical paint shop in the automobile manufacturing industry. In the studied problem, an objective function is defined to minimize the emission of chemical pollutants caused by the cleaning of painting devices, which must be performed each time before a color change occurs. Meanwhile, minimization of due date violations in the downstream assembly shop is also considered, because the two shops are interrelated and connected by a limited-capacity buffer. First, we have developed a mixed-integer programming formulation to describe this bi-objective optimization problem. Then, to solve problems of practical size, we have proposed a novel multi-objective particle swarm optimization (MOPSO) algorithm characterized by problem-specific improvement strategies. A branch-and-bound algorithm is designed for accurately assessing the most promising solutions. Finally, extensive computational experiments have shown that the proposed MOPSO is able to match the solution quality of an exact solver on small instances and outperform two state-of-the-art multi-objective optimizers in the literature on large instances with up to 200 cars. PMID:29295603
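At the core of any multi-objective optimizer such as MOPSO is a Pareto-dominance test over the two objectives (here, pollutant emission and due date violation, both minimized); the objective vectors below are hypothetical:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Incrementally maintain a non-dominated archive of (emission, tardiness).
front = []
for sol in [(5, 9), (6, 4), (4, 7), (7, 7)]:
    front = [f for f in front if not dominates(sol, f)]
    if not any(dominates(f, sol) for f in front):
        front.append(sol)
```

The surviving archive is the approximation of the Pareto front that such algorithms report and that the branch-and-bound step would then assess more accurately.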
Tiled architecture of a CNN-mostly IP system
NASA Astrophysics Data System (ADS)
Spaanenburg, Lambert; Malki, Suleyman
2009-05-01
Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the problems in scheduling multi-cores already existed in tiled architectures such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture, as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack together with a 'rotating wheel' internal communication mechanism has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity through the additional need for arithmetic and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such operations without the need for global control. Overall, the CNN system provides for a practical network size as implemented on an FPGA, can easily be used as embedded IP and provides a clear benchmark for a multi-core compiler.
A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times
NASA Astrophysics Data System (ADS)
Li, Xin; Fung, Richard Y. K.
2018-02-01
This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers must lie within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in the previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This appendix summarizes building characteristics used to determine heating and cooling loads for each of the five building types in each of the four regions. For the selected five buildings, the following data are attached: new and existing construction characteristics; new and existing construction thermal resistance; floor plan and elevation; people load schedule; lighting load schedule; appliance load schedule; ventilation schedule; and hot water use schedule. For the five building types (single family, apartment buildings, commercial buildings, office buildings, and schools), data are compiled in 10 appendices. These are Building Characteristics; Alternate Energy Sources and Energy Conservation Techniques Description, Costs, Fuel Price Scenarios; Life Cycle Cost Model; Simulation Models; Solar Heating/Cooling System; Condensed Weather; Single and Multi-Family Dwelling Characteristics and Energy Conservation Techniques; Mixed Strategies for Energy Conservation and Alternative Energy Utilization in Buildings. An extensive bibliography is given in the final appendix. (MCW)
NASA Astrophysics Data System (ADS)
Pichierri, Manuele; Hajnsek, Irena
2015-04-01
In this work, the potential of multi-baseline Pol-InSAR for crop parameter estimation (e.g. crop height and extinction coefficients) is explored. For this reason, a novel Oriented Volume over Ground (OVoG) inversion scheme is developed, which makes use of multi-baseline observables to estimate the whole stack of model parameters. The proposed algorithm has been initially validated on a set of randomly-generated OVoG scenarios, to assess its stability over crop structure changes and its robustness against volume decorrelation and other decorrelation sources. Then, it has been applied to a collection of multi-baseline repeat-pass SAR data, acquired over a rural area in Germany by DLR's F-SAR.
Innovative Contamination Certification of Multi-Mission Flight Hardware
NASA Technical Reports Server (NTRS)
Hansen, Patricia A.; Hughes, David W.; Montt, Kristina M.; Triolo, Jack J.
1998-01-01
Maintaining contamination certification of multi-mission flight hardware is an innovative approach to controlling mission costs. Methods for assessing ground-induced degradation between missions have been employed by the Hubble Space Telescope (HST) Project for the multi-mission (servicing) hardware. By maintaining the cleanliness of the hardware between missions, and by controlling the materials added to the hardware during modification and refurbishment, both the project funding required for contamination recertification and the schedule have been significantly reduced. These methods will be discussed and HST hardware data will be presented.
Joint operations planning for space surveillance missions on the MSX satellite
NASA Technical Reports Server (NTRS)
Stokes, Grant; Good, Andrew
1994-01-01
The Midcourse Space Experiment (MSX) satellite, sponsored by BMDO, is intended to gather broad-band phenomenology data on missiles, plumes, naturally occurring earthlimb backgrounds and deep space backgrounds. In addition, the MSX will be used to conduct functional demonstrations of space-based space surveillance. The JHU/Applied Physics Laboratory (APL), located in Laurel, MD, is the integrator and operator of the MSX satellite. APL will conduct all operations related to the MSX and is charged with the detailed operations planning required to implement all of the experiments run on the MSX except the space surveillance experiments. The non-surveillance operations are generally amenable to being defined months ahead of time and being scheduled on a monthly basis. Lincoln Laboratory, Massachusetts Institute of Technology (LL), located in Lexington, MA, is the provider of one of the principal MSX instruments, the Space-Based Visible (SBV) sensor, and the agency charged with implementing the space surveillance demonstrations on the MSX. The planning timelines for the space surveillance demonstrations are fundamentally different from those for the other experiments. They are generally amenable to being scheduled on a monthly basis, but the specific experiment sequence and pointing must be refined shortly before execution. This allocation of responsibilities to different organizations implies the need for a joint mission planning system for conducting space surveillance demonstrations. This paper details the iterative, joint planning system, based on passing responsibility for generating MSX commands for surveillance operations from APL to LL for specific scheduled operations. The joint planning system, including the generation of a budget for spacecraft resources to be used for surveillance events, has been successfully demonstrated during ground testing of the MSX and is being validated for MSX launch within the year.
The planning system developed for the MSX forms a model possibly applicable to developing distributed mission planning systems for other multi-use satellites.
NASA Astrophysics Data System (ADS)
Jiang, Huaiguang
With the evolution of energy and power systems, the emerging Smart Grid (SG) is mainly characterized by distributed renewable energy generation, demand-response control and a huge amount of heterogeneous data sources. Widely distributed synchrophasor sensors, such as phasor measurement units (PMUs) and fault disturbance recorders (FDRs), can record multi-modal signals for power system situational awareness and renewable energy integration. An effective and economical approach is proposed for wide-area security assessment. This approach is based on wavelet analysis for detecting and locating short-term and long-term faults in the SG, using voltage signals collected by distributed synchrophasor sensors. A data-driven approach for fault detection, identification and location is then proposed and studied. This approach is based on matching pursuit decomposition (MPD) with a Gaussian atom dictionary, a hidden Markov model (HMM) of real-time frequency and voltage variation features, and fault contour maps generated by machine learning algorithms in SG systems. In addition, considering economic issues, the placement of the distributed synchrophasor sensors is optimized to reduce the number of sensors without affecting the accuracy and effectiveness of the proposed approach. Furthermore, because natural hazards are a critical issue for power system security, the approach is studied under different types of faults caused by natural hazards. A fast steady-state approach is proposed for the voltage security of power systems with a wind power plant connected. The impedance matrix can be calculated from the voltage and current information collected by the PMUs. Based on the impedance matrix, the locations in the SG that have the greatest impact on the voltage at the wind power plant's point of interconnection can be identified.
Furthermore, because this dynamic voltage security assessment method relies on time-domain simulations of faults at different locations, the proposed approach is feasible, convenient and effective. Wind energy is highly location-dependent, and many desirable wind resources are located in rural areas without direct access to the transmission grid. By connecting MW-scale wind turbines or wind farms to the distribution system of the SG, the cost of building long transmission facilities can be avoided and the wind power supplied to consumers can be greatly increased. After the effective wide-area monitoring (WAM) approach is built, an event-driven control strategy is proposed for renewable energy integration. This approach is based on a support vector machine (SVM) predictor and multiple-input multiple-output (MIMO) model predictive control (MPC) on linear time-invariant (LTI) and linear time-variant (LTV) systems. The voltage condition of the distribution system is predicted by the SVM classifier using synchrophasor measurement data, and the controllers equipped with the wind turbine generators are triggered by the prediction results. Controllers at both the transmission level and the distribution level are designed based on this approach. Considering economic issues in the power system, a statistical scheduling approach to economic dispatch and energy reserves is proposed. The proposed approach focuses on minimizing the overall power operating cost while accounting for renewable energy uncertainty and power system security. The hybrid power system scheduling is formulated as a convex programming problem that minimizes the power operating cost, taking into consideration renewable energy generation, power generation-consumption balance and power system security. A genetic algorithm based approach is used to solve the minimization of the power operating cost.
In addition, with continuing technology development, it can be expected that renewable energy sources such as wind turbine generators and PV panels will be pervasively located in distribution systems. The distribution system is an unbalanced system containing single-phase, two-phase and three-phase loads and distribution lines. This complex configuration poses a challenge for power flow calculation. A topology analysis based iterative approach is used to solve this problem: a self-adaptive topology recognition method analyzes the distribution system, and the backward/forward sweep algorithm generates the power flow results. Finally, in the numerical simulations, the IEEE 14-bus, 30-bus, 39-bus and 118-bus systems are studied for fault detection, identification and location. Both transmission-level and distribution-level models are employed with the proposed control strategy for the voltage stability of renewable energy integration. The simulation results demonstrate the effectiveness of the proposed methods. The IEEE 24-bus reliability test system (IEEE-RTS), which is commonly used for evaluating the price stability and reliability of power systems, is used as the test bench for verifying and evaluating the system performance of the proposed scheduling approach.
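The backward/forward sweep mentioned above can be sketched for a single-phase radial feeder; the dissertation's implementation handles unbalanced multi-phase systems, and the impedances and loads below are made-up per-unit values:

```python
def bfs_power_flow(z, s_load, v0=1.0, tol=1e-9, max_iter=50):
    """Backward/forward sweep on a radial feeder laid out as a single
    line of buses. z[k] is the series impedance of the branch feeding
    bus k+1; s_load[k] is the complex power drawn at bus k+1. Bus 0 is
    the substation, held at v0. Returns the list of bus voltages."""
    n = len(s_load)
    v = [complex(v0)] * (n + 1)
    for _ in range(max_iter):
        # Backward sweep: accumulate branch currents from the feeder end.
        i_branch = [0j] * n
        for k in range(n - 1, -1, -1):
            i_load = (s_load[k] / v[k + 1]).conjugate()
            i_branch[k] = i_load + (i_branch[k + 1] if k + 1 < n else 0j)
        # Forward sweep: update voltages from the substation outward.
        v_new = [complex(v0)] + [0j] * n
        for k in range(n):
            v_new[k + 1] = v_new[k] - z[k] * i_branch[k]
        if max(abs(a - b) for a, b in zip(v_new, v)) < tol:
            return v_new
        v = v_new
    return v

v = bfs_power_flow(z=[0.01 + 0.02j] * 2, s_load=[0.10 + 0.05j, 0.15 + 0.08j])
```

Voltage magnitude drops monotonically along the loaded feeder, which is the behavior the sweep converges to in a few iterations on radial networks.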
Public data and open source tools for multi-assay genomic investigation of disease.
Kannan, Lavanya; Ramos, Marcel; Re, Angela; El-Hachem, Nehme; Safikhani, Zhaleh; Gendoo, Deena M A; Davis, Sean; Gomez-Cabrero, David; Castelo, Robert; Hansen, Kasper D; Carey, Vincent J; Morgan, Martin; Culhane, Aedín C; Haibe-Kains, Benjamin; Waldron, Levi
2016-07-01
Molecular interrogation of a biological sample through DNA sequencing, RNA and microRNA profiling, proteomics and other assays, has the potential to provide a systems level approach to predicting treatment response and disease progression, and to developing precision therapies. Large publicly funded projects have generated extensive and freely available multi-assay data resources; however, bioinformatic and statistical methods for the analysis of such experiments are still nascent. We review multi-assay genomic data resources in the areas of clinical oncology, pharmacogenomics and other perturbation experiments, population genomics and regulatory genomics and other areas, and tools for data acquisition. Finally, we review bioinformatic tools that are explicitly geared toward integrative genomic data visualization and analysis. This review provides starting points for accessing publicly available data and tools to support development of needed integrative methods. © The Author 2015. Published by Oxford University Press.
Gauger, Paul G; Davis, Janice W; Orr, Peter J
2002-09-01
Administration of graduate medical education programs has become more difficult as compliance with ACGME work guidelines has assumed increased importance. These guidelines have caused many changes in the resident work environment, including the emergence of complicated cross-cover arrangements. Many participating residents (each with his or her own individual scheduling requirements) usually generate these schedules. Accordingly, schedules are often not submitted in a timely fashion and they may not be in compliance with the ACGME guidelines for maximum on-call assignments and mandatory days off. Our objective was the establishment of a Web-based system that guides residents in creating on-call schedules that follow ACGME guidelines while still allowing maximum flexibility -- thus allowing each resident to maintain an internal locus of control. A versatile and scalable system with password-protected user (resident) and administrator interfaces was created. An entire academic year is included, and past months and years are automatically archived. The residents log on within the first 15 days of the preceding month and choose their positions in a schedule template. They then make adjustments while receiving immediate summary feedback on compliance with ACGME guidelines. The schedule is electronically submitted to the educational administrator for final approval. If a cross-cover system is required, the program automatically generates an optimal schedule using both of the approved participating service schedules. The residents then have an additional five-day period to make adjustments in the cross-cover schedule while still receiving compliance feedback. The administrator again provides final approval electronically. The communication interface automatically pages or e-mails the residents when schedules are updated or approved. 
Since the information exists in a relational database, simple reporting tools are included to extract the information necessary to generate records for institutional GME management. Implementation of this program has been met with great enthusiasm from the institutional stakeholders. Specifically, residents have embraced the ability to directly control their schedules and have gained appreciation for the regulatory matrix in which they function. Institutional administrators have praised the improvement in compliance and the ease of documentation. We anticipate that the system will also meet with approval from reviewing regulatory bodies, as it generates and stores accurate information about the resident work environment. This program is robust and versatile enough to be modified for any GME training program in the country.
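The compliance feedback described above amounts to checking each draft schedule against a handful of rules; below is a minimal sketch with two simplified ACGME-style rules, whose thresholds and wording are illustrative rather than the official guidelines:

```python
from datetime import date, timedelta

def check_call_schedule(call_dates, month_start, month_end):
    """Return a list of rule violations for one resident's on-call
    dates: call averaging more often than every third night, or any
    7-day window with no duty-free day (simplified, illustrative rules)."""
    violations = []
    n_days = (month_end - month_start).days + 1
    if len(call_dates) > n_days / 3:
        violations.append("call more frequent than every third night")
    on_call = set(call_dates)
    d = month_start
    while d + timedelta(days=6) <= month_end:
        window = {d + timedelta(days=i) for i in range(7)}
        if window <= on_call:  # every day of the window is on call
            violations.append(f"no day off in week starting {d}")
        d += timedelta(days=1)
    return violations

# A draft with call on 15 of 30 consecutive nights trips both rules.
heavy = [date(2024, 6, day) for day in range(1, 16)]
violations = check_call_schedule(heavy, date(2024, 6, 1), date(2024, 6, 30))
```

Running such checks on every save is what lets residents adjust their own drafts while the administrator only approves already-compliant schedules.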
Installation of Ohio's First Electrolysis-Based Hydrogen Fueling Station
NASA Technical Reports Server (NTRS)
Scheidegger, Brianne T.; Lively, Michael L.
2012-01-01
This paper describes progress made towards the installation of a hydrogen fueling station in Northeast Ohio. In collaboration with several entities in the Northeast Ohio area, the NASA Glenn Research Center is installing a hydrogen fueling station that uses electrolysis to generate hydrogen on-site. The installation of this station is scheduled for the spring of 2012 at the Greater Cleveland Regional Transit Authority's Hayden bus garage in East Cleveland. This will be the first electrolysis-based hydrogen fueling station in Ohio.
ERIC Educational Resources Information Center
Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth
2015-01-01
Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…
McGerald, Genevieve; Dvorkin, Ronald; Levy, David; Lovell-Rose, Stephanie; Sharma, Adhi
2009-06-01
Prescriptions for controlled substances decrease when regulatory barriers are put in place. The converse has not been studied. The objective was to determine whether a less complicated prescription writing process is associated with a change in the prescribing patterns of controlled substances in the emergency department (ED). The authors conducted a retrospective nonconcurrent cohort study of all patients seen in an adult ED between April 19, 2005, and April 18, 2007, who were discharged with a prescription. Prior to April 19, 2006, a specialized prescription form stored in a locked cabinet was obtained from the nursing staff to write a prescription for benzodiazepines or Schedule II opioids. After April 19, 2006, New York State mandated that all prescriptions, regardless of schedule classification, be generated on a specialized bar-coded prescription form. The main outcome of the study was to compare the proportion of Schedule III-V opioids to Schedule II opioids and benzodiazepines prescribed in the ED before and after the introduction of a less cumbersome prescription writing process. Of the 26,638 charts reviewed, 2.1% of the total number of prescriptions generated were for a Schedule II controlled opioid before the new system was implemented compared to 13.6% after (odds ratio [OR] = 7.3, 95% confidence interval [CI] = 6.4 to 8.4). The corresponding percentages for Schedule III-V opioids were 29.9% to 18.1% (OR = 0.52, 95% CI = 0.49 to 0.55) and for benzodiazepines 1.4% to 3.9% (OR = 2.8, 95% CI = 2.4 to 3.4). Patients were more likely to receive a prescription for a Schedule II opioid or a benzodiazepine after a more streamlined computer-generated prescription writing process was introduced in this ED. (c) 2009 by the Society for Academic Emergency Medicine.
A low delay transmission method of multi-channel video based on FPGA
NASA Astrophysics Data System (ADS)
Fu, Weijian; Wei, Baozhi; Li, Xiaobin; Wang, Quan; Hu, Xiaofei
2018-03-01
In order to guarantee the fluency of multi-channel video transmission in video monitoring scenarios, we designed a video format conversion method based on FPGA, together with DMA scheduling for the video data, which reduces the overall video transmission delay. To save time in the conversion process, the parallel capability of the FPGA is used for video format conversion. To improve the direct memory access (DMA) write transmission rate of the PCIe bus, a DMA scheduling method based on an asynchronous command buffer is proposed. The experimental results show that the proposed FPGA-based low delay transmission method increases the DMA write transmission rate by 34% compared with the existing method, reducing the overall video delay to 23.6 ms.
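The benefit of overlapping format conversion with DMA writes can be sketched as a two-stage pipeline. A minimal model, with purely hypothetical per-frame timings (not the paper's measurements):

```python
# Sketch: why an asynchronous command buffer helps. Each frame passes through
# two stages (format conversion, then DMA write); we compare a blocking design
# with a pipelined one. All timings are hypothetical.

def blocking_makespan(conv, xfer):
    """Each frame converts, then transfers, before the next frame starts."""
    return sum(c + x for c, x in zip(conv, xfer))

def pipelined_makespan(conv, xfer):
    """Conversion of frame i overlaps the DMA transfer of frame i-1
    (a simple 2-stage flow-shop model with a buffered command channel)."""
    conv_done = 0.0   # time the converter becomes free
    xfer_done = 0.0   # time the DMA engine becomes free
    for c, x in zip(conv, xfer):
        conv_done += c                             # converter is sequential
        xfer_done = max(xfer_done, conv_done) + x  # DMA waits for the frame
    return xfer_done

conv_ms = [4.0, 4.0, 4.0, 4.0]   # per-frame conversion time (hypothetical)
xfer_ms = [6.0, 6.0, 6.0, 6.0]   # per-frame DMA write time (hypothetical)

seq = blocking_makespan(conv_ms, xfer_ms)    # 40.0 ms end to end
pipe = pipelined_makespan(conv_ms, xfer_ms)  # 28.0 ms: conversion is hidden
```

With four frames, the pipelined design hides most of the conversion latency behind the transfers, which is the effect the asynchronous command buffer targets.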
NASA Astrophysics Data System (ADS)
Wang, Chun; Ji, Zhicheng; Wang, Yan
2017-07-01
In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) was studied with the objectives of minimizing makespan, total workload and critical workload. A variable neighborhood evolutionary algorithm (VNEA) was proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators, defined in the decision space and the objective space respectively, were proposed and used in mating selection and environmental selection. Then, two well-designed neighborhood structures were used in local search; they exploit the problem characteristics and promote fast convergence. Finally, an extensive comparison was carried out with state-of-the-art methods specifically proposed for MOFJSP on well-known benchmark instances. The results show that the proposed VNEA is more effective than the other algorithms in solving MOFJSP.
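The two selection ingredients such a Pareto-based algorithm needs, dominance filtering and a crowding measure in objective space, can be sketched as follows; the objective vectors are illustrative, not results from the paper:

```python
# Minimal sketch of Pareto-front extraction and NSGA-II-style crowding
# distance (minimization). The (makespan, total workload) pairs are invented.

def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def crowding_distance(front):
    """Boundary points get infinity; interior points sum normalized gaps."""
    n, m = len(front), len(front[0])
    dist = {i: 0.0 for i in range(n)}
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

# (makespan, total workload) for five candidate schedules -- invented numbers
pts = [(10, 50), (12, 45), (11, 48), (15, 40), (13, 60)]
front = pareto_front(pts)          # (13, 60) is dominated by (10, 50)
dist = crowding_distance(front)
```

Mating selection would prefer less-crowded front members, which preserves spread along the Pareto front.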
PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN THE VAB SHOWS OPEN PARACHUTE
NASA Technical Reports Server (NTRS)
1975-01-01
A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.
PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN VAB WITH PARACHUTE HOISTED HIGH
NASA Technical Reports Server (NTRS)
1975-01-01
A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.
PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN VAB PRIOR TO ATTACHING PRESSURE VESSEL
NASA Technical Reports Server (NTRS)
1975-01-01
A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.
PIONEER VENUS 2 MULTI-PROBE PARACHUTE TESTS IN THE VEHICLE ASSEMBLY BUILDING
NASA Technical Reports Server (NTRS)
1975-01-01
A parachute system, designed to carry an instrument-laden probe down through the dense atmosphere of torrid, cloud-shrouded Venus, was tested in KSC's Vehicle Assembly Building. The tests are in preparation for a Pioneer multi-probe mission to Venus scheduled for launch from KSC in 1978. Full-scale (12-foot diameter) parachutes with simulated pressure vessels weighing up to 45 pounds were dropped from heights of up to 450 feet to the floor of the VAB where the impact was cushioned by a honeycomb cardboard impact arrestor. The VAB offers an ideal, wind-free testing facility at no additional construction cost and was used for similar tests of the parachute system for the twin Viking spacecraft scheduled for launch toward Mars in August.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Tumeo, Antonino; Ferrandi, Fabrizio
Emerging applications such as data mining, bioinformatics, knowledge discovery, and social network analysis are irregular. They use data structures based on pointers or linked lists, such as graphs, unbalanced trees or unstructured grids, which generate unpredictable memory accesses. These data structures are usually large but difficult to partition. These applications are mostly memory-bandwidth bound and have high synchronization intensity. However, they also have large amounts of inherent dynamic parallelism, because they potentially perform a task for each one of the elements they are exploring. Several efforts are looking at accelerating these applications on hybrid architectures, which integrate general purpose processors with reconfigurable devices. Some solutions, which demonstrated significant speedups, include custom hand-tuned accelerators or even full processor architectures on the reconfigurable logic. In this paper we present an approach for the automatic synthesis of accelerators from C, targeted at irregular applications. In contrast to typical High Level Synthesis paradigms, which construct a centralized Finite State Machine, our approach generates dynamically scheduled hardware components. While parallelism exploitation in typical HLS-generated accelerators is usually bound within a single execution flow, our solution allows concurrently running multiple execution flows, thus also exploiting the coarser grain task parallelism of irregular applications. Our approach supports multiple, multi-ported and distributed memories, and atomic memory operations. Its main objective is parallelizing as many memory operations as possible, independently of their execution time, to maximize memory bandwidth utilization. This significantly differs from current HLS flows, which usually consider a single memory port and require precise scheduling of memory operations.
A key innovation of our approach is the generation of a memory interface controller, which dynamically maps concurrent memory accesses to multiple ports. We present a case study on a typical irregular kernel, graph Breadth First Search (BFS), exploring different tradeoffs in terms of parallelism and number of memories.
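The dynamic port-mapping idea can be illustrated with a toy greedy scheduler; the conflict model (one request per port and at most one per memory bank each cycle) and all names are invented for illustration, not taken from the paper's controller:

```python
# Hedged sketch: dynamically pack concurrent memory requests onto a limited
# number of ports, issuing as many per cycle as ports and bank conflicts
# allow, instead of a fixed compile-time schedule.

def schedule_accesses(requests, n_ports):
    """requests: list of (op_id, bank). Returns a list of per-cycle issue
    groups under a simple conflict model: one request per port, and at most
    one request per memory bank each cycle."""
    cycles = []
    pending = list(requests)
    while pending:
        issued, banks, rest = [], set(), []
        for op, bank in pending:
            if len(issued) < n_ports and bank not in banks:
                issued.append(op)      # this request goes out this cycle
                banks.add(bank)
            else:
                rest.append((op, bank))  # deferred to a later cycle
        cycles.append(issued)
        pending = rest
    return cycles

# 6 independent loads spread over 3 banks, 2 ports: 3 cycles instead of 6
plan = schedule_accesses([(i, i % 3) for i in range(6)], n_ports=2)
```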
Developing an algorithm for critical care physician scheduling
NASA Astrophysics Data System (ADS)
Lee, Hyojun; Pah, Adam; Amaral, Luis; Northwestern Memorial Hospital Collaboration
Understanding social networks has enabled us to quantitatively study social phenomena such as behaviors in the adoption and propagation of information. However, most work has focused on networks of large heterogeneous communities, and little attention has been paid to how work-relevant information spreads within networks of small, homogeneous groups of highly trained individuals, such as physicians. Among such professionals, behavior patterns and the transmission of job-relevant information depend not only on the social network between the employees but also on the schedules and the teams that work together. In order to systematically investigate the dependence of the spread of ideas and the adoption of innovations on a work-environment network, we sought to construct a model of the interaction network of critical care physicians at Northwestern Memorial Hospital (NMH) based on their work schedules. We inferred patterns and hidden rules from past work schedules, such as turnover rates. Using the characteristics of the physicians' work schedules and their turnover rates, we were able to create multi-year synthetic work schedules for a generic intensive care unit. The algorithm for creating shift schedules can be applied to other schedule-dependent networks.
A planning language for activity scheduling
NASA Technical Reports Server (NTRS)
Zoch, David R.; Lavallee, David; Weinstein, Stuart; Tong, G. Michael
1991-01-01
Mission planning and scheduling of spacecraft operations are becoming more complex at NASA. Described here are a mission planning process; a robust, flexible planning language for spacecraft and payload operations; and a software scheduling system that generates schedules based on planning language inputs. The mission planning process often involves many people and organizations. Consequently, a planning language is needed to facilitate communication, to provide a standard interface, and to represent flexible requirements. The software scheduling system interprets the planning language and uses the resource, time duration, constraint, and alternative plan flexibilities to resolve scheduling conflicts.
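How flexible requirements let a scheduler resolve conflicts can be sketched with a toy model (not the actual planning language): each activity lists acceptable start slots in preference order, rigid activities are placed first, and the flexibility of the rest absorbs conflicts:

```python
# Toy sketch of alternative-plan flexibility. Activity names, slots, and the
# "place the least flexible activity first" heuristic are all invented.

ACTIVITIES = {            # activity -> acceptable start slots, by preference
    "downlink": [2, 3],
    "maneuver": [2],      # rigid: only one acceptable slot
    "calibrate": [2, 3, 4],
}

def schedule(activities):
    """Place rigid activities first, then let flexible ones yield."""
    taken, plan = set(), {}
    for name, slots in sorted(activities.items(), key=lambda kv: len(kv[1])):
        slot = next((s for s in slots if s not in taken), None)
        if slot is None:
            return None                 # conflict could not be resolved
        taken.add(slot)
        plan[name] = slot
    return plan

plan = schedule(ACTIVITIES)
# all three want slot 2; flexibility resolves the conflict without failure
```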
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality, schedule and cost-driven strategies demonstrate that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game theory reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and Austin's agency model to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack of quality game between the government (the acquirer) and "large-corporation" software developers. 
A note argues that this multi-player dynamic Nash bargaining game also provides a solution to Freeman Dyson's problem of finding a way to label systems as good or bad.
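Latin Hypercube sampling itself is straightforward to sketch; the parameter bounds below are invented stand-ins for the strategy distributions, not the study's data:

```python
# Hedged sketch of Latin Hypercube sampling: each of d parameters is split
# into n equal strata, and every stratum is hit exactly once per parameter.
import random

def latin_hypercube(n, bounds, rng=random.Random(0)):
    """Draw n samples over [(lo, hi), ...] parameter bounds."""
    d = len(bounds)
    cols = []
    for lo, hi in bounds:
        # one uniform draw inside each of the n equal-width strata, shuffled
        col = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(cols[j][i] for j in range(d)) for i in range(n)]

# e.g. (inspection effort, unit-test effort) fractions -- hypothetical bounds
samples = latin_hypercube(8, [(0.0, 0.2), (0.0, 0.3)])
```

Unlike plain Monte Carlo, every run is guaranteed coverage of each parameter's full range, which is why it suits strategy sweeps of a simulation model.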
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, R.G.
Much controversy surrounds government regulation of the routing and scheduling of Hazardous Materials Transportation (HMT). Increases in operating costs must be balanced against expected benefits from local HMT bans and curfews when promulgating or preempting HMT regulations. Algorithmic approaches for evaluating HMT routing and scheduling regulatory policy are described. A review of current US HMT regulatory policy is presented to provide a context for the analysis. Next, a multiobjective shortest path algorithm to find the set of efficient routes under conflicting objectives is presented. This algorithm generates all efficient routes under any partial ordering in a single pass through the network. Also, scheduling algorithms are presented to estimate the travel time delay due to HMT curfews along a route. Algorithms are presented assuming either deterministic or stochastic travel times between curfew cities, and also possible rerouting to avoid such cities. These algorithms are applied to the case study of US highway transport of spent nuclear fuel from reactors to permanent repositories. Two data sets were used. One included the US Interstate Highway System (IHS) network with reactor locations, possible repository sites, and 150 heavily populated areas (HPAs). The other contained estimates of the population residing within 0.5 miles of the IHS in the Eastern US. Curfew delay is dramatically reduced by optimally scheduling departure times unless inter-HPA travel times are highly uncertain. Rerouting shipments to avoid HPAs is a less efficient approach to reducing delay.
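The core of a multiobjective shortest-path pass is keeping every non-dominated label per node rather than a single best distance. A minimal label-correcting sketch on an invented toy network (objectives: travel time and population exposure):

```python
# Hedged sketch: Pareto label-correcting search. Every non-dominated
# (time, population) label per node survives, so all efficient routes do too.
import heapq

def dominates(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def efficient_labels(graph, src, dst):
    """graph: {u: [(v, time, population), ...]}. Returns Pareto labels at dst."""
    labels = {src: [(0, 0)]}
    heap = [(0, 0, src)]
    while heap:
        t, p, u = heapq.heappop(heap)
        if (t, p) not in labels.get(u, []):
            continue                       # label was pruned after queueing
        for v, dt, dp in graph.get(u, []):
            cand = (t + dt, p + dp)
            cur = labels.setdefault(v, [])
            if any(dominates(l, cand) or l == cand for l in cur):
                continue                   # candidate is dominated: drop it
            cur[:] = [l for l in cur if not dominates(cand, l)]
            cur.append(cand)
            heapq.heappush(heap, (*cand, v))
    return sorted(labels.get(dst, []))

# toy network: route via "a" is fast but populated, via "b" the opposite
net = {"s": [("a", 2, 9), ("b", 4, 1)], "a": [("t", 2, 9)], "b": [("t", 4, 1)]}
routes = efficient_labels(net, "s", "t")   # both routes are efficient
```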
Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization
NASA Astrophysics Data System (ADS)
Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar
2017-04-01
Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre with changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. This work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
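A bare-bones PSO for a single-area dispatch, with the power balance handled by a penalty term, conveys the approach; the cost coefficients and demand below are illustrative, not the 140-unit system's data:

```python
# Minimal PSO sketch for economic dispatch: minimize quadratic fuel cost
# subject to power balance (penalty method) and generator limits (clamping).
import random

COST = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0), (0.007, 6.8, 140.0)]  # a,b,c
PMIN, PMAX, DEMAND = 50.0, 250.0, 450.0   # MW, all invented

def fitness(p):
    fuel = sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))
    return fuel + 1e4 * abs(sum(p) - DEMAND)   # penalized power balance

def pso(n=30, iters=300, rng=random.Random(1)):
    dim = len(COST)
    pos = [[rng.uniform(PMIN, PMAX) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in pos]
    gbest = min(pbest, key=fitness)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters              # linearly decreasing inertia
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(PMAX, max(PMIN, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = list(pos[i])
        gbest = min(pbest + [gbest], key=fitness)
    return gbest

best = pso()   # generation schedule meeting demand at near-minimal cost
```

The paper's parameter-automation variants replace the fixed inertia/acceleration settings above; the MAED extension adds tie-line and area-balance terms to the fitness.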
A criterion autoscheduler for long range planning
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.
1994-01-01
A constraint-based scheduling system called SPIKE is used to create long-term schedules for the Hubble Space Telescope. A meta-level scheduler called the Criterion Autoscheduler for Long range planning (CASL) was created to guide SPIKE's schedule generation according to the agenda of the planning scientists. It is proposed that sufficient flexibility exists in a schedule to allow high-level planning heuristics to be applied without adversely affecting crucial constraints such as spacecraft efficiency. This hypothesis is supported by the test data described here.
NASA Astrophysics Data System (ADS)
Anh, N. K.; Phonekeo, V.; My, V. C.; Duong, N. D.; Dat, P. T.
2014-02-01
In recent years, the Vietnamese economy has grown rapidly, causing a serious decline in environmental quality, especially in industrial and mining areas. This poses an enormous threat to socially sustainable development and to human health. Environmental quality assessment and protection are complex and dynamic processes, since they involve spatial information from multi-sector, multi-region and multi-field sources and need complicated data processing. Therefore, an effective environmental protection information system is needed, in which considerable factors hidden in complex relationships become clear and visible. In this paper, the authors present the methodology used to generate environmental hazard maps, based on the integration of the Analytic Hierarchy Process (AHP) and a Geographical Information System (GIS). We demonstrate the results obtained from the study area in Dong Trieu district. This research study contributes an overall perspective of environmental quality and identifies the devastated areas where the administration urgently needs to establish an appropriate policy to improve and protect the environment.
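The AHP step can be sketched with the row-geometric-mean approximation to the principal eigenvector; the pairwise comparison matrix and the criteria it compares are invented, not the study's:

```python
# Hedged sketch of AHP weight derivation: a reciprocal pairwise comparison
# matrix is reduced to criterion weights via row geometric means (a standard
# approximation to the principal eigenvector).

def prod(xs):
    p = 1.0
    for x in xs:
        p *= x
    return p

def ahp_weights(matrix):
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]   # normalized priority vector

# hypothetical criteria (air, water, soil): air is 3x as important as water,
# 5x as important as soil, etc.; below-diagonal entries are reciprocals
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)   # roughly [0.64, 0.26, 0.10]
```

These weights would then multiply the rasterized criterion layers in the GIS overlay to produce the hazard map.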
Development of Watch Schedule Using Rules Approach
NASA Astrophysics Data System (ADS)
Jurkevicius, Darius; Vasilecas, Olegas
The software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his/her own account and see his/her assignments, manage requests, etc. Employees set as administrators can perform the employee scheduling online, manage requests, etc. This scheduling software allows users not only to see the initial and optimized watch schedule in a simple and understandable form, but also to create special rules and criteria that capture their business needs. Using these rules, the system automatically generates the watch schedule.
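Rule-driven generation of this kind can be sketched as predicates checked by a greedy assigner; the rules and names below are invented for illustration, not the portal's actual rule set:

```python
# Toy sketch: watch-schedule rules as plain predicates over
# (employee, day, partial schedule); a greedy pass only assigns an
# employee where every rule passes.

def no_back_to_back(emp, day, sched):
    return sched.get(day - 1) != emp          # rest day after each watch

def max_two_per_week(emp, day, sched):
    return sum(1 for e in sched.values() if e == emp) < 2

RULES = [no_back_to_back, max_two_per_week]

def build_schedule(employees, days, rules=RULES):
    sched = {}
    for day in range(days):
        for emp in sorted(employees,          # fewest assignments first
                          key=lambda e: sum(1 for x in sched.values() if x == e)):
            if all(rule(emp, day, sched) for rule in rules):
                sched[day] = emp
                break
        else:
            sched[day] = None                 # no employee satisfies the rules
    return sched

week = build_schedule(["Ana", "Bo", "Cy"], 7)
# day 7 stays unfilled: everyone already has two watches and the rule forbids more
```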
2011-11-17
CAPE CANAVERAL, Fla. -- In the Vertical Integration Facility at Space Launch Complex-41 on Cape Canaveral Air Force Station, the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission is uncovered during preparations to install it on MSL's Curiosity rover. The mesh container, known as the "gorilla cage," is suspended above the generator as it is lifted off the MMRTG's support base. The cage protects the MMRTG during transport and allows any excess heat generated to dissipate into the air. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
2011-11-17
CAPE CANAVERAL, Fla. -- In the Vertical Integration Facility at Space Launch Complex-41 on Cape Canaveral Air Force Station, spacecraft technicians guide the mesh container protecting the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission as a crane lifts it from around the generator. The container, known as the "gorilla cage," protects the MMRTG during transport and allows any excess heat generated to dissipate into the air. Next, the MMRTG will be installed on MSL's Curiosity rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
2011-11-17
CAPE CANAVERAL, Fla. -- At Space Launch Complex-41 on Cape Canaveral Air Force Station, spacecraft technicians in the Vertical Integration Facility prepare to install the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission on the Curiosity rover. The MMRTG is enclosed in a protective mesh container, known as the "gorilla cage," which protects it during transport and allows any excess heat generated to dissipate into the air. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
Bat Surveys of Retired Facilities Scheduled for Demolition by Washington Closure Hanford
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gano, K. A.; Lucas, J. G.; Lindsey, C. T.
2011-06-30
This project was conducted to evaluate buildings and facilities remaining in the Washington Closure Hanford (WCH) deactivation, decontamination, decommissioning, and demolition schedule for bat roost sites. The project began in the spring of 2009 and was concluded in the spring of 2011. A total of 196 buildings and facilities were evaluated for the presence of bat roosting sites. The schedule for the project was prioritized to accommodate the demolition schedule. As the surveys were completed, the results were provided to the project managers to facilitate planning and project completion. The surveys took place in the 300 Area, 400 Area, 100-H, 100-D, 100-N, and 100-B/C Areas. This report is the culmination of all the bat surveys and summarizes the findings by area, and it includes recommended mitigation actions where bat roosts were found.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yosep; Choi, Junhyun; Tong, Meiping, E-mail: tongmeiping@iee.pku.edu.cn
2014-04-01
Millimeter-sized spherical silica foams (SSFs) with a hierarchical multi-modal pore structure featuring high specific surface area and ordered mesoporous frameworks were successfully prepared using aqueous agar addition, foaming and drop-in-oil processes. The pore-related properties of the prepared spherical silica (SSs) and SSFs were systematically characterized by field emission-scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM), small-angle X-ray diffraction (SAXRD), Hg intrusion porosimetry, and N{sub 2} adsorption–desorption isotherm measurements. Improvements in the BET surface area and total pore volume were observed, at 504 m{sup 2} g{sup −1} and 5.45 cm{sup 3} g{sup −1}, respectively, after the agar addition and foaming process. Despite the increase in the BET surface area, the mesopore wall thickness and the pore size of the mesopores generated from the block copolymer with agar addition were unchanged based on the SAXRD, TEM, and BJH methods. The SSFs prepared in the present study were confirmed to have improved BET surface area and micropore volume through the agar loading, and to exhibit an interconnected 3-dimensional network macropore structure leading to the enhancement of total porosity and BET surface area via the foaming process. - Highlights: • Millimeter-sized spherical silica foams (SSFs) are successfully prepared. • SSFs exhibit high BET surface area and ordered hierarchical pore structure. • Agar addition improves BET surface area and micropore volume of SSFs. • Foaming process generates interconnected 3-D network macropore structure of SSFs.
Scheduling the resident 80-hour work week: an operations research algorithm.
Day, T Eugene; Napoli, Joseph T; Kuo, Paul C
2006-01-01
The resident 80-hour work week requires that programs now schedule duty hours. Typically, scheduling is performed in an empirical "trial-and-error" fashion. However, this is a classic "scheduling" problem from the field of operations research (OR). It is similar to scheduling issues that airlines must face with pilots and planes routing through various airports at various times. The authors hypothesized that an OR approach using iterative computer algorithms could provide a rational scheduling solution. Institution-specific constraints of the residency problem were formulated. A total of 56 residents are rotating through 4 hospitals. Additional constraints were dictated by the Residency Review Committee (RRC) rules or the specific surgical service. For example, at Hospital 1, during the weekday hours between 6 am and 6 pm, there will be a PGY4 or PGY5 and a PGY2 or PGY3 on-duty to cover Service "A." A series of equations and logic statements was generated to satisfy all constraints and requirements. These were restated in the Optimization Programming Language used by the ILOG software suite for solving mixed integer programming problems. An integer programming solution was generated to this resource-constrained assignment problem. A total of 30,900 variables and 12,443 constraints were required. A total of man-hours of programming were used; computer run-time was 25.9 hours. A weekly schedule was generated for each resident that satisfied the RRC regulations while fulfilling all stated surgical service requirements. Each required between 64 and 80 weekly resident duty hours. The authors conclude that OR is a viable approach to schedule resident work hours. This technique is sufficiently robust to accommodate changes in resident numbers, service requirements, and service and hospital rotations.
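A drastically scaled-down version of the feasibility side of this assignment problem can be enumerated directly; real instances of the paper's size (30,900 variables) need a mixed integer programming solver, and the constraints below are simplified stand-ins for the RRC rules:

```python
# Toy sketch of the resource-constrained assignment: enumerate weekly
# single-shift coverage for a two-resident toy service under two ACGME-style
# constraints (80-hour cap, at least one day off). All numbers are invented.
from itertools import product

RESIDENTS = ["R1", "R2"]
DAYS = 7
SHIFT_HOURS = 12
MAX_WEEKLY_HOURS = 80

def feasible(assignment):
    """assignment[d] is the resident covering day d's 12-hour shift."""
    for r in RESIDENTS:
        worked = [d for d in range(DAYS) if assignment[d] == r]
        if len(worked) * SHIFT_HOURS > MAX_WEEKLY_HOURS:
            return False                    # 80-hour weekly cap
        if len(worked) == DAYS:
            return False                    # mandatory day off
    return True

schedules = [a for a in product(RESIDENTS, repeat=DAYS) if feasible(a)]
# only the two "one resident works every day" assignments are infeasible
```

An integer program expresses the same feasibility region with binary variables and linear constraints, then optimizes over it instead of enumerating.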
Optimization of NAS Lemoore scheduling to support a growing aircraft population
2017-03-01
requirements, and, without knowing the other squadrons' flight plans, creates his or her squadron's flight schedule. Figure 2 illustrates the process each... Lemoore, they do not communicate their flight schedules among themselves; hence, the daily flight plan generated by each squadron is independently... manual process for aircraft flight scheduling at Naval Air Station (NAS) Lemoore accommodates the independent needs of 16 fighter resident squadrons as
Wireless Sensor Network Metrics for Real-Time Systems
2009-05-20
to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching... a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate
Multi-exciton emission from solitary dopant states of carbon nanotubes.
Ma, Xuedan; Hartmann, Nicolai F; Velizhanin, Kirill A; Baldwin, Jon K S; Adamska, Lyudmyla; Tretiak, Sergei; Doorn, Stephen K; Htoon, Han
2017-11-02
By separating the photons from slow and fast decays of single- and multi-exciton states in a time-gated 2nd-order photon correlation experiment, we show that solitary oxygen dopant states of single-walled carbon nanotubes (SWCNTs) allow emission of photon pairs with efficiencies as high as 44% of single exciton emission. Our pump-dependent time-resolved photoluminescence (PL) studies further reveal diffusion-limited exciton-exciton annihilation as the key process that limits the emission of multi-excitons at high pump fluences. We further postulate that the creation of additional permanent exciton quenching sites under intense laser irradiation leads to permanent PL quenching. With this work, we bring out multi-excitonic processes of solitary dopant states as a new area to be explored for potential applications in lasing and entangled photon generation.
Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin
2016-02-01
The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, the solutions obtained from traditional scheduling techniques become ineffective whenever changes occur in the system. Therefore, this research intends to develop a decision support tool (DST), based on promising artificial intelligence techniques, that is able to accommodate the dynamics that regularly occur in job shop scheduling problems. The DST was designed through three phases, i.e. (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as a part of the development of the DST. A discrete event simulation model was used to compare performance among the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job order requirements (inter-arrival time, due date tightness and setup time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job order requirements will be mapped using an ANN inverse model.
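As an illustration of the kind of dispatching-rule comparison described above, the sketch below simulates a single machine under the SPT, EDD and FCFS rules and reports mean flow time and mean tardiness for each. It is a minimal stand-in, not the paper's 6/6/J/Cmax experiment; the jobs and their numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float
    proc: float   # processing time
    due: float    # due date

def simulate(jobs, rule):
    """Run a single-machine dispatching simulation and return
    (mean flow time, mean tardiness) under the given priority rule."""
    pending, done, t = [], [], 0.0
    remaining = sorted(jobs, key=lambda j: j.arrival)
    while remaining or pending:
        # admit jobs that have arrived by time t
        while remaining and remaining[0].arrival <= t:
            pending.append(remaining.pop(0))
        if not pending:              # machine idle: jump to next arrival
            t = remaining[0].arrival
            continue
        nxt = min(pending, key=rule) # dispatch by the rule's priority key
        pending.remove(nxt)
        t += nxt.proc
        done.append((t - nxt.arrival, max(0.0, t - nxt.due)))
    flows, tards = zip(*done)
    return sum(flows) / len(flows), sum(tards) / len(tards)

jobs = [Job(0, 4, 10), Job(1, 2, 5), Job(2, 6, 20), Job(3, 1, 6)]
rules = {
    "SPT":  lambda j: j.proc,     # shortest processing time
    "EDD":  lambda j: j.due,      # earliest due date
    "FCFS": lambda j: j.arrival,  # first come, first served
}
for name, rule in rules.items():
    mft, mt = simulate(jobs, rule)
    print(name, round(mft, 2), round(mt, 2))
```

On this toy instance SPT minimizes mean flow time while FCFS performs worst on both measures, the pattern such look-up tables are meant to capture.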
Belke, Terry W; Pierce, W David
2016-12-01
Rats responded on a multiple variable-ratio (VR) 10 VR 10 schedule of reinforcement in which lever pressing was reinforced by the opportunity to run in a wheel for 30 s in both the changed (manipulated) and unchanged components. To generate positive contrast, the schedule of reinforcement in the changed component was shifted to extinction; to generate negative contrast, the schedule was shifted to VR 3. With the shift to extinction in the changed component, wheel-running and local lever-pressing rates increased in the unchanged component, a result supporting positive contrast; however, the shift to a VR 3 schedule in the changed component showed no evidence of negative contrast in the unaltered setting; only wheel running decreased in the unchanged component. Changes in wheel-running rates across components were consistent in showing a compensation effect, depending on whether the schedule manipulation increased or decreased opportunities for wheel running in the changed component. These findings are the first to demonstrate positive behavioral contrast on a multiple schedule with wheel running as reinforcement in both components. Copyright © 2016 Elsevier B.V. All rights reserved.
Software defined multi-OLT passive optical network for flexible traffic allocation
NASA Astrophysics Data System (ADS)
Zhang, Shizong; Gu, Rentao; Ji, Yuefeng; Zhang, Jiawei; Li, Hui
2016-10-01
With the rapid growth of 4G mobile networks and vehicular network services, mobile terminal users have an increasing demand for data sharing among different radio remote units (RRUs) and roadside units (RSUs). Meanwhile, commercial video-streaming and video/voice conference applications delivered through peer-to-peer (P2P) technology keep stimulating a sharp increase in the bandwidth demand of both business and residential subscribers. However, a significant issue is that, although wavelength division multiplexing (WDM) and orthogonal frequency division multiplexing (OFDM) technologies have been proposed to fulfil the ever-increasing bandwidth demand in the access network, the bandwidth of optical fiber is not unlimited, owing to the restrictions of optical component properties and modulation/demodulation technology, and blindly increasing the number of wavelengths cannot meet the cost-sensitive character of the access network. In this paper, we propose a software-defined multi-OLT PON architecture to support efficient scheduling of access network traffic. By introducing software-defined networking technology and a wavelength selective switch into the TWDM PON system in the central office, multiple OLTs can be treated as a bandwidth resource pool that supports flexible traffic allocation for optical network units (ONUs). Moreover, under the configuration of the control plane, ONUs can change their affiliation between different OLTs under different traffic situations, so inter-OLT traffic can be localized and the data exchange pressure on the core network can be relieved. Because this architecture is designed to follow the TWDM PON specification as closely as possible, existing optical distribution network (ODN) investment can be preserved and conventional EPON/GPON equipment remains compatible with the proposed architecture. 
Moreover, based on this architecture, we propose a dynamic wavelength scheduling algorithm, which can be deployed as an application on the control plane and achieves effective scheduling of wavelength resources between different OLTs under various traffic situations. Simulation results show that, by using the scheduling algorithm, network traffic between different OLTs can be optimized effectively, and the wavelength utilization of the multi-OLT system is improved thanks to the flexible wavelength scheduling.
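The flexible traffic-reallocation idea can be sketched as a toy greedy heuristic that re-homes ONUs from overloaded OLTs to lightly loaded ones in the shared pool. This is an invented illustration, not the scheduling algorithm proposed in the paper; all names, capacities and demands are hypothetical.

```python
def rebalance(capacity, demand, home):
    """Greedily re-home ONUs from overloaded OLTs to the least-loaded
    OLT with spare capacity.  Returns the new ONU -> OLT mapping."""
    home = dict(home)
    load = {olt: 0.0 for olt in capacity}
    for onu, olt in home.items():
        load[olt] += demand[onu]
    # visit the heaviest ONUs first: moving them relieves the most load
    for onu in sorted(demand, key=demand.get, reverse=True):
        src = home[onu]
        if load[src] <= capacity[src]:
            continue                  # source OLT is not congested
        dst = min(capacity, key=lambda o: load[o] / capacity[o])
        if dst != src and load[dst] + demand[onu] <= capacity[dst]:
            home[onu] = dst           # re-home the ONU onto dst
            load[src] -= demand[onu]
            load[dst] += demand[onu]
    return home

capacity = {"OLT1": 10.0, "OLT2": 10.0}   # per-OLT capacity (invented units)
demand = {"a": 6.0, "b": 5.0, "c": 2.0}   # per-ONU traffic demand
print(rebalance(capacity, demand, {"a": "OLT1", "b": "OLT1", "c": "OLT1"}))
```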
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoddard, Larry; Galluzzo, Geoff; Andrew, Daniel
The Department of Energy’s (DOE’s) Office of Renewable Power (ORP) has been tasked to provide effective program management and strategic direction for all of the DOE’s Energy Efficiency & Renewable Energy’s (EERE’s) renewable power programs. The ORP’s efforts to accomplish this mission are aligned with national energy policies, DOE strategic planning, EERE’s strategic planning, Congressional appropriation, and stakeholder advice. ORP is supported by three renewable energy offices, one of which is the Solar Energy Technology Office (SETO), whose SunShot Initiative has a mission to accelerate research, development and large-scale deployment of solar technologies in the United States. SETO has a goal of reducing the cost of Concentrating Solar Power (CSP) by 75 percent relative to 2010 costs by 2020, to reach parity with base-load energy rates, with a further 30 percent reduction by 2030. The SunShot Initiative is promoting the implementation of high-temperature CSP with thermal energy storage, allowing generation during high-demand hours. The SunShot Initiative has funded significant research and development work on component testing, with attention to high-temperature molten salts, heliostats, receiver designs, and high-efficiency high-temperature supercritical CO2 (sCO2) cycles. DOE retained Black & Veatch to support SETO’s SunShot Initiative for CSP solar power tower technology in the following areas: 1. Concept definition, including costs and schedule, of a flexible test facility to be used to test and prove components, in part to support financing. 2. Concept definition, including costs and schedule, of an integrated high-temperature molten salt (MS) facility with thermal energy storage and a supercritical CO2 cycle generating approximately 10 MWe. 3. Concept definition, including costs and schedule, of an integrated high-temperature falling particle facility with thermal energy storage and a supercritical CO2 cycle generating approximately 10 MWe. 
This report addresses the concept definition of the sCO2 power generation system, a subset of items 2 and 3 above. Other reports address the balance of items 1 to 3 above, as well as the MS/sCO2 integrated 10 MWe facility (item 2).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
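A much-reduced sketch of makespan-oriented DAG scheduling is the classic list scheduler below, which repeatedly assigns a ready task to the earliest-free processor. It omits the data-parallel processor allocation, scalability models, locality and communication costs that are central to the paper's algorithm; the task graph and runtimes are invented.

```python
def list_schedule(tasks, deps, nproc):
    """Schedule a DAG of tasks (name -> runtime) on nproc identical
    processors; deps maps a task to its parent tasks.  Returns the makespan."""
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for child, parents in deps.items():
        for p in parents:
            succ[p].append(child)
            indeg[child] += 1
    finish = {}
    procs = [0.0] * nproc     # per-processor earliest free time
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        # heuristic: run the longest ready task first
        t = max(ready, key=tasks.get)
        ready.remove(t)
        est = max([finish[p] for p in deps.get(t, [])], default=0.0)
        i = min(range(nproc), key=lambda k: procs[k])
        start = max(est, procs[i])       # wait for parents and the processor
        finish[t] = start + tasks[t]
        procs[i] = finish[t]
        for c in succ[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return max(finish.values())

tasks = {"A": 2, "B": 3, "C": 2, "D": 1}   # task -> runtime estimate
deps = {"C": ["A", "B"], "D": ["A"]}       # task -> parent tasks
print(list_schedule(tasks, deps, nproc=2))
```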
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido
2016-04-01
Flooding is one of the most frequent and expensive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flood, requires modelling the behavior of the different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information, or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. 
It supplies flood maps describing the dynamics of each analysed event, combining time series of images, acquired by different sensors, with ancillary information. Experiments have been performed by combining multi-temporal SAR intensity images, InSAR coherence and optical data with geomorphic and other ground information. The tool has been tested on different flood events that occurred in the Basilicata region (Italy) in recent years, showing good capability to identify large areas affected by the flood phenomenon and partially overcoming the obstacle constituted by the presence of scattering/coherence classes corresponding to different land cover types, which respond differently to the presence of water and to inundation evolution. [1] A. Refice et al., IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 7, pp. 2711-2722, 2014. [2] L. Pulvirenti et al., IEEE Trans. Geosci. Rem. Sens., vol. PP, pp. 1-13, 2015. [3] A. D'Addabbo et al., "A Bayesian Network for Flood Detection combining SAR Imagery and Ancillary Data," IEEE Trans. Geosci. Rem. Sens., in press.
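The data-fusion step of a discrete Bayesian network can be illustrated by inference-by-enumeration over a tiny two-evidence network, with dark SAR backscatter and low-lying terrain as children of a flood variable. The structure and the probability tables below are invented for illustration and are not the networks of [3].

```python
def posterior_flood(p_flood, p_dark_sar, p_low_terrain, dark_sar, low_terrain):
    """P(flood | evidence) by enumeration in a two-evidence Bayesian
    network: flood -> dark SAR backscatter, flood -> low terrain."""
    joint = {}
    for flood in (True, False):
        prior = p_flood if flood else 1.0 - p_flood
        l1 = p_dark_sar[flood] if dark_sar else 1.0 - p_dark_sar[flood]
        l2 = p_low_terrain[flood] if low_terrain else 1.0 - p_low_terrain[flood]
        joint[flood] = prior * l1 * l2   # joint probability of this state
    return joint[True] / (joint[True] + joint[False])  # normalize

# invented conditional probability tables
P_DARK = {True: 0.9, False: 0.2}   # dark SAR return given flood / no flood
P_LOW = {True: 0.8, False: 0.4}    # low-lying terrain given flood / no flood
print(posterior_flood(0.1, P_DARK, P_LOW, dark_sar=True, low_terrain=True))
```

With both pieces of evidence present, the two observations together lift a 10% prior to an even posterior; ancillary data (here, terrain) is what tips pixels that SAR alone leaves ambiguous.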
Kesterton, Amy J; Cabral de Mello, Meena
2010-09-24
This review investigates the effectiveness of interventions aimed at generating demand for and use of sexual and reproductive health (SRH) services by young people, and of interventions aimed at generating wider community support for their use. Reports and publications were found in the peer-reviewed and grey literature through academic search engines; web searches; the bibliographies of known conference proceedings and papers; and consultation with experts. The studies were reviewed against a set of inclusion criteria, and those that met these were explored in more depth. The evidence base for interventions aimed at both generating demand and community support for SRH services for young people was found to be under-developed, and many available studies do not provide strong evidence. However, the potential of several methods to increase youth uptake has been demonstrated; these include the linking of school education programs with youth-friendly services, life-skills approaches, and social marketing and franchising. There is also evidence that the involvement of key community gatekeepers such as parents and religious leaders is vital to generating wider community support. In general, a combined multi-component approach seems most promising, with several success stories to build on. Many areas for further research have been highlighted, and there is a great need for more rigorous evaluation of programmes in this area. In particular, further evaluation of individual components within a multi-component approach is needed to elucidate the most effective interventions.
2010-01-01
Background This review investigates the effectiveness of interventions aimed at generating demand for and use of sexual and reproductive health (SRH) services by young people, and of interventions aimed at generating wider community support for their use. Methods Reports and publications were found in the peer-reviewed and grey literature through academic search engines; web searches; the bibliographies of known conference proceedings and papers; and consultation with experts. The studies were reviewed against a set of inclusion criteria, and those that met these were explored in more depth. Results The evidence base for interventions aimed at both generating demand and community support for SRH services for young people was found to be under-developed, and many available studies do not provide strong evidence. However, the potential of several methods to increase youth uptake has been demonstrated; these include the linking of school education programs with youth-friendly services, life-skills approaches, and social marketing and franchising. There is also evidence that the involvement of key community gatekeepers such as parents and religious leaders is vital to generating wider community support. In general, a combined multi-component approach seems most promising, with several success stories to build on. Conclusions Many areas for further research have been highlighted, and there is a great need for more rigorous evaluation of programmes in this area. In particular, further evaluation of individual components within a multi-component approach is needed to elucidate the most effective interventions. PMID:20863411
NASA Astrophysics Data System (ADS)
Klug, P.; Schlenz, F.; Hank, T.; Migdall, S.; Weiß, I.; Danner, M.; Bach, H.; Mauser, W.
2016-08-01
The analysis system developed in the frame of the M4Land project (Model-based, Multi-temporal, Multi-scale and Multi-sensorial retrieval of continuous land management information) has proven its capability to classify crop type and to create products on the intensity of agricultural production using optical remote sensing data from Landsat and RapidEye. In this study, Sentinel-2 data are used for the first time, together with Landsat 7 ETM+ and 8 OLI data, within the M4Land analysis system to continuously derive the crop type and agricultural intensity of fields in an area north of Munich, Germany, for the year 2015.
Energy Balance of Rural Ecosystems In India
NASA Astrophysics Data System (ADS)
Chhabra, A.; Madhava Rao, V.; Hermon, R. R.; Garg, A.; Nag, T.; Bhaskara Rao, N.; Sharma, A.; Parihar, J. S.
2014-11-01
India is predominantly an agricultural and rural country. Across the country, the villages vary in geographical location, area, human and livestock population, availability of resources, agricultural practices, livelihood patterns, etc. This study presents an estimation of the net energy balance resulting from primary production vis-a-vis energy consumption through various components in a "Rural Ecosystem". Seven sites located in different agroclimatic regions of India were studied. An end-use energy accounting "Rural Energy Balance Model" is developed for input-output analysis of the various energy flows of production, consumption, import and export through the components of crops, trees-outside-forest plantations, livestock, rural households, and industry or trade within the village system boundary. An integrated approach using field, ancillary, GIS and high-resolution IRS-P6 Resourcesat-2 LISS IV data is adopted for generation of the various model inputs. Primary and secondary field data on various energy uses at the household and village level were collected using structured schedules and questionnaires. High-resolution multi-temporal Resourcesat-2 LISS IV data (2013-14) were used for generating land use/land cover maps and for estimating above-ground Trees Outside Forests phytomass. The model inputs were converted to energy equivalents using country-specific energy conversion factors. A comprehensive geotagged database of sampled households and available resources at each study site was also developed in an ArcGIS framework. Across the study sites, the estimated net energy balance ranged from -18.8 terajoules (TJ) in the high-energy-consuming village of Hodka, Gujarat, to 224.7 TJ in the agriculture-, aquaculture- and plantation-intensive village of Kollaparru, Andhra Pradesh. The results indicate that the net energy balance of a Rural Ecosystem is largely driven by primary production through crops and natural vegetation. 
This study provides a significant insight to policy relevant recommendations for Energy Sustainable Rural India.
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
A Market-Based Approach to Multi-factory Scheduling
NASA Astrophysics Data System (ADS)
Vytelingum, Perukrishnen; Rogers, Alex; MacBeth, Douglas K.; Dutta, Partha; Stranjak, Armin; Jennings, Nicholas R.
In this paper, we report on the design of a novel market-based approach for decentralised scheduling across multiple factories. Specifically, because of the limitations of scheduling in a centralised manner - which requires a center to have complete and perfect information for optimality and the truthful revelation of potentially commercially private preferences to that center - we advocate an informationally decentralised approach that is both agile and dynamic. In particular, this work adopts a market-based approach for decentralised scheduling by considering the different stakeholders representing different factories as self-interested, profit-motivated economic agents that trade resources for the scheduling of jobs. The overall schedule of these jobs is then an emergent behaviour of the strategic interaction of these trading agents bidding for resources in a market based on limited information and their own preferences. Using a simple (zero-intelligence) bidding strategy, we empirically demonstrate that our market-based approach achieves a lower bound efficiency of 84%. This represents a trade-off between a reasonable level of efficiency (compared to a centralised approach) and the desirable benefits of a decentralised solution.
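A zero-intelligence double auction of the kind used as the bidding strategy can be sketched as follows; efficiency is measured, as in the paper, as the fraction of the maximum possible trade surplus that is realized. The single-commodity market below, with invented job values and factory resource costs, is a drastic simplification of the multi-factory resource market.

```python
import random

def zi_market(values, costs, rounds=2000, seed=1):
    """Zero-intelligence double auction: buyers bid U(0, value), sellers
    ask U(cost, max_price); a trade clears whenever bid >= ask.
    Returns realized surplus / maximum possible surplus."""
    rng = random.Random(seed)
    hi = max(values)
    # maximum surplus: match highest values with lowest costs while profitable
    max_surplus = sum(v - c
                      for v, c in zip(sorted(values, reverse=True), sorted(costs))
                      if v > c)
    buyer_vals, seller_costs, surplus = list(values), list(costs), 0.0
    for _ in range(rounds):
        if not buyer_vals or not seller_costs:
            break
        b = rng.randrange(len(buyer_vals))
        s = rng.randrange(len(seller_costs))
        bid = rng.uniform(0, buyer_vals[b])          # ZI: random but budget-bounded
        ask = rng.uniform(seller_costs[s], hi)       # ZI: random but cost-bounded
        if bid >= ask:                               # trade clears
            surplus += buyer_vals[b] - seller_costs[s]
            buyer_vals.pop(b)
            seller_costs.pop(s)
    return surplus / max_surplus

# job values held by buyer agents, resource costs of seller (factory) agents
print(zi_market([10, 8, 6, 4], [2, 3, 5, 9]))
```

Because a bid never exceeds the buyer's value and an ask never falls below the seller's cost, every cleared trade adds non-negative surplus, yet random matching leaves some surplus unrealized, which is the source of the efficiency gap the paper quantifies.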
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc. are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net that is practically unmanageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in analyzing each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem in a manufacturing work cell.
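The branch-and-bound component can be illustrated on a toy problem: assigning job durations to one of two robots so as to minimize the makespan. The sketch prunes any partial assignment whose makespan already matches the incumbent; the Petri net execution, precedence constraints and truncation technique of the paper are not modeled here.

```python
def bb_makespan(jobs):
    """Branch and bound: assign each job duration to one of two robots,
    minimizing the larger total load (the makespan)."""
    jobs = sorted(jobs, reverse=True)    # big jobs first tightens bounds early
    best = [sum(jobs)]                   # incumbent: everything on one robot

    def branch(i, a, b):
        if max(a, b) >= best[0]:         # bound: cannot improve the incumbent
            return
        if i == len(jobs):               # leaf: all jobs assigned
            best[0] = max(a, b)
            return
        branch(i + 1, a + jobs[i], b)    # put job i on robot A
        branch(i + 1, a, b + jobs[i])    # ... or on robot B

    branch(0, 0, 0)
    return best[0]

print(bb_makespan([4, 3, 3, 2]))
```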
An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balman, Mehmet; Kosar, Tevfik
Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high-performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
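The advance-reservation admission test that such a model relies on can be sketched as an interval check on a single link: a transfer is accepted only if the already-reserved bandwidth plus the request never exceeds capacity anywhere in the requested window. This is an invented simplification of the paper's scheduler, which spans multiple resources and reservation managers.

```python
class LinkReservations:
    """Advance bandwidth reservations on one link: admit a transfer only
    if capacity is never exceeded anywhere in its requested window."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.booked = []              # accepted (start, end, bandwidth) triples

    def usage(self, t):
        """Total bandwidth reserved at instant t."""
        return sum(bw for s, e, bw in self.booked if s <= t < e)

    def reserve(self, start, end, bw):
        # reserved bandwidth only changes at interval endpoints, so checking
        # the start of each overlapping sub-interval is sufficient
        points = {start} | {s for s, _, _ in self.booked if start < s < end}
        if all(self.usage(t) + bw <= self.capacity for t in points):
            self.booked.append((start, end, bw))
            return True
        return False

link = LinkReservations(capacity=10.0)    # e.g. 10 Gb/s on one path (invented)
print(link.reserve(0, 10, 6))             # admitted
print(link.reserve(5, 15, 6))             # rejected: would exceed capacity
```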
Yu, Dantong; Katramatos, Dimitrios; Sim, Alexander; Shoshani, Arie
2014-04-22
A cross-domain network resource reservation scheduler configured to schedule a path from at least one end-site includes a management plane device configured to monitor and provide information representing at least one of functionality, performance, faults, and fault recovery associated with a network resource; a control plane device configured to at least one of schedule the network resource, provision local area network quality of service, provision local area network bandwidth, and provision wide area network bandwidth; and a service plane device configured to interface with the control plane device to reserve the network resource based on a reservation request and the information from the management plane device. Corresponding methods and computer-readable media are also disclosed.
Aerosol generation and measurement of multi-wall carbon nanotubes
NASA Astrophysics Data System (ADS)
Myojo, Toshihiko; Oyabu, Takako; Nishi, Kenichiro; Kadoya, Chikara; Tanaka, Isamu; Ono-Ogasawara, Mariko; Sakae, Hirokazu; Shirai, Tadashi
2009-01-01
Mass production of some kinds of carbon nanotubes (CNTs) is now imminent, but little is known about the risk associated with exposure to them. It is important to assess the propensity of a CNT to release particles into air for its risk assessment. In this study, we aerosolized a multi-walled CNT (MWCNT) to assess several aerosol measuring instruments. A Palas RBG-1000 aerosol generator applied mechanical stress to the MWCNT with a rotating brush at feed rates ranging from 2 to 20 mm/h, after which the MWCNT was fed to a two-component fluidized bed. The fluidized-bed aerosol generator was used to disperse the MWCNT aerosol once more. We monitored the generated MWCNT aerosol concentrations on a number, area, and mass basis using a condensation particle counter and a nanoparticle surface area monitor. We also quantified the carbon mass in MWCNT aerosol samples with a carbon monitor. The shape of the aerosolized MWCNT fibers was observed with a scanning electron microscope (SEM). The MWCNT was well dispersed by our system. We found isolated MWCNT fibers in the aerosols by SEM, and the count median lengths of the MWCNT fibers were 4-6 μm. The MWCNT was quantified by the carbon monitor under a modified condition based on the NIOSH analytical manual. The MWCNT aerosol concentration (EC mass basis) was 4 mg/m3 at a feed rate of 2 mm/h in this study.
Reactive Scheduling in Multipurpose Batch Plants
NASA Astrophysics Data System (ADS)
Narayani, A.; Shaik, Munawar A.
2010-10-01
Scheduling is an important operation in process industries for improving resource utilization resulting in direct economic benefits. It has a two-fold objective of fulfilling customer orders within the specified time as well as maximizing the plant profit. Unexpected disturbances such as machine breakdown, arrival of rush orders and cancellation of orders affect the schedule of the plant. Reactive scheduling is generation of a new schedule which has minimum deviation from the original schedule in spite of the occurrence of unexpected events in the plant operation. Recently, Shaik & Floudas (2009) proposed a novel unified model for short-term scheduling of multipurpose batch plants using unit-specific event-based continuous time representation. In this paper, we extend the model of Shaik & Floudas (2009) to handle reactive scheduling.
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Landry, Steven J.
2005-01-01
The Multi-center Traffic Management Advisor (McTMA) provides a platform for regional or national traffic flow management by allowing long-range cooperative time-based metering to constrained resources, such as airports or air traffic control center boundaries. Part of the demand for resources is made up of proposed departures, whose actual departure time is difficult to predict. For this reason, McTMA does not schedule the departures in advance, but rather relies on traffic managers to input their requested departure time. Because this happens only a short while before the aircraft's actual departure, McTMA is unable to accurately predict the amount of delay airborne aircraft will need to take in order to accommodate the departures. The proportion of demand made up of such proposed departures increases as the horizon over which metering occurs gets larger. This study provides an initial analysis of the severity of this problem in a 400-500 nautical mile metering horizon and discusses potential solutions to accommodate these departures. The challenge is to smoothly incorporate departures into the airborne stream while not excessively delaying the departures. In particular, three solutions are reviewed: (1) scheduling the departures at their proposed departure time; (2) not scheduling the departures in advance; and (3) scheduling the departures at some time in the future based on an estimated error in their proposed time. The first solution is to have McTMA automatically schedule the departures at their proposed departure times. Since the proposed departure times are indicated in their flight plans in advance, this method is the simplest, but studies have shown that these proposed times are often incorrect. The second option is the current practice, which avoids these inaccuracies by only scheduling aircraft when a confirmed prediction of departure time is obtained from the tower of the departure airport. 
Lastly, McTMA can schedule the departures at a predicted departure time based on statistical data on past departure time performance. It has been found that departures usually have a wheels-up time after their indicated proposed departure time, as shown in Figure 1. Hence, the departures were scheduled at a time in the future based on the mean error in proposed departure times for their airport.
GeMS: Gemini Mcao System: current status and commissioning plans
NASA Astrophysics Data System (ADS)
Boccas, Maxime; Rigaut, François; Gratadour, Damien; d'Orgeville, Céline; Bec, Matthieu; Daruich, Felipe; Perez, Gabriel; Arriagada, Gustavo; Bombino, Stacy; Carter, Chris; Cavedoni, Chas; Collao, Fabian; Collins, Paul; Diaz, Pablo; Ebbers, Angelic; Galvez, Ramon; Gausachs, Gaston; Hardash, Steve; James, Eric; Karewicz, Stan; Lazo, Manuel; Maltes, Diego; Mouser, Ron; Rogers, Rolando; Rojas, Roberto; Sheehan, Michael; Trancho, Gelys; Vergara, Vicente; Vucina, Tomislav
2008-07-01
The Gemini Multi-Conjugate Adaptive Optics project was launched in April 1999 to become the Gemini South AO facility in Chile. The system includes 5 laser guide stars, 3 natural guide stars and 3 deformable mirrors optically conjugated at 0, 4.5 and 9 km to achieve near-uniform atmospheric compensation over a 1 arcminute square field of view. Sub-contracted systems with vendors were started as early as October 2001 and were all delivered by July 2007, except for the 50 W laser (due around September 2008). The in-house development began in January 2006 and is expected to be completed by the end of 2008, to continue with integration and testing (I&T) on the telescope. The on-sky commissioning phase is scheduled to start during the first half of 2009. In this general overview, we first describe the status of each subsystem with its major requirements, risk areas and achieved performance. Next we present our plan to complete the project by reviewing the remaining steps through I&T and commissioning on the telescope, both during day-time and at night-time. Finally, we summarize some management activities, such as schedules and resources, and conclude with some lessons learned.
SPIKE: AI scheduling techniques for Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Johnston, Mark D.
1991-09-01
AI (Artificial Intelligence) scheduling techniques for HST are presented in the form of viewgraphs. The following subject areas are covered: domain; HST constraint timescales; HST scheduling; SPIKE overview; SPIKE architecture; constraint representation and reasoning; use of suitability functions by the scheduling agent; SPIKE screen example; advantages of the suitability function framework; limiting search and constraint propagation; scheduling search; stochastic search; repair methods; implementation; and status.
2018-01-01
In this work, a multi-hop string network with a single sink node is analyzed. A periodic optimal schedule for TDMA operation that accounts for the characteristically long propagation delay of the underwater acoustic channel is presented. The transmission plan is obtained with the help of a new geometrical method based on a 2D lattice in the space-time domain. In order to evaluate the performance of this optimal schedule, two service policies have been compared: FIFO and Round-Robin. Simulation results, including achievable throughput, packet delay, and queue length, are shown. The network fairness has also been quantified with the Gini index. PMID:29462966
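The Gini index used above to quantify network fairness can be computed directly from per-node throughputs. The sketch below implements the standard mean-difference formula; the throughput numbers are invented for illustration and are not the paper's simulation results:

```python
def gini(values):
    """Gini index of non-negative values: 0 = perfectly fair allocation."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard mean-difference formula over the sorted sample.
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Hypothetical per-node throughputs in a 4-hop string network.
fifo = [0.3, 0.5, 0.7, 0.9]         # nodes far from the sink starve
round_robin = [0.6, 0.6, 0.6, 0.6]  # equal shares
print(gini(fifo), gini(round_robin))
```

A lower index indicates a fairer division of the channel among the nodes; an equal split yields exactly zero.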
JIGSAW: Preference-directed, co-operative scheduling
NASA Technical Reports Server (NTRS)
Linden, Theodore A.; Gaw, David
1992-01-01
Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods--allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.
NASA Astrophysics Data System (ADS)
Liu, Jiping; Kang, Xiaochen; Dong, Chun; Xu, Shenghua
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm due to limited physical memory and very slow disk transfer rates. In this paper, we proposed a stream tiling approach to surface area estimation that first decomposed a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping between the input and the computing process was broken. Then, we realized a streaming framework for scheduling the I/O processes and computing units. Herein, each computing unit encapsulated an identical copy of the estimation algorithm, and multiple asynchronous computing units could work individually in parallel. Finally, experiments demonstrated that our stream tiling estimation can efficiently relieve the heavy pressure of I/O-bound work, and the measured speedup of the optimized version greatly outperformed directly parallelized versions on shared-memory systems with multi-core processors.
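The decoupling of the I/O stage from the computing units can be sketched with a bounded work queue feeding asynchronous workers. This is a minimal illustration of the scheduling pattern only, with a placeholder per-tile estimator standing in for the real surface-area algorithm:

```python
import queue
import threading

def estimate_area(tile):
    # Placeholder per-tile estimator; the real surface-area algorithm
    # would process the tile's raster cells here.
    return sum(tile)

def worker(q, results):
    while True:
        tile = q.get()
        if tile is None:  # sentinel: no more tiles for this worker
            break
        results.append(estimate_area(tile))

tiles = [[1, 2], [3, 4], [5]]
q = queue.Queue(maxsize=2)  # bounded queue decouples I/O from compute
results = []
workers = [threading.Thread(target=worker, args=(q, results))
           for _ in range(2)]
for w in workers:
    w.start()
for t in tiles:       # the "I/O stage": stream tiles into the queue
    q.put(t)
for _ in workers:     # one sentinel per worker
    q.put(None)
for w in workers:
    w.join()
print(sum(results))   # total estimated surface area
```

The bounded queue keeps the reader from outrunning the workers, which is the essence of breaking the one-to-one mapping between input and computation.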
Proportional fair scheduling algorithm based on traffic in satellite communication system
NASA Astrophysics Data System (ADS)
Pan, Cheng-Sheng; Sui, Shi-Long; Liu, Chun-ling; Shi, Yu-Xin
2018-02-01
In the downlink of a satellite communication network, multi-user access suffers from low system capacity and poor user fairness. Combining the characteristics of user data services, a scheduling algorithm addressing both throughput and user fairness is proposed: the Proportional Fairness Algorithm Based on Traffic (B-PF). The algorithm improves on the proportional fairness algorithm used in wireless communication systems by taking into account both the user's channel condition and its buffered traffic. The user's outgoing traffic serves as an adjustment factor in the scheduling priority, and the concept of traffic satisfaction is introduced. First, the algorithm computes each user's priority and dispatches the user with the highest priority. Second, when a scheduled user is already traffic-satisfied, the system dispatches the next-priority user instead. Simulation results show that, compared with the PF algorithm, B-PF improves system throughput, traffic satisfaction and fairness.
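The idea can be illustrated as a variant of the classic proportional-fair priority (instantaneous rate divided by average throughput) weighted by buffered traffic, with traffic-satisfied users skipped. This is a hedged reconstruction of the concept, not the authors' exact formula, and all numbers are invented:

```python
def schedule_next(users):
    """Pick the user maximizing (rate / avg_throughput) * buffered traffic,
    skipping users whose buffered traffic is already satisfied (empty)."""
    candidates = [u for u in users if u["buffer"] > 0]
    if not candidates:
        return None
    return max(candidates,
               key=lambda u: (u["rate"] / u["avg_tput"]) * u["buffer"])

users = [
    {"name": "A", "rate": 2.0, "avg_tput": 1.0, "buffer": 0.0},  # satisfied
    {"name": "B", "rate": 1.0, "avg_tput": 0.5, "buffer": 3.0},
    {"name": "C", "rate": 1.5, "avg_tput": 1.5, "buffer": 5.0},
]
print(schedule_next(users)["name"])  # "A" is skipped despite the best channel
```

Weighting by buffered traffic shifts capacity toward users who actually have data to send, which is what distinguishes B-PF from plain PF.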
Scheduling time-critical graphics on multiple processors
NASA Technical Reports Server (NTRS)
Meyer, Tom W.; Hughes, John F.
1995-01-01
This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
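A progressive-refinement schedule of this flavor can be sketched as a greedy loop that starts every object at its coarsest level (so a feasible schedule always exists) and repeatedly applies the refinement with the best marginal benefit per unit cost within a frame-time budget. This is an illustrative simplification under assumed cost/benefit tables, not the paper's algorithm:

```python
import heapq

def progressive_schedule(objects, budget):
    """objects[i] = list of (cost, benefit) per resolution level, coarsest
    first, with strictly increasing cost. Start every object at its coarsest
    level (assumed to always fit), then greedily apply the refinement with
    the best marginal benefit per unit cost while staying within budget."""
    level = [0] * len(objects)
    spent = sum(obj[0][0] for obj in objects)
    heap = []  # entries: (-marginal benefit per unit cost, object index)
    for i, obj in enumerate(objects):
        if len(obj) > 1:
            dc, db = obj[1][0] - obj[0][0], obj[1][1] - obj[0][1]
            heapq.heappush(heap, (-db / dc, i))
    while heap:
        _, i = heapq.heappop(heap)
        obj, l = objects[i], level[i]
        dc = obj[l + 1][0] - obj[l][0]
        if spent + dc > budget:
            continue  # skip refinements that would break the time budget
        spent += dc
        level[i] = l + 1
        if l + 2 < len(obj):  # queue the next refinement of this object
            dc2 = obj[l + 2][0] - obj[l + 1][0]
            db2 = obj[l + 2][1] - obj[l + 1][1]
            heapq.heappush(heap, (-db2 / dc2, i))
    return level, spent

objs = [[(1, 1), (3, 5), (6, 6)],  # object 0: three resolution levels
        [(1, 1), (2, 2)]]          # object 1: two resolution levels
print(progressive_schedule(objs, budget=5))
```

Because the loop only ever adds refinements it can afford, interrupting it at any point still leaves a feasible schedule, mirroring the anytime behavior the abstract describes.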
Automating Mid- and Long-Range Scheduling for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Sorensen, Sugi; Tay, Peter; Carruth, Butch; Coffman, Adam; Wallace, Mike
2012-01-01
NASA has recently deployed a new mid-range scheduling system for the antennas of the Deep Space Network (DSN), called Service Scheduling Software, or S³. This system is architected as a modern web application containing a central scheduling database integrated with a collaborative environment, exploiting the same technologies as social web applications but applied to a space operations context. This is highly relevant to the DSN domain since the network schedule of operations is developed in a peer-to-peer negotiation process among all users who utilize the DSN (representing 37 projects including international partners and ground-based science and calibration users). The initial implementation of S³ is complete and the system has been operational since July 2011. S³ has been used for negotiating schedules since April 2011, including the baseline schedules for three launching missions in late 2011. S³ supports a distributed scheduling model, in which changes can potentially be made by multiple users based on multiple schedule "workspaces" or versions of the schedule. This has led to several challenges in the design of the scheduling database, and of a change proposal workflow that allows users to concur with or to reject proposed schedule changes, and then counter-propose with alternative or additional suggested changes. This paper describes some key aspects of the S³ system and lessons learned from its operational deployment to date, focusing on the challenges of multi-user collaborative scheduling in a practical and mission-critical setting. We will also describe the ongoing project to extend S³ to encompass long-range planning, downtime analysis, and forecasting, as the next step in developing a single integrated DSN scheduling tool suite to cover all time ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk; Jensen, Morten Bang; Götze, Ramona
Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level tiered approach facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at "Level III", i.e. detailed, while the two others were sorted only at "Level I"). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities.
While the waste generation rates were similar for the two housing types (single-family and multi-family house areas), the percentage composition of food waste, paper, and glass differed significantly between the housing types. This indicates that housing type is a critical stratification parameter. Separating food leftovers from food packaging during manual sorting of the sampled waste did not significantly influence the proportions of food waste and packaging materials, indicating that this step may not be required.
Science with the VLA Sky Survey (VLASS)
NASA Astrophysics Data System (ADS)
Murphy, Eric J.; Baum, Stefi Alison; Brandt, W. Niel; Chandler, Claire J.; Clarke, Tracy E.; Condon, James J.; Cordes, James M.; Deustua, Susana E.; Dickinson, Mark; Gugliucci, Nicole E.; Hallinan, Gregg; Hodge, Jacqueline; Lang, Cornelia C.; Law, Casey J.; Lazio, Joseph; Mao, Sui Ann; Myers, Steven T.; Osten, Rachel A.; Richards, Gordon T.; Strauss, Michael A.; White, Richard L.; Zauderer, Bevin; Extragalactic Science Working Group, Galactic Science Working Group, Transient Science Working Group
2015-01-01
The Very Large Array Sky Survey (VLASS) was initiated to develop and carry out a new generation large radio sky survey using the recently upgraded Karl G. Jansky Very Large Array. The proposed VLASS is a modern, multi-tiered survey with the VLA designed to provide a broad, cohesive science program with forefront scientific impact, capable of generating unexpected scientific discoveries, generating involvement from all astronomical communities, and leaving a lasting legacy value for decades. VLASS will observe from 2-4 GHz and is structured to combine comprehensive all sky coverage with sequentially deeper coverage in carefully identified parts of the sky, including the Galactic plane, and will be capable of informing time domain studies. This approach enables both focused and wide ranging scientific discovery through the coupling of deeper narrower tiers with increasing sky coverage at shallower depths, addressing key science issues and providing a statistical interpretational framework. Such an approach provides both astronomers and the citizen scientist with information for every accessible point of the radio sky, while simultaneously addressing fundamental questions about the nature and evolution of astrophysical objects. VLASS will follow the evolution of galaxies and their central black hole engines, measure the strength and topology of cosmic magnetic fields, unveil hidden explosions throughout the Universe, and chart our galaxy for stellar remnants and ionized bubbles. Multi-wavelength communities studying rare objects, the Galaxy, radio transients, or galaxy evolution out to the peak of the cosmic star formation rate density will equally benefit from VLASS. Early drafts of the VLASS proposal are available at the VLASS website (https://science.nrao.edu/science/surveys/vlass/vlass), and the final proposal will be posted in early January 2015 for community comment before undergoing review in March 2015.
Upon approval, VLASS would then be on schedule to start observing in 2016.
NASA Astrophysics Data System (ADS)
Zhu, Kai-Jian; Li, Jun-Feng; Baoyin, He-Xi
2010-01-01
In case of an emergency like the Wenchuan earthquake, it is impossible to observe a given target on earth by immediately launching new satellites. There is an urgent need for efficient satellite scheduling within a limited time period, so we must find a way to reasonably utilize the existing satellites to rapidly image the affected area during a short time period. Generally, the main consideration in orbit design is satellite coverage, with the subsatellite nadir point as a standard of reference. Two factors must be taken into consideration simultaneously in orbit design, i.e., the maximum observation coverage time and the minimum orbital transfer fuel cost. The local time of visiting the given observation sites must satisfy the solar radiation requirement. When calculating the operational orbit elements as the optimal parameters to be evaluated, we obtain the minimum objective function by comparing results derived from primer vector theory with those derived from the Hohmann transfer, since this paper considers an operational orbit for observing the disaster area using impulse maneuvers. Primer vector theory is utilized to optimize the three-impulse transfer trajectory, and the Hohmann transfer is utilized for the coplanar and small-inclination non-coplanar cases. Finally, we applied this method in a simulation of the rescue mission at Wenchuan city. The results of optimizing the orbit design with a hybrid PSO and DE algorithm show that primer vector and Hohmann transfer theory are effective methods for multi-objective orbit optimization.
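The Hohmann transfer used as the coplanar baseline has a closed-form total delta-v. A small sketch using the standard two-impulse formulas; Earth's gravitational parameter and the LEO/GEO radii below are just an example, not the paper's mission data:

```python
import math

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def hohmann_dv(r1, r2):
    """Total delta-v (km/s) for a coplanar Hohmann transfer between
    circular orbits of radii r1 and r2 (km)."""
    a = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    dv1 = math.sqrt(MU / r1) * (math.sqrt(r2 / a) - 1.0)  # departure burn
    dv2 = math.sqrt(MU / r2) * (1.0 - math.sqrt(r1 / a))  # arrival burn
    return abs(dv1) + abs(dv2)

# Example: LEO (6678 km) to GEO (42164 km), roughly 3.9 km/s total.
print(round(hohmann_dv(6678.0, 42164.0), 2))
```

In a scheduling context this closed form gives a cheap fuel-cost estimate that an optimizer such as the hybrid PSO/DE search can evaluate many times per iteration.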
Magnetospheric MultiScale (MMS) System Manager
NASA Technical Reports Server (NTRS)
Schiff, Conrad; Maher, Francis Alfred; Henely, Sean Philip; Rand, David
2014-01-01
The Magnetospheric MultiScale (MMS) mission is an ambitious NASA space science mission in which 4 spacecraft are flown in tight formation about a highly elliptical orbit. Each spacecraft has multiple instruments that measure particle and field compositions in the Earth's magnetosphere. By controlling the members' relative motion, MMS can distinguish temporal and spatial fluctuations in a way that a single spacecraft cannot. To achieve this control, 2 sets of four maneuvers, distributed evenly across the spacecraft, must be performed approximately every 14 days. Performing a single maneuver on an individual spacecraft is usually labor intensive, and the complexity clearly increases with four. As a result, the MMS flight dynamics team turned to the System Manager to put routine or error-prone activities under machine control, freeing the analysts for activities that require human judgment. The System Manager is an expert system that is capable of handling operations activities associated with performing MMS maneuvers. As an expert system, it can work off a known schedule, launching jobs based on a one-time occurrence or on a set reoccurring schedule. It is also able to detect situational changes and use event-driven programming to change schedules, adapt activities, or call for help.
75 FR 29324 - Preferred Supplier Program (PSP)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-25
... of the Navy, Acquisition and Logistics Management (DASN (A&LM)), is soliciting comments that the...; in the areas of cost, schedule, performance, quality, and business relations would be granted... exemplary performance, at the corporate level, in the areas of cost, schedule, performance, quality, and...
75 FR 28788 - Preferred Supplier Program (PSP)
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-24
... of the Navy, Acquisition and Logistics Management (DASN (A&LM)), is soliciting comments that the...; in the areas of cost, schedule, performance, quality, and business relations would be granted... demonstrated exemplary performance, at the corporate level, in the areas of cost, schedule, performance...
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality negatively affects project makespan and total cost, but it can be recovered by repair works during construction. We construct a new non-linear programming model, based on the classic multi-mode resource-constrained project scheduling problem, that considers repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping progress combines the crossover operator of the genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance; the model assists decision making in searching for an appropriate makespan and quality threshold at minimal cost. PMID:27911939
Software for Planning Scientific Activities on Mars
NASA Technical Reports Server (NTRS)
Ai-Chang, Mitchell; Bresina, John; Jonsson, Ari; Hsu, Jennifer; Kanefsky, Bob; Morris, Paul; Rajan, Kanna; Yglesias, Jeffrey; Charest, Len; Maldague, Pierre
2003-01-01
Mixed-Initiative Activity Plan Generator (MAPGEN) is a ground-based computer program for planning and scheduling the scientific activities of instrumented exploratory robotic vehicles, within the limitations of available resources onboard the vehicle. MAPGEN is a combination of two prior software systems: (1) an activity-planning program, APGEN, developed at NASA's Jet Propulsion Laboratory and (2) the Europa planner/scheduler from NASA Ames Research Center. MAPGEN performs all of the following functions: Automatic generation of plans and schedules for scientific and engineering activities; Testing of hypotheses (or what-if analyses of various scenarios); Editing of plans; Computation and analysis of resources; and Enforcement and maintenance of constraints, including resolution of temporal and resource conflicts among planned activities. MAPGEN can be used in either of two modes: one in which the planner/scheduler is turned off and only the basic APGEN functionality is utilized, or one in which both component programs are used to obtain the full planning, scheduling, and constraint-maintenance functionality.
Cost-efficient scheduling of FAST observations
NASA Astrophysics Data System (ADS)
Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi
2018-03-01
A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority, and minimize the overall slew cost generated by telescope shifting, while taking into account constraints including astronomical object visibility, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a maximally-matchable edge detection-based method to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available time with high scientific priority and reduce the slew cost significantly in a very short time.
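The MCMF formulation can be exercised with any min-cost max-flow routine. Below is a generic successive-shortest-path implementation (Bellman-Ford based) applied to a toy proposal-to-slot instance; the graph shape and the costs are illustrative assumptions, not the paper's actual network:

```python
def min_cost_max_flow(n, edges, s, t):
    """Successive shortest paths (Bellman-Ford). edges: [u, v, capacity, cost].
    Returns (max_flow, min_cost); negative costs encode priorities."""
    INF = float("inf")
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    while True:
        dist, parent = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):                # Bellman-Ford relaxation
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], parent[v] = dist[u] + cost, (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:                    # no augmenting path left
            return flow, total_cost
        path, v = [], t
        while v != s:                         # walk parents back to the source
            u, i = parent[v]
            path.append((u, i))
            v = u
        aug = min(graph[u][i][1] for u, i in path)
        for u, i in path:                     # push flow along the path
            e = graph[u][i]
            e[1] -= aug
            graph[e[0]][e[3]][1] += aug
        flow += aug
        total_cost += aug * dist[t]

# Toy instance: source=0, proposals 1-2, time slots 3-4, sink=5.
# Costs are negative priorities, so min cost = max total priority.
edges = [
    [0, 1, 1, 0], [0, 2, 1, 0],
    [1, 3, 1, -5], [1, 4, 1, -3],  # proposal 1 can use either slot
    [2, 3, 1, -4],                 # proposal 2 is only visible in slot 3
    [3, 5, 1, 0], [4, 5, 1, 0],
]
print(min_cost_max_flow(6, edges, 0, 5))  # both proposals get scheduled
```

Note how the solver diverts proposal 1 to its second-choice slot so that both proposals fit, exactly the trade-off the MCMF model is meant to capture.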
Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing
NASA Astrophysics Data System (ADS)
Suma, T.; Murugesan, R.
2018-04-01
The current generation of manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we used an algorithm based on the artificial immune system. The parameters are simulated with MATLAB and the results are compared with existing algorithms, showing better performance.
Rules and Self-Rules: Effects of Variation upon Behavioral Sensitivity to Change
ERIC Educational Resources Information Center
Baumann, Ana A.; Abreu-Rodrigues, Josele; da Silva Souza, Alessandra
2009-01-01
Four experiments compared the effects of self-rules and rules, and varied and specific schedules of reinforcement. Participants were first exposed to either several schedules (varied groups) or to one schedule (specific groups) and either were asked to generate rules (self-rule groups), were provided rules (rule groups), or were not asked nor…
NASA Technical Reports Server (NTRS)
Tavana, Madjid
2003-01-01
The primary driver for developing missions to send humans to other planets is to generate significant scientific return. NASA plans human planetary explorations with an acceptable level of risk consistent with other manned operations. Space exploration risks cannot be completely eliminated. Therefore, an acceptable level of cost, technical, safety, schedule, and political risks and benefits must be established for exploratory missions. This study uses a three-dimensional multi-criteria decision making model to identify the risks and benefits associated with three alternative mission architecture operations concepts for the human exploration of Mars identified by the Mission Operations Directorate at Johnson Space Center. The three alternatives considered in this study include the split, combo lander, and dual scenarios. The model considers the seven phases of the mission: 1) Earth Vicinity/Departure; 2) Mars Transfer; 3) Mars Arrival; 4) Planetary Surface; 5) Mars Vicinity/Departure; 6) Earth Transfer; and 7) Earth Arrival. The Analytic Hierarchy Process (AHP) and subjective probability estimation are used to capture the experts' beliefs concerning the risks and benefits of the three alternative scenarios through a series of sequential, rational, and analytical processes.
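The AHP step of extracting priority weights from a pairwise comparison matrix reduces to finding the matrix's principal eigenvector, for instance by power iteration. A minimal sketch on a perfectly consistent matrix; the weights are illustrative, not the study's elicited data:

```python
def ahp_weights(matrix, iters=100):
    """Priority weights = principal eigenvector of the pairwise comparison
    matrix, computed by power iteration and normalized to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Perfectly consistent matrix built from known weights 0.5 : 0.3 : 0.2,
# so the method should recover them exactly (illustrative data only).
true = [0.5, 0.3, 0.2]
m = [[true[i] / true[j] for j in range(3)] for i in range(3)]
print([round(x, 3) for x in ahp_weights(m)])
```

With real expert judgments the matrix is rarely perfectly consistent, which is why AHP pairs this computation with a consistency-ratio check.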
2011-11-17
CAPE CANAVERAL, Fla. -- In the Vertical Integration Facility at Space Launch Complex-41 on Cape Canaveral Air Force Station, a turning fixture lowers the multi-mission radioisotope thermoelectric generator (MMRTG) for NASA's Mars Science Laboratory (MSL) mission toward the radioisotope power system integration cart (RIC). Once the MMRTG is secured on the cart, it will be installed on the Curiosity rover. The MMRTG will generate the power needed for the mission from the natural decay of plutonium-238, a non-weapons-grade form of the radioisotope. Heat given off by this natural decay will provide constant power through the day and night during all seasons. Curiosity, MSL's car-sized rover, has 10 science instruments designed to search for signs of life, including methane, and help determine if the gas is from a biological or geological source. Waste heat from the MMRTG will be circulated throughout the rover system to keep instruments, computers, mechanical devices and communications systems within their operating temperature ranges. Launch of MSL aboard a United Launch Alliance Atlas V rocket is scheduled for Nov. 25. For more information, visit http://www.nasa.gov/msl. Photo credit: Department of Energy/Idaho National Laboratory
Estimation of urban runoff and water quality using remote sensing and artificial intelligence.
Ha, S R; Park, S Y; Park, D H
2003-01-01
Water quality and quantity of runoff are strongly dependent on landuse and landcover (LULC) criteria. In this study, we developed an improved parameter estimation procedure for the environmental model using remote sensing (RS) and artificial intelligence (AI) techniques. Landsat TM multi-band (7 bands) and Korea Multi-Purpose Satellite (KOMPSAT) panchromatic data were selected for input data processing. We employed two kinds of artificial intelligence techniques, RBF-NN (radial-basis-function neural network) and ANN (artificial neural network), to classify the LULC of the study area. A bootstrap resampling method, a statistical technique, was employed to generate the confidence intervals and distribution of the unit load. SWMM was used to simulate the urban runoff and water quality and was applied to the study watershed. The condition of urban flow and non-point contamination was simulated with rainfall-runoff and measured water quality data. The estimated total runoff, peak time, and pollutant generation varied considerably according to the classification accuracy and the percentile unit load applied. The proposed procedure can efficiently be applied to water quality and runoff simulation in a rapidly changing urban area.
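The bootstrap confidence interval for a unit load can be sketched with the percentile method: resample the data with replacement many times and read off quantiles of the resampled statistic. The sample values below are hypothetical stand-ins, not the study's measurements:

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic (default: mean)."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical unit-load samples for one land-cover class.
samples = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 12.7, 9.9, 11.8, 10.9]
print(bootstrap_ci(samples))
```

The same resampling also yields the full empirical distribution of the unit load, which is what feeds the percentile unit loads mentioned in the abstract.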
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period. Multiple hazards tend to complicate the combined risk due to their interactions. Prioritising interventions for minimising risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool which takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing the disaster risk within an uncertain hazard context.
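A Monte Carlo framework with hazard interactions can be sketched by sampling each hazard per simulated year and letting one hazard modify another's occurrence probability. All rates and the interaction factor below are invented for illustration and are not calibrated to the Sichuan case study:

```python
import random

def simulate_year(rng, p_quake=0.02, p_rain=0.3,
                  p_slide_base=0.05, quake_slide_boost=10.0):
    """One simulated year. An earthquake multiplies landslide susceptibility
    (a simple interaction term); rain is required to trigger the slide.
    All rates are illustrative assumptions, not calibrated values."""
    quake = rng.random() < p_quake
    rain = rng.random() < p_rain
    p_slide = min(p_slide_base * (quake_slide_boost if quake else 1.0), 1.0)
    slide = rain and rng.random() < p_slide
    return quake, rain, slide

rng = random.Random(7)
years = 100_000
slides = sum(simulate_year(rng)[2] for _ in range(years))
print(slides / years)  # estimated annual landslide probability
```

Re-running the loop with modified parameters (e.g. a lower `p_slide_base` after slope stabilisation) is how such a model explores the effect of mitigation interventions.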
Shiftwork Scheduling for the 1990s.
ERIC Educational Resources Information Center
Coleman, Richard M.
1989-01-01
The author discusses the problems of scheduling shift work, touching on such topics as employee desires, health requirements, and business needs. He presents a method for developing shift schedules that addresses these three areas. Implementation hints are also provided. (CH)
NASA Technical Reports Server (NTRS)
Logan, J. R.; Pulvermacher, M. K.
1991-01-01
Range Scheduling Aid (RSA) is presented in the form of viewgraphs. The following subject areas are covered: satellite control network; current and new approaches to range scheduling; MITRE tasking; RSA features; RSA display; constraint based analytic capability; RSA architecture; and RSA benefits.
Developing optimal nurses work schedule using integer programming
NASA Astrophysics Data System (ADS)
Shahidin, Ainon Mardhiyah; Said, Mohd Syazwan Md; Said, Noor Hizwan Mohamad; Sazali, Noor Izatie Amaliena
2017-08-01
Time management is the art of arranging, organizing and scheduling one's time for the purpose of generating more effective work and productivity. Scheduling is the process of deciding how to commit resources among a variety of possible tasks. Thus, it is crucial for every organization to have a good work schedule for its staff. The job of ward nurses at hospitals runs 24 hours every day, so nurses work on shift schedules. This study is aimed at solving the nurse scheduling problem at an emergency ward of a private hospital. A 7-day work schedule for 7 consecutive weeks, satisfying all the constraints set by the hospital, was developed using Integer Programming. The work schedule obtained gives an optimal solution in which all the constraints are satisfied.
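The structure of such an integer program, binary variables x[n][d][s] with coverage, single-shift-per-day and workload constraints, can be shown on a toy instance small enough to solve by exhaustive 0-1 search. A real ward schedule would use an IP solver, and the constraints here are illustrative, not the hospital's actual rules:

```python
from itertools import product

NURSES, DAYS, SHIFTS = 3, 2, 2  # toy horizon; shift 0 = morning, 1 = night

def solve():
    """Exhaustive 0-1 search standing in for the integer program:
    x[n][d][s] = 1 iff nurse n works shift s on day d. Minimizes the
    total number of assigned shifts subject to illustrative constraints."""
    best, best_cost = None, None
    for bits in product((0, 1), repeat=NURSES * DAYS * SHIFTS):
        x = [[[bits[(n * DAYS + d) * SHIFTS + s] for s in range(SHIFTS)]
              for d in range(DAYS)] for n in range(NURSES)]
        # (1) every shift of every day is covered by at least one nurse
        if any(sum(x[n][d][s] for n in range(NURSES)) < 1
               for d in range(DAYS) for s in range(SHIFTS)):
            continue
        # (2) no nurse works both shifts on the same day
        if any(x[n][d][0] + x[n][d][1] > 1
               for n in range(NURSES) for d in range(DAYS)):
            continue
        # (3) at most one night shift per nurse over the horizon
        if any(sum(x[n][d][1] for d in range(DAYS)) > 1
               for n in range(NURSES)):
            continue
        cost = sum(bits)
        if best_cost is None or cost < best_cost:
            best, best_cost = x, cost
    return best, best_cost

roster, cost = solve()
print(cost)  # minimum number of assigned shifts that covers the ward
```

The three `if ... continue` blocks correspond one-to-one to the linear constraints of the integer program; only the search strategy differs from a real solver.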
Electric Vehicles Charging Scheduling Strategy Considering the Uncertainty of Photovoltaic Output
NASA Astrophysics Data System (ADS)
Wei, Xiangxiang; Su, Su; Yue, Yunli; Wang, Wei; He, Luobin; Li, Hao; Ota, Yutaka
2017-05-01
The rapid development of electric vehicles (EVs) and distributed generation brings new challenges to the secure and economic operation of the power system, so coordinated study of EVs and distributed generation is of great significance for the distribution network. Against this background, an EV charging scheduling strategy considering the uncertainty of photovoltaic (PV) output is proposed. The characteristics of EV charging are analysed first, and then a PV output prediction method based on a PV database is proposed. On this basis, an EV charging scheduling strategy is proposed with the goal of satisfying EV users' charging preferences while decreasing power loss in the distribution network. The case study shows that the proposed PV output prediction method can predict PV output accurately and that the EV charging scheduling strategy can reduce power loss and smooth load fluctuation in the distribution network.
PLAN-IT-2: The next generation planning and scheduling tool
NASA Technical Reports Server (NTRS)
Eggemeyer, William C.; Cruz, Jennifer W.
1990-01-01
PLAN-IT is a scheduling program which has been demonstrated and evaluated in a variety of scheduling domains. The capability enhancements being made for the next generation of PLAN-IT, called PLAN-IT-2, are discussed. PLAN-IT-2 represents a complete rewrite of the original PLAN-IT, incorporating major changes suggested by application experiences with the original. Among the enhancements described are: additional types of constraints, such as states and resettable depletables (batteries); dependencies between constraints; multiple levels of activity planning during the scheduling process; pattern constraint searching for opportunities, as opposed to just minimizing the amount of conflict; additional customization features for display and for handling diverse multiple time systems; and a reduction in both the size and the complexity of creating the knowledge base to address different problem domains.
EPA Sets Schedule to Improve Visibility in the Nation's Most Treasured Natural Areas
EPA issued a schedule to act on more than 40 state pollution reduction plans that will improve visibility in national parks and wilderness areas and protect public health from the damaging effects of the pollutants that cause regional haze.
Linear-parameter-varying gain-scheduled control of aerospace systems
NASA Astrophysics Data System (ADS)
Barker, Jeffrey Michael
The dynamics of many aerospace systems vary significantly as a function of flight condition. Robust control provides methods of guaranteeing performance and stability goals across flight conditions. In mu-synthesis, changes to the dynamical system are primarily treated as uncertainty. This method has been successfully applied to many control problems, and here is applied to flutter control. More recently, two techniques for generating robust gain-scheduled controllers have been developed. Linear fractional transformation (LFT) gain-scheduled control is an extension of mu-synthesis in which the plant and controller are explicit functions of parameters measurable in real time. This LFT gain-scheduled control technique is applied to the Benchmark Active Control Technology (BACT) wing and compared with mu-synthesis control. Linear parameter-varying (LPV) gain-scheduled control is an extension of H-infinity control to parameter-varying systems. LPV gain-scheduled control directly incorporates bounds on the rate of change of the scheduling parameters, and often reduces the conservatism inherent in LFT gain-scheduled control. Gain-scheduled LPV control of the BACT wing compares very favorably with the LFT controller. Gain-scheduled LPV controllers are generated for the lateral-directional and longitudinal axes of the Innovative Control Effectors (ICE) aircraft and implemented in nonlinear simulations and real-time piloted nonlinear simulations. Cooper-Harper and pilot-induced oscillation ratings were obtained for an initial design, a reference aircraft and a redesign. Piloted simulation results for the initial LPV gain-scheduled control of the ICE aircraft are compared with results for a conventional fighter aircraft in discrete pitch and roll angle tracking tasks. The results for the redesigned controller are significantly better than both the previous LPV controller and the conventional aircraft.
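Classical gain scheduling, the baseline that LFT and LPV synthesis improve upon with formal guarantees, amounts to interpolating point designs over a measured scheduling parameter. A minimal sketch; the gain table and parameter values are invented, and the LPV synthesis itself is far more involved than this interpolation:

```python
def scheduled_gain(param, table):
    """Linearly interpolate a controller gain over a measured scheduling
    parameter (e.g. dynamic pressure); table = sorted (param, gain) pairs,
    clamped at the ends. A classical gain-scheduling sketch only."""
    if param <= table[0][0]:
        return table[0][1]
    if param >= table[-1][0]:
        return table[-1][1]
    for (p0, k0), (p1, k1) in zip(table, table[1:]):
        if p0 <= param <= p1:
            t = (param - p0) / (p1 - p0)
            return k0 + t * (k1 - k0)

# Gains designed at three flight conditions (illustrative numbers).
table = [(100.0, 2.0), (200.0, 1.5), (400.0, 0.8)]
print(scheduled_gain(150.0, table))  # halfway between 2.0 and 1.5
```

Unlike this pointwise interpolation, LPV synthesis guarantees stability for all admissible parameter trajectories, including bounds on how fast the parameter may vary.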
300 Area treated effluent disposal facility sampling schedule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loll, C.M.
1994-10-11
This document is the interface between the 300 Area Liquid Effluent Process Engineering (LEPE) group and the Waste Sampling and Characterization Facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.
A quantum physical design flow using ILP and graph drawing
NASA Astrophysics Data System (ADS)
Yazdani, Maryam; Saheb Zamani, Morteza; Sedighi, Mehdi
2013-10-01
Implementing large-scale quantum circuits is one of the challenges of quantum computing. A central challenge in accurately modeling the architecture of these circuits is to schedule a quantum application and generate the layout while taking into account the cost of communications and classical resources as well as the maximum exploitable parallelism. In this paper, we present and evaluate a design flow for arbitrary quantum circuits in ion trap technology. Our design flow consists of two parts. First, a scheduler takes a description of a circuit and finds the best order for the execution of its quantum gates using integer linear programming, taking into account the classical resources (qubits) and instruction dependencies. Then a layout generator receives the schedule produced by the scheduler and generates a layout for this circuit using a graph-drawing algorithm. Our experimental results show that the proposed flow decreases the average latency of quantum circuits by about 11% for one set of benchmarks and by about 9% for another, compared with the best results in the literature.
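The qubit-sharing constraint the scheduler must respect can be illustrated with a greedy as-soon-as-possible pass. This is a minimal sketch with hypothetical function and data names, not the paper's ILP formulation; it only enforces the rule that gates touching a common qubit cannot share a time step.

```python
def asap_schedule(gates):
    """gates: list of tuples of qubit indices, in program order.
    Returns (time_step, gate) pairs; gates sharing a qubit are serialized."""
    qubit_free = {}                       # qubit -> first free time step
    schedule = []
    for gate in gates:
        start = max((qubit_free.get(q, 0) for q in gate), default=0)
        schedule.append((start, gate))
        for q in gate:
            qubit_free[q] = start + 1
    return schedule

# gates (0,1) and (2,) can share step 0; (1,2) must wait for both; (0,) fits step 1
plan = asap_schedule([(0, 1), (2,), (1, 2), (0,)])
```

An ILP formulation would instead search over all orderings for minimum total latency subject to the same dependency constraints.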
Planning and Scheduling for Environmental Sensor Networks
NASA Astrophysics Data System (ADS)
Frank, J. D.
2005-12-01
Environmental Sensor Networks are a new way of monitoring the environment. They comprise autonomous sensor nodes in the environment that record real-time data, which is retrieved, analyzed, integrated with other data sets (e.g. satellite images, GIS, process models) and ultimately leads to scientific discoveries. Sensor networks must operate within time and resource constraints. Sensors have limited onboard memory, energy, computational power, communications windows and communications bandwidth. The value of data will depend on when, where and how it was collected, how detailed the data is, how long it takes to integrate the data, and how important the data was to the original scientific question. Planning and scheduling of sensor networks is necessary for effective, safe operations in the face of these constraints. For example, power bus limitations may preclude sensors from simultaneously collecting data and communicating without damaging the sensor; planners and schedulers can ensure these operations are ordered so that they do not happen simultaneously. Planning and scheduling can also ensure best use of the sensor network to maximize the value of collected science data. For example, if data is best recorded using a particular camera angle but it is costly in time and energy to achieve this, planners and schedulers can search for times when time and energy are available to achieve the optimal camera angle. Planning and scheduling can handle uncertainty in the problem specification; planners can be re-run when new information is made available, or can generate plans that include contingencies. For example, if bad weather may prevent the collection of data, a contingent plan can check lighting conditions and turn off data collection to save resources if lighting is not ideal. Both mobile and immobile sensors can benefit from planning and scheduling.
For example, data collection on otherwise passive sensors can be halted to preserve limited power and memory resources and to reduce the costs of communication. Planning and scheduling is generally a heavy consumer of time, memory and energy resources. This means careful thought must be given to how much planning and scheduling should be done on the sensors themselves, and how much to do elsewhere. The difficulty of planning and scheduling is exacerbated when reasoning about uncertainty. More time, memory and energy are needed to solve such problems, leading either to more expensive sensors, or suboptimal plans. For example, scientifically interesting events may happen at random times, making it difficult to ensure that sufficient resources are available. Since uncertainty is usually lowest in proximity to the sensors themselves, this argues for planning and scheduling onboard the sensors. However, cost minimization dictates sensors be kept as simple as possible, reducing the amount of planning and scheduling they can do themselves. Furthermore, coordinating each sensor's independent plans can be difficult. In the full presentation, we will critically review the planning and scheduling systems used by previously fielded sensor networks. We do so primarily from the perspective of the computational sciences, with a focus on taming computational complexity when operating sensor networks. The case studies are derived from sensor networks based on UAVs, satellites, and planetary rovers. Planning and scheduling considerations include multi-sensor coordination, optimizing science value, onboard power management, onboard memory, planning movement actions to acquire data, and managing communications. These case studies offer lessons for future designs of environmental sensor networks.
Research on crude oil storage and transportation based on optimization algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xuhua
2018-04-01
Optimization theory and methods are now widely used in the scheduling and operation of complex production systems. Based on the C++ Builder 6 development platform, the theoretical results are implemented in software: a simulation and intelligent decision system for crude oil storage and transportation inventory scheduling. The system includes modules for project management, data management, graphics processing, and simulation of oil depot operation schemes, and can optimize the scheduling scheme of a crude oil storage and transportation system. A multi-point temperature measuring system for monitoring the temperature field of a floating-roof oil storage tank is also developed. The results show that by optimizing operating parameters such as tank operating mode and temperature, the total transportation scheduling cost of the storage and transportation system can be reduced by 9.1%. This method can therefore support safe and stable operation of a crude oil storage and transportation system.
NASA Astrophysics Data System (ADS)
Liu, Q.
2011-09-01
First, research advances in radiative transfer modeling of multi-scale remote sensing data are presented: after a general overview of remote sensing radiative transfer modeling, several recent advances are described, including a leaf spectrum model (dPROSPECT), vegetation canopy BRDF models, directional thermal infrared emission models (TRGM, SLEC), radiation models for rugged mountain areas, and kernel-driven models. Then, new methodologies for land surface parameter inversion based on multi-source remote sensing data are proposed, taking land surface albedo, leaf area index, temperature/emissivity, and surface net radiation as examples. A new synthetic land surface parameter quantitative remote sensing product generation system is designed, and the software system prototype will be demonstrated. Finally, multi-scale field experiment campaigns, such as those in Gansu and Beijing, China, are introduced briefly. Ground-based, tower-based, and airborne multi-angular measurement systems have been built to measure directional reflectance, emission and scattering characteristics from visible, near-infrared, thermal infrared and microwave bands for model validation and calibration. A remote sensing pixel-scale "true value" measurement strategy has been designed to obtain the ground truth of LST, albedo, LAI, soil moisture and ET at the 1-km2 scale for remote sensing product validation.
Design and control of a vertical takeoff and landing fixed-wing unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Malang, Yasir
With the goal of extending capabilities of multi-rotor unmanned aerial vehicles (UAVs) for wetland conservation missions, a novel hybrid aircraft design consisting of four tilting rotors and a fixed wing is designed and built. The tilting rotors and nonlinear aerodynamic effects introduce a control challenge for autonomous flight, and the research focus is to develop and validate an autonomous transition flight controller. The overall controller structure consists of separate cascaded Proportional Integral Derivative (PID) controllers whose gains are scheduled according to the rotors' tilt angle. A control mechanism effectiveness factor is used to mix the multi-rotor and fixed-wing control actuators during transition. A nonlinear flight dynamics model is created and transition stability is shown through MATLAB simulations, which proves gain-scheduled control is a good fit for tilt-rotor aircraft. Experiments carried out using the prototype UAV validate simulation results for VTOL and tilted-rotor flight.
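The gain-scheduling step described above, PID gains blended as a function of rotor tilt angle, can be sketched as a lookup-and-interpolate routine. This is a minimal illustration with hypothetical gain values and function names, not the thesis controller.

```python
def scheduled_gains(tilt_deg, table):
    """Linearly interpolate (kp, ki, kd) between tabulated tilt angles.
    table: list of (tilt_angle_deg, (kp, ki, kd)) pairs."""
    table = sorted(table)
    if tilt_deg <= table[0][0]:
        return table[0][1]                 # clamp below the table
    if tilt_deg >= table[-1][0]:
        return table[-1][1]                # clamp above the table
    for (a0, g0), (a1, g1) in zip(table, table[1:]):
        if a0 <= tilt_deg <= a1:
            t = (tilt_deg - a0) / (a1 - a0)
            return tuple(x + t * (y - x) for x, y in zip(g0, g1))

# Hypothetical schedule: hover gains at 0 deg tilt, cruise gains at 90 deg
gains_table = [(0, (2.0, 0.5, 0.1)), (90, (1.0, 0.2, 0.05))]
pitch_gains = scheduled_gains(45, gains_table)
```

In flight, the measured tilt angle would select the blended gains at each control step, which is the basic mechanism behind the cascaded PID structure the abstract describes.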
Li, Desheng
2014-01-01
This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, Dynamic Varying Search Area (DVSA), restricts the particles' activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of particles, helping them escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other PSO variants on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
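The Lévy-flight disturbance is the easiest piece to sketch. The toy PSO below adds an occasional heavy-tailed kick to a standard global-best update; it is a stand-in under simplifying assumptions, not CQPSO-DVSA-LFD itself, and in particular the cooperative and DVSA mechanisms are omitted.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed, Levy-distributed step
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def pso_levy(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0):
    """Minimize f over [lo, hi]^dim with global-best PSO plus Levy kicks."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if random.random() < 0.1:      # occasional Levy-flight disturbance
                pos[i] = [min(hi, max(lo, x + 0.01 * levy_step())) for x in pos[i]]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_levy(sphere)   # should land near the origin
```

On a convex test function like the sphere, the Lévy kicks are mostly rejected; on multimodal functions they give particles a chance to jump out of a basin, which is the role the abstract assigns them.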
Multi-resolution model-based traffic sign detection and tracking
NASA Astrophysics Data System (ADS)
Marinas, Javier; Salgado, Luis; Camplani, Massimo
2012-06-01
In this paper we propose an innovative approach to the problem of traffic sign detection using a computer vision algorithm under real-time operation constraints, establishing intelligent strategies to simplify the algorithm as much as possible and to speed up the process. First, a set of candidates is generated by a color segmentation stage, followed by a region analysis strategy in which the spatial characteristics of previously detected objects are taken into account. Finally, temporal coherence is introduced by means of a tracking scheme, performed using a Kalman filter for each potential candidate. Given the time constraints, efficiency is achieved in two ways. On one side, a multi-resolution strategy is adopted for segmentation: global operations are applied only to low-resolution images, and the resolution is increased to the maximum only when a potential road sign is being tracked. On the other side, we take advantage of the expected spacing between traffic signs: tracking objects of interest allows the generation of inhibition areas, regions where no new traffic signs are expected to appear because a sign already exists in the neighborhood. The proposed solution has been tested on real sequences in both urban areas and highways, and proved to achieve higher computational efficiency, especially as a result of the multi-resolution approach.
Strategies GeoCape Intelligent Observation Studies @ GSFC
NASA Technical Reports Server (NTRS)
Cappelaere, Pat; Frye, Stu; Moe, Karen; Mandl, Dan; LeMoigne, Jacqueline; Flatley, Tom; Geist, Alessandro
2015-01-01
This presentation provides a summary of the tradeoff studies conducted for GeoCape by the GSFC team on how to optimize GeoCape observation efficiency. Tradeoffs include total ground scheduling with simple priorities, ground scheduling with cloud forecast, ground scheduling with sub-area forecast, onboard scheduling with onboard cloud detection, and smart onboard scheduling and onboard image processing. The tradeoffs considered optimizing cost, downlink bandwidth and the total number of images acquired.
Interactive experimenters' planning procedures and mission control
NASA Technical Reports Server (NTRS)
Desjardins, R. L.
1973-01-01
The computerized mission control and planning system routinely generates a 24-hour schedule in one hour of operator time by incorporating the time dimension into experimental planning procedures. Planning is validated interactively as it is generated, segment by segment, in the frame of specific event times. The planner simply points a light pen at the time mark of interest on the time line to enter specific event times into the schedule.
Engineering Technician Standards.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Hampton, VA. Langley Research Center.
The booklet describes the program offerings, requirements, training, and pay schedules of the Langley Research Center Technician Training Program. Training schedules and the duties expected upon completion of each of the training areas are specified, along with on-the-job and academic requirements. The areas of training are: engineering draftsman,…
Automatic Command Sequence Generation
NASA Technical Reports Server (NTRS)
Fisher, Forest; Gladded, Roy; Khanampompan, Teerapat
2007-01-01
Automatic Sequence Generator (Autogen) Version 3.0 software automatically generates command sequences for the Mars Reconnaissance Orbiter (MRO) and several other JPL spacecraft operated by the multi-mission support team. Autogen uses standard JPL sequencing tools like APGEN, ASP, SEQGEN, and the DOM database to automate the generation of uplink command products, Spacecraft Command Message Format (SCMF) files, and the corresponding ground command products, DSN Keywords Files (DKF). Autogen supports all the major multi-mission mission phases including the cruise, aerobraking, mapping/science, and relay mission phases. Autogen is a Perl script that functions within the mission operations UNIX environment. It consists of two parts: a set of model files and the autogen Perl script. Autogen encodes the behaviors of the system into a model and encodes algorithms for context-sensitive customizations of the modeled behaviors. The model includes knowledge of different mission phases and how the resultant command products must differ for these phases. The executable software portion of Autogen automates the setup and use of APGEN for constructing a spacecraft activity sequence file (SASF). The setup includes file retrieval through the DOM (Distributed Object Manager), an object database used to store project files. This step retrieves all the needed input files for generating the command products. Depending on the mission phase, Autogen also uses the ASP (Automated Sequence Processor) and SEQGEN to generate the command product sent to the spacecraft. Autogen also provides the means for customizing sequences through the use of configuration files. By automating the majority of the sequence generation process, Autogen eliminates many sequence generation errors commonly introduced by manually constructing spacecraft command sequences.
Through the layering of commands into the sequence by a series of scheduling algorithms, users are able to rapidly and reliably construct the desired uplink command products. With the aid of Autogen, sequences may be produced in a matter of hours instead of weeks, with a significant reduction in the number of people on the sequence team. As a result, the uplink product generation process is significantly streamlined and mission risk is significantly reduced. Autogen is used for operations of MRO, Mars Global Surveyor (MGS), Mars Exploration Rover (MER), Mars Odyssey, and will be used for operations of Phoenix. Autogen Version 3.0 is the operational version of Autogen including the MRO adaptation for the cruise mission phase, and was also used for development of the aerobraking and mapping mission phases for MRO.
Concepts and algorithms for terminal-area traffic management
NASA Technical Reports Server (NTRS)
Erzberger, H.; Chapel, J. D.
1984-01-01
The nation's air-traffic-control system is the subject of an extensive modernization program, including the planned introduction of advanced automation techniques. This paper gives an overview of a concept for automating terminal-area traffic management. Four-dimensional (4D) guidance techniques, which play an essential role in the automated system, are reviewed. One technique, intended for on-board computer implementation, is based on application of optimal control theory. The second technique is a simplified approach to 4D guidance intended for ground computer implementation. It generates advisory messages to help the controller maintain scheduled landing times of aircraft not equipped with on-board 4D guidance systems. An operational system for the second technique, recently evaluated in a simulation, is also described.
Anchorage Arrival Scheduling Under Off-Nominal Weather Conditions
NASA Technical Reports Server (NTRS)
Grabbe, Shon; Chan, William N.; Mukherjee, Avijit
2012-01-01
Weather can cause flight diversions, passenger delays, additional fuel consumption and schedule disruptions at any high volume airport. The impacts are particularly acute at the Ted Stevens Anchorage International Airport in Anchorage, Alaska due to its importance as a major international portal. To minimize the impacts due to weather, a multi-stage scheduling process is employed that is iteratively executed, as updated aircraft demand and/or airport capacity data become available. The strategic scheduling algorithm assigns speed adjustments for flights that originate outside of Anchorage Center to achieve the proper demand and capacity balance. Similarly, an internal departure-scheduling algorithm assigns ground holds for pre-departure flights that originate from within Anchorage Center. Tactical flight controls in the form of airborne holding are employed to reactively account for system uncertainties. Real-world scenarios that were derived from the January 16, 2012 Anchorage visibility observations and the January 12, 2012 Anchorage arrival schedule were used to test the initial implementation of the scheduling algorithm in fast-time simulation experiments. Although over 90% of the flights in the scenarios arrived at Anchorage without requiring any delay, pre-departure scheduling was the dominant form of control for Anchorage arrivals. Additionally, tactical scheduling was used extensively in conjunction with the pre-departure scheduling to reactively compensate for uncertainties in the arrival demand. For long-haul flights, the strategic scheduling algorithm performed best when the scheduling horizon was greater than 1,000 nmi. With these long scheduling horizons, it was possible to absorb between 10 and 12 minutes of delay through speed control alone.
Unfortunately, the use of tactical scheduling, which resulted in airborne holding, was found to increase as the strategic scheduling horizon increased because of the additional uncertainty in the arrival times of the aircraft. Findings from these initial experiments indicate that it is possible to schedule arrivals into Anchorage with minimal delays under low-visibility conditions with less disruption to high-cost, international flights.
Diagnostics of multi-fractality of magnetized plasma inside coronal holes and quiet sun areas
NASA Astrophysics Data System (ADS)
Abramenko, Valentyna
Turbulent and multi-fractal properties of magnetized plasma in solar Coronal Holes (CHs) and Quiet Sun (QS) photosphere were explored using high-resolution magnetograms measured with the New Solar Telescope (NST) at the Big Bear Solar Observatory (BBSO, USA), Hinode/SOT and SDO/HMI instruments. Distribution functions of size and magnetic flux measured for small-scale magnetic elements follow the log-normal law, which implies multi-fractal organization of the magnetic field and the absence of a unique power law for all scales. The magnetograms show multi-fractality in CHs on scales 400 - 10000 km, which becomes better pronounced as the spatial resolution of data improves. Photospheric granulation measured with NST exhibits multi-fractal properties on very small scales of 50 - 600 km. While multi-fractal nature of solar active regions is well known, newly established multi-fractality of weakest magnetic fields on the solar surface, i.e., in CHs and QS, leads us to a conclusion that the entire variety of solar magnetic fields is generated by a unique nonlinear dynamical process.
Automated Planning and Scheduling for Planetary Rover Distributed Operations
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Rabideau, Gregg; Tso, Kam S.; Chien, Steve
1999-01-01
Automated planning and Scheduling, including automated path planning, has been integrated with an Internet-based distributed operations system for planetary rover operations. The resulting prototype system enables faster generation of valid rover command sequences by a distributed planetary rover operations team. The Web Interface for Telescience (WITS) provides Internet-based distributed collaboration, the Automated Scheduling and Planning Environment (ASPEN) provides automated planning and scheduling, and an automated path planner provided path planning. The system was demonstrated on the Rocky 7 research rover at JPL.
The Impact of Block Scheduling on Various Indicators of School Success.
ERIC Educational Resources Information Center
Nichols, Joe D.
This project focused on the collection and analysis of longitudinal student data generated by six high schools from a large urban school system in the Midwest. Two of the schools recently converted to a 4 X 4 scheduling structure, while 3 additional schools have used a block-8 scheduling structure for a number of years. One school maintains a…
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled into final products are common. Though scheduling of such problems is solved using heuristics, available solution approaches can handle only moderately sized problems due to the large computation time required. In this work, a scheduling approach is developed for such flow-shop manufacturing systems with machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. The lower bound on the makespan of such problems is estimated, and the percent deviation of makespan from the lower bound is used as a performance measure to evaluate the schedules; on this measure, the GA is found to give near-optimal solutions. Computational experiments are conducted on problems developed using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors. It is concluded that the GA method can obtain optimal makespan.
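The permutation flow-shop makespan recurrence the GA minimizes, together with a toy GA (tournament selection, swap mutation, elitism), can be sketched as follows. This is an illustrative stand-in, not the authors' implementation, and it ignores the assembly stage.

```python
import random

def makespan(perm, proc):
    """proc[j][k] = processing time of job j on machine k (flow shop)."""
    done = [0] * len(proc[0])             # completion time on each machine
    for j in perm:
        prev = 0                          # completion on the previous machine
        for k, p in enumerate(proc[j]):
            done[k] = max(done[k], prev) + p
            prev = done[k]
    return done[-1]

def ga_schedule(proc, pop_size=30, gens=100):
    """Toy permutation GA minimizing makespan."""
    n = len(proc)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda s: makespan(s, proc))
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                     # tournament of 2
            child = min(a, b, key=lambda s: makespan(s, proc))[:]
            i, j = random.randrange(n), random.randrange(n)
            child[i], child[j] = child[j], child[i]          # swap mutation
            new.append(child)
        pop = new
        cand = min(pop, key=lambda s: makespan(s, proc))
        if makespan(cand, proc) < makespan(best, proc):      # elitism
            best = cand
    return best
```

For the two-job, two-machine instance proc = [[3, 2], [2, 3]], the order [1, 0] gives makespan 7 versus 8 for [0, 1], so the GA should settle on [1, 0].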
Goal programming for cyclical auxiliary police scheduling at UiTM Cawangan Perlis
NASA Astrophysics Data System (ADS)
Mustapar, Wasilatul Effah; Nasir, Diana Sirmayunie Mohd; Nor, Nor Azriani Mohamad; Abas, Sharifah Fhahriyah Syed
2017-11-01
Constructing a good and fair schedule for shift workers poses a great challenge, as it requires a lot of time and effort. In this study, goal programming was applied to scheduling to satisfy the hard and soft constraints of a cyclical schedule that would ease the head of the auxiliary police in building new schedules. To accomplish this goal, shift types were assigned so as to provide a fair schedule that takes into account the auxiliary police's policies and preferences. The model was run using Lingo software. Three of the four goals set for the study were achieved. In addition, the results gave an equal allocation for every auxiliary police officer, namely 70% for total duty and 30% for the day. Furthermore, the schedule was able to cyclically generate another 10 schedule sets. More importantly, the model provided unbiased scheduling of the auxiliary police, leading to high satisfaction in auxiliary police management.
NASA Astrophysics Data System (ADS)
Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.
2016-08-01
In this paper a survey of predictive and reactive scheduling methods is carried out in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time to Failure and Mean Time to Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules when a bottleneck failure occurs before, at the beginning of, or after planned maintenance actions? The efficiency of predictive schedules is evaluated using the criteria makespan, total tardiness, flow time and idle time. The efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper continues the research conducted in [1], where the survey of predictive and reactive scheduling methods covered only small scheduling problems.
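The predictive-schedule criteria listed above can be computed directly from a schedule. Below is a minimal sketch, assuming a single machine, non-overlapping jobs, and all jobs released at t = 0; the function and tuple layout are hypothetical, not from the paper.

```python
def schedule_metrics(jobs):
    """jobs: list of (start, finish, due) tuples for one machine.
    Returns (makespan, total tardiness, total flow time, idle time)."""
    makespan = max(finish for _, finish, _ in jobs)
    tardiness = sum(max(0, finish - due) for _, finish, due in jobs)
    flow_time = sum(finish for _, finish, _ in jobs)   # release times = 0
    busy = sum(finish - start for start, finish, _ in jobs)
    idle = makespan - busy                             # gaps between jobs
    return makespan, tardiness, flow_time, idle
```

For jobs [(0, 3, 4), (4, 6, 5)] this yields a makespan of 6, one unit of tardiness, a total flow time of 9, and one idle unit in the gap between the jobs; comparing these values before and after a simulated failure is the kind of robustness comparison the survey performs.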
The key to successful management of STS operations: An integrated production planning system
NASA Technical Reports Server (NTRS)
Johnson, W. A.; Thomasen, C. T.
1985-01-01
Space Transportation System operations managers are being confronted with a unique set of challenges as a result of increasing flight rates, the demand for flight manifest/production schedule flexibility and an emphasis on continued cost reduction. These challenges have created the need for an integrated production planning system that provides managers with the capability to plan, schedule, status and account for an orderly flow of products and services across a large, multi-discipline organization. With increased visibility into the end-to-end production flow for individual and parallel missions in process, managers can assess the integrated impact of changes, identify and measure the interrelationships of resource, schedule, and technical performance requirements and prioritize productivity enhancements.
TopMaker: A Technique for Automatic Multi-Block Topology Generation Using the Medial Axis
NASA Technical Reports Server (NTRS)
Heidmann, James D. (Technical Monitor); Rigby, David L.
2004-01-01
A two-dimensional multi-block topology generation technique has been developed. Very general configurations are addressable by the technique. A configuration is defined by a collection of non-intersecting closed curves, which will be referred to as loops. More than a single loop implies that holes exist in the domain, which poses no problem. This technique requires only the medial vertices and the touch points that define each vertex. From the information about the medial vertices, the connectivity between medial vertices is generated. The physical shape of the medial edge is not required. By applying a few simple rules to each medial edge, the multiblock topology is generated with no user intervention required. The resulting topologies contain only the level of complexity dictated by the configurations. Grid lines remain attached to the boundary except at sharp concave turns where a change in index family is introduced as would be desired. Keeping grid lines attached to the boundary is especially important in the area of computational fluid dynamics where highly clustered grids are used near no-slip boundaries. This technique is simple and robust and can easily be incorporated into the overall grid generation process.
NASA Astrophysics Data System (ADS)
Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary
1999-01-01
The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with `actual' truth measurements of the entire image area that are not subject to measurement error thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.
Modeling and Grid Generation of Iced Airfoils
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Hackenberg, Anthony W.; Pennline, James A.; Schilling, Herbert W.
2007-01-01
SmaggIce Version 2.0 is a software toolkit for geometric modeling and grid generation for two-dimensional, single- and multi-element, clean and iced airfoils. A previous version of SmaggIce was described in Preparing and Analyzing Iced Airfoils, NASA Tech Briefs, Vol. 28, No. 8 (August 2004), page 32. To recapitulate: Ice shapes make it difficult to generate quality grids around airfoils, yet these grids are essential for predicting ice-induced complex flow. This software efficiently creates high-quality structured grids with tools that are uniquely tailored for various ice shapes. SmaggIce Version 2.0 significantly enhances the previous version primarily by adding the capability to generate grids for multi-element airfoils. This version of the software is an important step in streamlining the aeronautical analysis of iced airfoils using computational fluid dynamics (CFD) tools. The user may prepare the ice shape, define the flow domain, decompose it into blocks, generate grids, modify/divide/merge blocks, and control grid density and smoothness. All these steps may be performed efficiently even for the difficult glaze and rime ice shapes. Providing the means to generate highly controlled grids near rough ice, the software includes the creation of a wrap-around block (called the "viscous sublayer block"), which is a thin, C-type block around the wake line and iced airfoil. For multi-element airfoils, the software makes use of grids that wrap around and fill in the areas between the viscous sublayer blocks for all elements that make up the airfoil. A scripting feature records the history of interactive steps, which can be edited and replayed later to produce other grids. Using this version of SmaggIce, ice shape handling and grid generation can become a practical engineering process, rather than a laborious research effort.
300 Area treated effluent disposal facility sampling schedule. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loll, C.M.
1995-03-28
This document is the interface between the 300 Area liquid effluent process engineering (LEPE) group and the waste sampling and characterization facility (WSCF), concerning process control samples. It contains a schedule for process control samples at the 300 Area TEDF which describes the parameters to be measured, the frequency of sampling and analysis, the sampling point, and the purpose for each parameter.
Incentive Compatible Online Scheduling of Malleable Parallel Jobs with Individual Deadlines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Thomas E.; Grosu, Daniel
2010-09-13
We consider the online scheduling of malleable jobs on parallel systems, such as clusters, symmetric multiprocessing computers, and multi-core processor computers. Malleable jobs are a model of parallel processing in which jobs adapt to the number of processors assigned to them. This model permits the scheduler and resource manager to make more efficient use of the available resources. Each malleable job is characterized by arrival time, deadline, and value. If the job completes by its deadline, the user earns the payoff indicated by the value; otherwise, she earns a payoff of zero. The scheduling objective is to maximize the sum of the values of the jobs that complete by their associated deadlines. Complicating the matter is that users in the real world are rational and will attempt to manipulate the scheduler by misreporting their jobs' parameters if it benefits them to do so. To mitigate this behavior, we design an incentive compatible online scheduling mechanism. Incentive compatibility assures us that the users will obtain the maximum payoff only if they truthfully report their jobs' parameters to the scheduler. Finally, we simulate and study the mechanism to show the effects of misreports on the cheaters and on the system.
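A minimal sketch of the value-maximizing idea behind deadline scheduling of malleable jobs (this is not the paper's incentive-compatible mechanism, only a simplified greedy heuristic; the job data, field names, and value-density rule are invented for illustration). A malleable job needing `work` processor-hours finishes in `work/p` hours on `p` processors:

```python
import math

# Illustrative sketch: greedy, value-density-ordered admission of
# malleable jobs with deadlines. Not the paper's mechanism.
def schedule(jobs, total_procs):
    """jobs: list of dicts with id, arrival, deadline, value, work.
    Accept a job if the smallest processor count that meets its
    deadline is still available; return accepted job ids."""
    accepted, used = [], 0
    for job in sorted(jobs, key=lambda j: -j["value"] / j["work"]):
        free = total_procs - used
        window = job["deadline"] - job["arrival"]
        if free <= 0 or window <= 0:
            continue
        # smallest processor count that still meets the deadline
        need = math.ceil(job["work"] / window)
        if need <= free:
            accepted.append(job["id"])
            used += need
    return accepted

jobs = [
    {"id": "a", "arrival": 0, "deadline": 4, "value": 8.0, "work": 8},
    {"id": "b", "arrival": 0, "deadline": 2, "value": 6.0, "work": 6},
    {"id": "c", "arrival": 0, "deadline": 1, "value": 1.0, "work": 9},
]
print(schedule(jobs, 5))  # → ['a', 'b']
```

Under truthful reports this collects the high-value feasible jobs; the paper's contribution is precisely to make such reports the users' best strategy.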
Design Considerations for a New Terminal Area Arrival Scheduler
NASA Technical Reports Server (NTRS)
Thipphavong, Jane; Mulfinger, Daniel
2010-01-01
Design of a terminal area arrival scheduler depends on the interrelationship between throughput, delay, and controller intervention. The main contribution of this paper is an analysis of this interdependence for several stochastic behaviors of expected system performance distributions in the aircraft's time of arrival at the meter fix and runway. Results of this analysis serve to guide the scheduler design choices for key control variables. Two types of variables are analyzed: separation buffers and terminal delay margins. The choice of these decision variables was tested using sensitivity analysis. Analysis suggests that it is best to set the separation buffer at the meter fix to its minimum and adjust the runway buffer to attain the desired system performance. Delay margin was found to have the least effect. These results help characterize the variables most influential in the scheduling operations of terminal area arrivals.
5 CFR 532.259 - Special appropriated fund wage schedules for U.S. insular areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Special appropriated fund wage schedules for U.S. insular areas. 532.259 Section 532.259 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.259 Special...
Zeller, Katherine A.; Vickers, T. Winston; Ernest, Holly B.; Boyce, Walter M.
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found pumas avoided urban and agricultural areas and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S.
Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species. PMID:28609466
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groth, B.D.
The Multi-Function Waste Tank Facility (MWTF) consists of four nominal 1-million-gallon underground double-shell tanks located in the 200-East area, and two tanks of the same capacity in the 200-West area. MWTF will provide environmentally safe storage capacity for wastes generated during remediation/retrieval activities of existing waste storage tanks. This document delineates in detail the information to be used for effective implementation of the Functional Design Criteria requirements.
Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Hou, Zhangshuan; Meng, Da
2016-07-17
In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas. They are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.
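The forecast-plus-residual-uncertainty idea above can be illustrated with the simplest member of the ARIMA family, an AR(1) model fit by least squares (a toy sketch only; the study uses full ARIMA with PCA and sequential Gaussian simulation, and the load series below is invented):

```python
import math

# Toy AR(1) sketch: fit, forecast, and size the residual uncertainty.
def fit_ar1(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

def forecast_ar1(series, phi, steps):
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

load = [10.0, 9.0, 8.1, 7.3, 6.6, 5.9]      # invented hourly loads
phi = fit_ar1(load)
resid = [x - phi * p for x, p in zip(load[1:], load[:-1])]
sigma = math.sqrt(sum(r * r for r in resid) / len(resid))
pred = forecast_ar1(load, phi, 2)
# 95% one-step interval half-width from the residual spread
print(round(phi, 3), [round(p, 2) for p in pred], round(1.96 * sigma, 3))
```

Sampling many realizations from the residual distribution, rather than reporting a single interval, is what yields the "multiple short-term prediction realizations" the abstract describes.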
The application of connectionism to query planning/scheduling in intelligent user interfaces
NASA Technical Reports Server (NTRS)
Short, Nicholas, Jr.; Shastri, Lokendra
1990-01-01
In the mid-1990s, the Earth Observing System (EOS) will generate an estimated 10 terabytes of data per day. This enormous amount of data will require the use of sophisticated technologies from real-time distributed Artificial Intelligence (AI) and data management. Setting aside the broader problems of distributed AI, efficient models were developed for query planning and/or scheduling in intelligent user interfaces that reside in a network environment. Before intelligent query planning can be done, a model for real-time AI planning and/or scheduling must be developed. As Connectionist Models (CM) have shown promise in reducing run times, a connectionist approach to AI planning and/or scheduling is proposed. The solution involves merging a CM rule-based system with a general spreading activation model for the generation and selection of plans. The system was implemented in the Rochester Connectionist Simulator and runs on a Sun 3/260.
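A minimal sketch of the spreading-activation idea for plan selection (illustrative only; the paper's system couples this with a connectionist rule base in the Rochester Connectionist Simulator, and the graph and decay constant below are invented):

```python
# Illustrative spreading activation over a plan graph: a clamped source
# node pushes decayed activation along its edges each round, and the
# most active candidate plan is selected.
def spread(graph, source, candidates, decay=0.5, rounds=5):
    """graph: node -> list of successor nodes (repeat a successor to
    give its edge more weight). Returns the winning candidate."""
    act = {n: 0.0 for n in graph}
    act[source] = 1.0
    for _ in range(rounds):
        nxt = {n: 0.0 for n in graph}
        for node, level in act.items():
            outs = graph.get(node, [])
            for nb in outs:
                nxt[nb] += decay * level / len(outs)
        nxt[source] = 1.0  # clamped external input
        act = nxt
    return max(candidates, key=lambda n: act[n])

graph = {
    "query": ["plan_a", "plan_b", "plan_b"],  # plan_b more strongly linked
    "plan_a": ["step1"],
    "plan_b": [],
    "step1": [],
}
print(spread(graph, "query", ["plan_a", "plan_b"]))  # → plan_b
```

The appeal for real-time planning is that each round is a local, parallelizable update, which is what made connectionist models attractive for run time.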
Maintaining consistency between planning hierarchies: Techniques and applications
NASA Technical Reports Server (NTRS)
Zoch, David R.
1987-01-01
In many planning and scheduling environments, it is desirable to be able to view and manipulate plans at different levels of abstraction, allowing users the option of viewing and manipulating either a very detailed representation of the plan or a high-level, more abstract version of it. Generating a detailed plan from a more abstract plan requires domain-specific planning/scheduling knowledge; the reverse process of generating a high-level plan from a detailed plan (Reverse Plan Maintenance, or RPM) requires having the system remember the actions it took based on its domain-specific knowledge and its reasons for taking those actions. This reverse plan maintenance process is described as implemented in a specific planning and scheduling tool, the Mission Operations Planning Assistant (MOPA), as are the applications of RPM to other planning and scheduling problems, emphasizing the knowledge needed to maintain the correspondence between the different hierarchical planning levels.
Canavan, Maureen E.; Linnander, Erika; Ahmed, Shirin; Mohammed, Halima; Bradley, Elizabeth H.
2018-01-01
Background: Over the last decade, Ethiopia has made impressive national improvements in health outcomes, including reductions in maternal, neonatal, infant, and child mortality attributed in large part to their Health Extension Program (HEP). As this program continues to evolve and improve, understanding the unit cost of health extension worker (HEW) services is fundamental to planning for future growth and ensuring adequate financial support to deliver effective primary care throughout the country. Methods: We sought to examine and report the data needed to generate a HEW fee schedule that would allow for full cost recovery for HEW services. Using HEW activity data and estimates from national studies and local systems we were able to estimate salary costs and the average time spent by an HEW per patient/community encounter for each type of services associated with specific users. Using this information, we created separate fee schedules for activities in urban and rural settings with two estimates of non-salary multipliers to calculate the total cost for HEW services. Results: In the urban areas, the HEW fees for full cost recovery of the provision of services (including salary, supplies, and overhead costs) ranged from 55.1 birr to 209.1 birr per encounter. The rural HEW fees ranged from 19.6 birr to 219.4 birr. Conclusion: Efforts to support health system strengthening in low-income settings have often neglected to generate adequate, actionable data on the costs of primary care services. In this study, we have combined time-motion and available financial data to generate a fee schedule that allows for full cost recovery of the provision of services through billable health education and service encounters provided by Ethiopian HEWs. 
This may be useful in other country settings where managers seek to make evidence-informed planning and resource allocation decisions to address high burden of disease within the context of weak administrative data systems and severe financial constraints. PMID:29764103
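The fee construction described above reduces to a simple formula: salary cost per minute, times minutes per encounter, times a non-salary multiplier covering supplies and overhead. A hedged sketch (every number below is invented for illustration, not a value from the study):

```python
# Illustrative full-cost-recovery fee per HEW encounter.
# fee = (salary / working minutes) * minutes per encounter * multiplier
def encounter_fee(monthly_salary, work_minutes_per_month,
                  minutes_per_encounter, nonsalary_multiplier):
    salary_per_min = monthly_salary / work_minutes_per_month
    return salary_per_min * minutes_per_encounter * nonsalary_multiplier

fee = encounter_fee(monthly_salary=1500.0,       # birr, invented
                    work_minutes_per_month=9600,  # ~20 days x 8 h
                    minutes_per_encounter=40,     # from time-motion data
                    nonsalary_multiplier=2.0)     # supplies + overhead
print(round(fee, 2))  # → 12.5 birr per encounter under these assumptions
```

The urban/rural spread the authors report comes from varying exactly these inputs: encounter times and multipliers differ by setting and service type.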
Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task One Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Shuai; Makarov, Yuri V.
This is a report for task one of the tail event analysis project for BPA. A tail event refers to the situation in a power system in which unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators fall short of scheduled output, so that the imbalance between generation and load becomes very significant. This type of event occurs infrequently and appears on the tails of the distribution of system power imbalance; it is therefore referred to as a tail event. This report analyzes what happened during the Electric Reliability Council of Texas (ERCOT) reliability event on February 26, 2008, which was widely reported because of the involvement of wind generation. The objective is to identify sources of the problem, solutions to it, and potential improvements that can be made to the system. Lessons learned from the analysis include the following: (1) Large mismatch between generation and load can be caused by load forecast error, wind forecast error, and generation scheduling control error on traditional generators, or a combination of all of the above; (2) The capability of system balancing resources should be evaluated both in capacity (MW) and in ramp rate (MW/min), and be procured accordingly to meet both requirements. The resources need to be able to cover a range corresponding to the variability of load and wind in the system, in addition to other uncertainties; (3) Unexpected ramps caused by load and wind can both become the cause leading to serious issues; (4) A look-ahead tool evaluating system balancing requirements during real-time operations and comparing them with available system resources should be very helpful to system operators in predicting the forthcoming of similar events and planning ahead; and (5) Demand response (only load reduction in the ERCOT event) can effectively reduce load-generation mismatch and terminate frequency deviation in an emergency situation.
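Lesson (2) above, checking balancing resources against both a capacity (MW) and a ramp-rate (MW/min) requirement, can be sketched as follows (fleet data and thresholds are invented; a resource can only deliver what its ramp rate reaches within the look-ahead horizon):

```python
# Illustrative dual check on a balancing fleet: enough MW within the
# horizon, and enough aggregate MW/min of ramping capability.
def balancing_ok(resources, need_mw, need_ramp, horizon_min):
    """resources: list of (headroom_mw, ramp_mw_per_min) tuples.
    A unit contributes at most min(headroom, ramp * horizon)."""
    cap = sum(min(h, r * horizon_min) for h, r in resources)
    ramp = sum(r for _, r in resources)
    return cap >= need_mw and ramp >= need_ramp

fleet = [(300.0, 5.0),    # large but slow unit
         (150.0, 20.0)]   # small but fast unit
print(balancing_ok(fleet, need_mw=350.0, need_ramp=20.0,
                   horizon_min=10))  # → False: ramp-limited within 10 min
```

The example shows why capacity alone is misleading: the fleet holds 450 MW of headroom, yet only 200 MW is reachable in a 10-minute window, the kind of shortfall the ERCOT event exposed.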
Human Identities and Nation Building: Comparative Analysis, Markets, and the Modern University
ERIC Educational Resources Information Center
Callejo Pérez, David; Hernández Ulloa, Abel; Martínez Ruiz, Xicoténcatl
2014-01-01
The purpose of this article is to discuss the dilemma of the multi-university in sustainable education, research, and outreach by addressing some of the ways in which universities must generate actions that seek to address these challenges, develop strategic relationships, and maximize their potential in the areas of teaching, research and…
Impact of Feed Delivery Pattern on Aerial Particulate Matter and Behavior of Feedlot Cattle †
Mitloehner, Frank M.; Dailey, Jeff W.; Morrow, Julie L.; McGlone, John J.
2017-01-01
Simple Summary Fine particulate matter (with less than 2.5 microns diameter; aka PM2.5) is a human and animal health concern because it can carry microbes and chemicals into the lungs. Particulate matter (PM) in general emitted from cattle feedlots can reach high concentrations. When feedlot cattle were given an altered feeding schedule (ALT) that more closely reflected their biological feeding times compared with conventional morning feeding (CON), PM2.5 generation at peak times was substantially lowered. Average daily generation of PM2.5 was decreased by 37% when cattle behavior was redirected away from PM-generating behaviors and toward evening feeding behaviors. Behavioral problems such as agonistic (i.e., aggressive) and bulling (i.e., mounting each other) behaviors also were reduced several fold among ALT compared with CON cattle. Intake of feed was less and daily body weight gain tended to be less with the altered feeding schedule while efficiency of feed utilization was not affected. Although ALT may pose a challenge in feed delivery and labor scheduling, cattle had fewer behavioral problems and reduced PM2.5 generation when feed delivery times matched with the natural drive to eat in a crepuscular pattern. Abstract Fine particulate matter with less than 2.5 microns diameter (PM2.5) generated by cattle in feedlots is an environmental pollutant and a potential human and animal health issue. The objective of this study was to determine if a feeding schedule affects cattle behaviors that promote PM2.5 in a commercial feedlot. The study used 2813 crossbred steers housed in 14 adjacent pens at a large-scale commercial West Texas feedlot. Treatments were conventional feeding at 0700, 1000, and 1200 (CON) or feeding at 0700, 1000, and 1830 (ALT), the latter feeding time coincided with dusk. A mobile behavior lab was used to quantify behaviors of steers that were associated with generation of PM2.5 (e.g., fighting, mounting of peers, and increased locomotion). 
PM2.5 samplers measured respirable particles with a mass median diameter ≤2.5 μm (PM2.5) every 15 min over a period of 7 d in April and May. Simultaneously, the ambient temperature, humidity, wind speed and direction, precipitation, air pressure, and solar radiation were measured with a weather station. Elevated downwind PM2.5 concentrations were measured at dusk, when cattle that were fed according to the ALT vs. the CON feeding schedule, demonstrated less PM2.5-generating behaviors (p < 0.05). At dusk, steers on ALT vs. CON feeding schedules ate or were waiting to eat (standing in second row behind feeding cattle) at much greater rates (p < 0.05). Upwind PM2.5 concentrations were similar between the treatments. Downwind PM2.5 concentrations averaged over 24 h were lower from ALT compared with CON pens (0.072 vs. 0.115 mg/m3, p < 0.01). However, dry matter intake (DMI) was less (p < 0.05), and average daily gain (ADG) tended to be less (p < 0.1) in cattle that were fed according to the ALT vs. the CON feeding schedules, whereas feed efficiency (aka gain to feed, G:F) was not affected. Although ALT feeding may pose a challenge in feed delivery and labor scheduling, cattle exhibited fewer PM2.5-generating behaviors and reduced generation of PM2.5 when feed delivery times matched the natural desires of cattle to eat in a crepuscular pattern. PMID:28257061
Radke, Oliver C; Schneider, Thomas; Braune, Anja; Pirracchio, Romain; Fischer, Felix; Koch, Thea
2016-09-28
Both Electrical Impedance Tomography (EIT) and Computed Tomography (CT) allow the estimation of the lung area. We compared two algorithms for the detection of the lung area per quadrant from the EIT images with the lung areas derived from the CT images. 39 outpatients who were scheduled for an elective CT scan of the thorax were included in the study. For each patient we recorded EIT images immediately before the CT scan. The lung area per quadrant was estimated from both CT and EIT data, using two different algorithms for the EIT data. Data showed considerable variation during spontaneous breathing of the patients. Overall correlation between EIT and CT was poor (0.58-0.77); the correlation between the two EIT algorithms was better (0.90-0.92). Bland-Altman analysis revealed an absence of bias but wide limits of agreement. Lung area estimation from CT and EIT differs significantly, most probably because of the fundamental difference in image generation.
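The Bland-Altman quantities referred to above, the bias and the 95% limits of agreement between two measurement methods, can be computed in a few lines (a minimal sketch; the paired lung-area values below are invented, not the study's data):

```python
import math

# Minimal Bland-Altman computation: mean difference (bias) and
# 95% limits of agreement (bias +/- 1.96 * SD of the differences).
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

eit = [10.0, 12.0, 9.0, 11.0]   # invented lung areas, method 1
ct = [11.0, 11.0, 10.0, 10.0]   # invented lung areas, method 2
bias, lo, hi = bland_altman(eit, ct)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

The study's finding, no bias but wide limits, corresponds to a `bias` near zero with a large `hi - lo` spread: the methods agree on average but not on individual patients.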
Exploiting loop level parallelism in nonprocedural dataflow programs
NASA Technical Reports Server (NTRS)
Gokhale, Maya B.
1987-01-01
This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language has been implemented. The scheduling component of the compiler and the restructuring transformation are described.
Whitford, David L; Paul, Gillian; Smith, Susan M
2013-07-01
The purpose of this study is to discuss the use of a system of patient generated "frequently asked questions" (FAQs) in order to gain insight into the information needs of participants. FAQs generated during group meetings taking place in a randomized controlled trial of peer support in type 2 diabetes are described in terms of their frequencies and topic areas. Data from focus groups and semi-structured interviews concerning the FAQs was subjected to content analysis. 59/182 (33%) of the FAQs were directly related to the topic area of the scheduled peer support meeting with foot care, eyes and kidneys generating the most specific questions. The FAQs addressed mainly knowledge and concerns. The FAQs appeared to enhance peer support and also enabled participants to ask questions to experts that they may not have asked in a clinic situation. The use of FAQs to support peer supporters proved beneficial in a randomized controlled trial and may be usefully added to the tools used within a peer support framework. The use of FAQs provided valuable insight into the informal information needs of people with diabetes. Means of providing a similar structure in routine clinical care should be explored. Copyright © 2013 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
Multi-Element Integrated Project Planning at Kennedy Space Center
NASA Technical Reports Server (NTRS)
Mullon, Robert
2008-01-01
This presentation demonstrates how the ASRC Scheduling team developed working practices to support multiple NASA and ASRC Project Managers using the enterprise capabilities of Primavera P6 and P6 Web Access. This work has proceeded as part of Kennedy Ground Systems' preparation for its transition from the Shuttle Program to the Constellation Program. The presenters will cover Primavera's enterprise-class capabilities for schedule development, integrated critical path analysis, and reporting, as well as advanced Primavera P6 Web Access tools and techniques for communicating project status.
System-level power optimization for real-time distributed embedded systems
NASA Astrophysics Data System (ADS)
Luo, Jiong
Power optimization is one of the crucial design considerations for modern electronic systems. In this thesis, we present several system-level power optimization techniques for real-time distributed embedded systems, based on dynamic voltage scaling, dynamic power management, and management of peak power and variance of the power profile. Dynamic voltage scaling has been widely acknowledged as an important and powerful technique to trade off dynamic power consumption and delay. Efficient dynamic voltage scaling requires effective variable-voltage scheduling mechanisms that can adjust voltages and clock frequencies adaptively based on workloads and timing constraints. For this purpose, we propose static variable-voltage scheduling algorithms utilizing critical-path-driven timing analysis for the case when tasks are assumed to have uniform switching activities, as well as energy-gradient-driven slack allocation for a more general scenario. The proposed techniques can achieve close-to-optimal power savings with very low computational complexity, without violating any real-time constraints. We also present algorithms for power-efficient joint scheduling of multi-rate periodic task graphs along with soft aperiodic tasks. The power issue is addressed through both dynamic voltage scaling and power management. Periodic task graphs are scheduled statically. Flexibility is introduced into the static schedule to allow the on-line scheduler to make local changes to PE schedules through resource reclaiming and slack stealing, without interfering with the validity of the global schedule. We provide a unified framework in which the response times of aperiodic tasks and power consumption are dynamically optimized simultaneously. Interconnection network fabrics point to a new generation of power-efficient and scalable interconnection architectures for distributed embedded systems. 
As the system bandwidth continues to increase, interconnection networks become power/energy limited as well. Variable-frequency links have been designed by circuit designers for both parallel and serial links, which can adaptively regulate the supply voltage of transceivers to a desired link frequency, to exploit the variations in bandwidth requirement for power savings. We propose solutions for simultaneous dynamic voltage scaling of processors and links. The proposed solution considers real-time scheduling, flow control, and packet routing jointly. It can trade off the power consumption on processors and communication links via efficient slack allocation, and lead to more power savings than dynamic voltage scaling on processors alone. For battery-operated systems, the battery lifespan is an important concern. Due to the effects of discharge rate and battery recovery, the discharge pattern of batteries has an impact on the battery lifespan. Battery models indicate that even under the same average power consumption, reducing peak power current and variance in the power profile can increase the battery efficiency and thereby prolong battery lifetime. To take advantage of these effects, we propose battery-driven scheduling techniques for embedded applications, to reduce the peak power and the variance in the power profile of the overall system under real-time constraints. The proposed scheduling algorithms are also beneficial in addressing reliability and signal integrity concerns by effectively controlling peak power and variance of the power profile.
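The energy-gradient-driven slack allocation mentioned above can be sketched with a common convex energy model, E ~ W^2/t for a task of work W stretched over time t, so that slack always goes to the task whose energy drops the most (illustrative only; the thesis's algorithms, task sets, and constants differ, and the numbers below are invented):

```python
# Illustrative energy-gradient slack allocation for voltage scaling:
# repeatedly grant a unit of slack to the task with the steepest
# energy reduction, under the convex model E(w, t) = w*w / t.
def allocate_slack(tasks, slack, step=1.0):
    """tasks: dict name -> [work, time]; mutates times in place."""
    def energy(w, t):
        return w * w / t
    for _ in range(int(slack / step)):
        best = max(tasks, key=lambda n: energy(*tasks[n]) -
                   energy(tasks[n][0], tasks[n][1] + step))
        tasks[best][1] += step
    return tasks

tasks = {"a": [10.0, 5.0],   # heavy task, tight schedule
         "b": [4.0, 4.0]}    # light task, relaxed schedule
allocate_slack(tasks, slack=3.0)
print(tasks)  # all three slack units go to the heavy task "a"
```

Stretching a task's time corresponds to lowering its voltage and frequency; the greedy gradient rule is what makes the savings close to optimal at low computational cost.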
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution)-satellite image datasets (Pléiades, WorldView-3) revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
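A per-pixel sketch of the robustness properties named above, discarding low-quality matches and outliers before averaging depth hypotheses (the actual method is a multi-directional dynamic-programming fusion; this only illustrates the outlier and quality handling, with invented depths and thresholds):

```python
import statistics

# Illustrative robust fusion of several depth hypotheses for one pixel:
# drop low-confidence matches, then drop outliers far from the median.
def fuse_pixel(depths, quals, min_qual=0.5, tol=0.5):
    """depths: candidate depths from different image pairs;
    quals: matching-quality indicators in [0, 1]."""
    kept = [d for d, q in zip(depths, quals) if q >= min_qual]
    if not kept:
        return None  # non-correlated zone, e.g. an occlusion
    med = statistics.median(kept)
    inliers = [d for d in kept if abs(d - med) <= tol]
    return sum(inliers) / len(inliers)

# one gross outlier (25.0) and one low-confidence match (q = 0.2)
print(fuse_pixel([10.1, 10.3, 25.0, 10.2], [0.9, 0.8, 0.9, 0.2]))
```

Here the outlier is rejected by the median test and the low-quality match by its indicator, leaving the consistent hypotheses to define the surface; the dynamic-programming step in the paper additionally enforces smoothness across neighboring pixels.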
An expert system for planning and scheduling in a telerobotic environment
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.
1991-01-01
A knowledge based approach to assigning tasks to multi-agents working cooperatively in jobs that require a telerobot in the loop was developed. The generality of the approach allows for such a concept to be applied in a nonteleoperational domain. The planning architecture known as the task oriented planner (TOP) uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open ended architecture developed with a NEXPERT expert system shell and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and nonstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique is not to optimize a given scheduling criterion as in classical job and/or flow shop problems. For a teleoperation job schedule, criteria are situation dependent. A criterion selection is fuzzily embedded in the task-skill matrix computation. However, goal achievement with minimum expected risk to the human operator is emphasized.
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Giorgos, E-mail: garab@math.uoc.gr; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Plechac, Petr, E-mail: plechac@math.udel.edu
2012-10-01
We present a mathematical framework for constructing and analyzing parallel algorithms for lattice kinetic Monte Carlo (KMC) simulations. The resulting algorithms have the capacity to simulate a wide range of spatio-temporal scales in spatially distributed, non-equilibrium physicochemical processes with complex chemistry and transport micro-mechanisms. Rather than focusing on exactly reconstructing the stochastic trajectories, our approach relies on approximating the evolution of observables, such as density, coverage, and correlations. More specifically, we develop a spatial domain decomposition of the Markov operator (generator) that describes the evolution of all observables according to the kinetic Monte Carlo algorithm. This domain decomposition corresponds to a decomposition of the Markov generator into a hierarchy of operators and can be tailored to specific hierarchical parallel architectures such as multi-core processors or clusters of graphical processing units (GPUs). Based on this operator decomposition, we formulate parallel fractional-step kinetic Monte Carlo algorithms by employing the Trotter theorem and its randomized variants; these schemes (a) are partially asynchronous on each fractional-step time window, and (b) are characterized by their communication schedule between processors. The proposed mathematical framework allows us to rigorously justify the numerical and statistical consistency of the proposed algorithms, showing the convergence of our approximating schemes to the original serial KMC. The approach also provides a systematic evaluation of different processor communication schedules. We carry out detailed benchmarking of the parallel KMC schemes using available exact solutions, for example in Ising-type systems, and we demonstrate the capabilities of the method to simulate complex spatially distributed reactions at very large scales on GPUs. Finally, we discuss workload balancing between processors and propose a re-balancing scheme based on probabilistic mass transport methods.
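The checkerboard fractional-step idea behind these schemes can be sketched in a few lines: the lattice is split into blocks, and within each time window the even-indexed blocks evolve independently (hence in parallel) before the odd-indexed ones, following the Trotter splitting of the generator. The sketch below is a minimal serial illustration with trivial spin-flip dynamics; the function names and the single uniform rate are assumptions for illustration, not the authors' implementation.

```python
import math
import random

def kmc_step(lattice, sites, rate):
    """One KMC event restricted to `sites`: draw an exponential waiting
    time for the total rate over the block, then flip a random site."""
    total = rate * len(sites)
    dt = -math.log(random.random()) / total  # exponential waiting time
    i = random.choice(sites)
    lattice[i] ^= 1                          # toggle occupancy (0/1)
    return dt

def fractional_step_kmc(lattice, n_blocks, window, rate, n_windows):
    """Fractional-step KMC: within each time window, even blocks evolve
    independently (the parallelizable sweep), then odd blocks."""
    size = len(lattice) // n_blocks
    blocks = [list(range(b * size, (b + 1) * size)) for b in range(n_blocks)]
    t = 0.0
    for _ in range(n_windows):
        for parity in (0, 1):                # even sweep, then odd sweep
            for b in range(parity, n_blocks, 2):
                tau = 0.0
                while tau < window:          # serial KMC inside the block
                    tau += kmc_step(lattice, blocks[b], rate)
        t += window
    return lattice, t
```

In a true parallel run, each sweep's blocks would be dispatched to separate processors; here they are executed serially to show the communication/synchronization structure only.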
40 CFR 58.12 - Operating schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.12 Operating schedules. State and local... part. Area-specific PAMS operating schedules must be included as part of the PAMS network description... remains once every six days. No less frequently than as part of each 5-year network assessment, the most...
40 CFR 58.12 - Operating schedules.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.12 Operating schedules. State and local... part. Area-specific PAMS operating schedules must be included as part of the PAMS network description... remains once every six days. No less frequently than as part of each 5-year network assessment, the most...
NASA Astrophysics Data System (ADS)
Pierro, Marco; De Felice, Matteo; Maggioni, Enrico; Moser, David; Perotto, Alessandro; Spada, Francesco; Cornaro, Cristina
2017-04-01
The growing photovoltaic generation results in stochastic variability of the electric demand that could compromise the stability of the grid and increase the amount of energy reserve and the energy imbalance cost. On a regional scale, solar power estimation and forecasting are becoming essential for distribution system operators, transmission system operators, energy traders, and aggregators of generation. Indeed, the estimation of regional PV power can be used for PV power supervision and real-time control of the residual load. Mid-term PV power forecasts can be employed for transmission scheduling to reduce energy imbalance and the related cost of penalties, residual load tracking, trading optimization, and secondary energy reserve assessment. In this context, a new upscaling method was developed and used for estimation and mid-term forecasting of the distributed photovoltaic generation in a small area in the north of Italy under the control of a local DSO. The method is based on spatial clustering of the PV fleet and on neural network models that take satellite or numerical weather prediction data (centered on the cluster centroids) as input to estimate or predict the regional solar generation. It requires low computational effort and very little input information from users. The power estimation model achieved an RMSE of 3% of installed capacity. Intra-day forecasts (from 1 to 4 hours ahead) obtained an RMSE of 5-7%, while the one- and two-day forecasts achieved an RMSE of 7% and 7.5%, respectively. A model to estimate the forecast error and the prediction intervals was also developed. Photovoltaic production in the considered region provided 6.9% of the electric consumption in 2015. Since this PV penetration is very similar to that observed at the national level (7.9%), this is a good case study for analysing the impact of PV generation on the electric grid and the effects of PV power forecasting on transmission scheduling and on secondary reserve estimation.
It appears that, already at 7% PV penetration, distributed PV generation can have a great impact both on the DSO energy need and on the transmission scheduling capability. Indeed, for some hours of the day in summer, photovoltaic generation can provide from 50% to 75% of the energy that the local DSO would otherwise buy from the Italian TSO to cover the electrical demand. Moreover, the mid-term forecast can reduce the annual energy imbalance between the scheduled transmission and the actual one from 10% of the TSO energy supply (without the PV forecast) to 2%. Furthermore, it was shown that prediction intervals can be used not only to estimate the probability of a specific PV generation bid on the energy market, but also to reduce the energy reserve predicted for the next day. Two different methods for energy reserve estimation were developed and tested: the first is based on a clear-sky model, while the second makes use of the PV prediction intervals at a 95% confidence level. The latter reduces the amount of day-ahead energy reserve by 36% with respect to the clear-sky method.
Integrating Solar PV in Utility System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, A.; Botterud, A.; Wu, J.
2013-10-31
This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, and day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., sub-hourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling.
Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation. Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. 
Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score must exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimates of reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.
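The balancing-reserve rule described above (the 2.5th and 97.5th percentiles of 1-min deviations from the hour-ahead schedule, covering 95% of observed deviations) reduces to a one-line percentile computation. A minimal sketch, assuming deviations are given in MW with positive values meaning a generation shortfall; the function name and sign convention are illustrative, not the study's code.

```python
import numpy as np

def balancing_reserves(deviations_mw):
    """Estimate up/down balancing reserve requirements from historical
    1-minute deviations from the hour-ahead schedule, using the 2.5th
    and 97.5th percentiles of the deviation distribution."""
    lo, hi = np.percentile(deviations_mw, [2.5, 97.5])
    # positive deviation = shortfall (needs up-reserve);
    # negative deviation = surplus (needs down-reserve)
    return {"reserve_up_mw": hi, "reserve_down_mw": -lo}
```

In the study, percentiles are taken over a previous year of data and the resulting reserves are assumed fully deployable within 10 minutes.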
5 CFR 532.317 - Use of data from the nearest similar area.
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS PREVAILING RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.317 Use of data... of Defense, the lead agency shall, in establishing the regular schedule under the provisions of this... obtained from inside the local wage survey area. The regular schedule for Department of Defense prevailing...
5 CFR 532.317 - Use of data from the nearest similar area.
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATIONS PREVAILING RATE SYSTEMS Determining Rates for Principal Types of Positions § 532.317 Use of data... of Defense, the lead agency shall, in establishing the regular schedule under the provisions of this... obtained from inside the local wage survey area. The regular schedule for Department of Defense prevailing...
NASA Astrophysics Data System (ADS)
Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.
2015-12-01
Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of a simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near-real-time data visualization analytics capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.
Knowledge-Based Scheduling of Arrival Aircraft in the Terminal Area
NASA Technical Reports Server (NTRS)
Krzeczowski, K. J.; Davis, T.; Erzberger, H.; Lev-Ram, Israel; Bergh, Christopher P.
1995-01-01
A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences aircraft, assigns landing times, and assigns runways to arrivals by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base that was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload-reduction criteria, such as conflict avoidance. The objective of the algorithm is to devise an efficient plan to land the aircraft in a manner acceptable to air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.
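The paper's knowledge base encodes many hierarchical rules; as a much-simplified illustration of the sequencing and runway-assignment step, the sketch below orders aircraft first-come-first-served by estimated time of arrival and gives each the runway that allows the earliest landing under a minimum-separation rule. All names and the single separation parameter are assumptions; the actual system weighs delay and workload criteria far beyond this one rule.

```python
def schedule_arrivals(etas, min_sep_s, runways):
    """FCFS sketch: sequence aircraft by ETA (seconds) and assign each the
    runway allowing the earliest landing, enforcing a per-runway minimum
    separation. Returns (aircraft, runway, landing_time) tuples."""
    last_landing = {rw: float("-inf") for rw in runways}
    plan = []
    for ac, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        # pick the runway with the earliest feasible landing time
        best = min(runways, key=lambda rw: max(eta, last_landing[rw] + min_sep_s))
        t = max(eta, last_landing[best] + min_sep_s)
        last_landing[best] = t
        plan.append((ac, best, t))
    return plan
```

For example, with a 60 s separation and two runways, three closely spaced arrivals spread across both runways rather than queuing on one.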
Departure Queue Prediction for Strategic and Tactical Surface Scheduler Integration
NASA Technical Reports Server (NTRS)
Zelinski, Shannon; Windhorst, Robert
2016-01-01
A departure metering concept to be demonstrated at Charlotte Douglas International Airport (CLT) will integrate strategic and tactical surface scheduling components to combine the collaborative decision-making and efficiency benefits that these two methods of scheduling provide. This study analyzes the effect of tactical scheduling on strategic scheduler predictability. Strategic queue predictions and target gate pushback times to achieve a desired queue length are compared between fast-time simulations of CLT surface operations with and without tactical scheduling. The use of variable departure rates as a strategic scheduler input was shown to substantially improve queue predictions over static departure rates. With target queue length calibration, the strategic scheduler can be tuned to produce average delays within one minute of the tactical scheduler. However, root-mean-square differences between strategic and tactical delays were between 12 and 15 minutes because of the different methods the strategic and tactical schedulers use to predict takeoff times and generate gate pushback clearances. This demonstrates how difficult it is for the strategic scheduler to predict tactical-scheduler-assigned gate delays on an individual-flight basis, as the tactical scheduler adjusts the departure sequence to accommodate arrival interactions. Strategic/tactical scheduler compatibility may be improved by providing more arrival information to the strategic scheduler and by stabilizing tactical scheduler changes to the runway sequence in response to arrivals.
XOPPS - OEL PROJECT PLANNER/SCHEDULER TOOL
NASA Technical Reports Server (NTRS)
Mulnix, C. L.
1994-01-01
XOPPS is a window-based graphics tool for scheduling and project planning that provides easy and fast on-screen WYSIWYG editing capabilities. It has a canvas area which displays the full image of the schedule being edited. The canvas contains a header area for text and a schedule area for plotting graphic representations of milestone objects in a flexible timeline. XOPPS is object-oriented, but it is unique in its capability for creating objects that have date attributes. Each object on the screen can be treated as a unit for moving, editing, etc. There is a mouse interface for simple control of pointer location. The user can position objects to pixel resolution, but objects with an associated date are positioned automatically in their correct timeline position in the schedule area. The schedule area has horizontal lines across the page with capabilities for multiple pages and for editing the number of lines per page and the line grid. The text on a line can be edited and a line can be moved with all objects on the line moving with it. The timeline display can be edited to plot any time period in a variety of formats from Fiscal year to Calendar Year and days to years. Text objects and image objects (rasterfiles and icons) can be created for placement anywhere on the page. Milestone event objects with a single associated date (and optional text and milestone symbol) and activity objects with start and end dates (and an optional completion date) have unique editing panels for entering data. A representation for schedule slips is also provided with the capability to automatically convert a milestone event to a slip. A milestone schedule on another computer can be saved to an ASCII file to be read by XOPPS. The program can print a schedule to a PostScript file. Dependencies between objects can also be displayed on the chart through the use of precedence lines. This program is not intended to replace a commercial scheduling/project management program. 
Because XOPPS has an ASCII file interface it can be used in conjunction with a project management tool to produce schedules with a quality appearance. XOPPS is written in C-language for Sun series workstations running SunOS. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. A sample executable is included. XOPPS requires 375K main memory and 1.5Mb free disk space for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge in UNIX tar format. XOPPS was developed in 1992, based on the Sunview version of OPPS (NPO-18439) developed in 1990. It is a copyrighted work with all copyright vested in NASA.
Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao
2016-01-01
As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To address these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed; however, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the scheduling module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we perform a comparison between LMCpri and a cloud assisting architecture, and the results reveal that LMCpri presents a better performance advantage than the cloud assisting architecture.
PMID:27419854
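NSGA-II ranks candidate schedules by Pareto dominance over the two objectives, processing time and requester cost. A minimal sketch of that core ranking step, assuming each candidate is a (time, cost) tuple to be minimized; this omits the crowding-distance, crossover, and mutation machinery of the full algorithm.

```python
def dominates(a, b):
    """Schedule a dominates b if it is no worse in both processing time
    and requester cost, and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """First non-dominated front: the core ranking step of NSGA-II."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

Successive fronts (rank 2, rank 3, ...) are obtained by removing each front and re-applying the same test to the remainder.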
NASA Astrophysics Data System (ADS)
Adler, D. S.
2000-12-01
The Science Planning and Scheduling Team (SPST) of the Space Telescope Science Institute (STScI) has historically operated exclusively under VMS. Due to diminished support for VMS-based platforms at STScI, SPST is in the process of transitioning to Unix operations. In the summer of 1999, SPST selected Python as the primary scripting language for the operational tools and began translation of the VMS DCL code. As of October 2000, SPST has installed a utility library of 16 modules consisting of 8000 lines of code and 80 Python tools consisting of 13000 lines of code. All tasks related to calendar generation have been switched to Unix operations. Current work focuses on translating the tools used to generate the Science Mission Specifications (SMS). The software required to generate the Mission Schedule and Command Loads (PASS), maintained by another team at STScI, will take longer to translate than the rest of the SPST operational code. SPST is planning on creating tools to access PASS from Unix in the short term. We are on schedule to complete the work needed to fully transition SPST to Unix operations (while remotely accessing PASS on VMS) by the fall of 2001.
Robust optimisation-based microgrid scheduling with islanding constraints
Liu, Guodong; Starke, Michael; Xiao, Bailu; ...
2017-02-17
This paper proposes a robust-optimization-based optimal scheduling model for microgrid operation that considers constraints on islanding capability. Our objective is to minimize the total operation cost, including the generation cost and spinning reserve cost of local resources as well as the cost of purchasing energy from the main grid. In order to ensure the resiliency of a microgrid and improve the reliability of the local electricity supply, the microgrid is required to maintain enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation when the supply of power from the main grid is interrupted suddenly, i.e., when the microgrid transitions from grid-connected into islanded mode. Prevailing operational uncertainties in renewable energy resources and load are considered and captured using a robust optimization method. With a proper robustness level, the solution of the proposed scheduling model ensures successful islanding of the microgrid with minimum load curtailment and guarantees robustness against all possible realizations of the modeled operational uncertainties. Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator, and a battery demonstrate the effectiveness of the proposed scheduling model.
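The islanding constraint can be illustrated as a simple feasibility check: if the main-grid supply is lost, local up-reserve must cover the lost import plus the worst-case load increase and renewable shortfall, while down-reserve must cover the worst-case load decrease and renewable surplus. This is a deliberately simplified sketch, not the paper's full robust optimization model; the parameter names and additive worst-case bounds are assumptions for illustration.

```python
def islanding_feasible(grid_import_mw, up_reserve_mw, down_reserve_mw,
                       load_uncert_mw, renew_uncert_mw):
    """Check a simplified islanding constraint for one scheduling interval:
    up-reserve covers the lost grid import plus worst-case load growth and
    renewable shortfall; down-reserve covers the opposite deviations."""
    up_ok = up_reserve_mw >= grid_import_mw + load_uncert_mw + renew_uncert_mw
    down_ok = down_reserve_mw >= load_uncert_mw + renew_uncert_mw
    return up_ok and down_ok
```

In the actual model, this appears as a constraint inside the cost-minimizing optimization rather than a post-hoc check, with the uncertainty bounds set by the chosen robustness level.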
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomes, C.
This report describes a successful project for transference of advanced AI technology into the domain of planning of outages of nuclear power plants as part of DOD's dual-use program. ROMAN (Rome Lab Outage Manager) is the prototype system that was developed as a result of this project. ROMAN's main innovation compared to the current state of the art of outage management tools is its capability to automatically enforce safety constraints during the planning and scheduling phase. Another innovative aspect of ROMAN is the generation of more robust schedules that are feasible over time windows. In other words, ROMAN generates a family of schedules by assigning time intervals as start times to activities rather than single start times, without affecting the overall duration of the project. ROMAN uses a constraint satisfaction paradigm combining a global search tactic with constraint propagation. The derivation of very specialized representations for the constraints to perform efficient propagation is a key aspect for the generation of very fast schedules - constraints are compiled into the code, which is a novel aspect of our work using an automatic programming system, KIDS.
Furukawa, Hiroshi
2017-01-01
Round Robin based Intermittent Periodic Transmit (RR-IPT) has been proposed, which achieves highly efficient multi-hop relaying in multi-hop wireless backhaul networks (MWBN) where relay nodes are deployed two-dimensionally. This paper investigates a multi-channel packet scheduling and forwarding scheme for RR-IPT. Downlink traffic is forwarded by RR-IPT via one of the channels, while uplink traffic and part of the downlink traffic are accommodated on the other channel. Comparing IPT with carrier sense multiple access with collision avoidance (CSMA/CA) for the uplink/downlink packet forwarding channel, IPT is more effective in reducing the packet loss rate, whereas CSMA/CA is better in terms of system throughput and packet delay. PMID:29137164
Charge scheduling of an energy storage system under time-of-use pricing and a demand charge.
Yoon, Yourim; Kim, Yong-Hyuk
2014-01-01
A real-coded genetic algorithm is used to schedule the charging of an energy storage system (ESS), operated in tandem with renewable power by an electricity consumer who is subject to time-of-use pricing and a demand charge. Simulations based on the load and generation profiles of typical residential customers show that an ESS scheduled by our algorithm can reduce electricity costs by approximately 17% compared to a system without an ESS, and by 8% compared to a scheduling algorithm based on net power.
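The cost function such a genetic algorithm would minimize can be sketched directly from the tariff structure: time-of-use energy charges on the net load plus a demand charge on its peak. The sketch below assumes hourly resolution and ignores battery efficiency and state-of-charge limits; all names are illustrative, not the authors' formulation.

```python
def apply_ess(load_kw, schedule_kw):
    """Net load after applying an ESS charge schedule: positive entries
    charge the battery (adding load), negative entries discharge."""
    return [l + s for l, s in zip(load_kw, schedule_kw)]

def electricity_cost(net_load_kw, tou_price, demand_charge):
    """One-day bill: time-of-use energy charges on imported energy plus a
    demand charge on the peak net load (hourly resolution assumed)."""
    energy = sum(max(p, 0.0) * r for p, r in zip(net_load_kw, tou_price))
    peak = max(net_load_kw)
    return energy + demand_charge * max(peak, 0.0)
```

A GA individual is then simply a real-valued charge schedule, and its fitness is `electricity_cost(apply_ess(load, schedule), tou_price, demand_charge)`; shifting charging into cheap hours lowers both the energy and demand terms.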