Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products through an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a system composed of pipes, pumps, valves, and storage facilities used to transport different types of liquids. Typically, the products delivered by pipelines are petroleum grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, widely used in logistics and scheduling, are incorporated in this study. The distribution of multiple products through an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of the operating issues and problem complexity of single-source pipeline problems, together with a solution methodology that computes an input schedule minimizing the total time violation of due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, the reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem; it runs in no more than O(T·E) time. This dissertation also extends the study to the operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is likewise NP-complete, and a heuristic algorithm modified from the single-source version is introduced, which also runs in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the tested problems exceeded the optimal values by more than 30%, and approximately 40% were solved optimally by the algorithms.
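As a concrete illustration of the objective, the sketch below scores an input schedule by its total violation of the due delivery time-windows, assuming simple plug flow with a fixed transit time (the function names and the plug-flow simplification are ours, not the dissertation's):

```python
# Hypothetical sketch: total time-window violation of a pipeline input schedule.
# Assumes plug flow: a batch injected at time t reaches the delivery point
# after a fixed transit time (a simplification of the dissertation's model).

def total_violation(schedule, transit):
    """schedule: list of (input_time, window_start, window_end) per batch."""
    violation = 0.0
    for input_time, w_start, w_end in schedule:
        delivery = input_time + transit
        if delivery < w_start:
            violation += w_start - delivery   # early delivery
        elif delivery > w_end:
            violation += delivery - w_end     # late delivery
    return violation

# Example: two batches, transit time of 8 hours; the second arrives 1 hour late.
print(total_violation([(0, 6, 10), (4, 10, 11)], transit=8))  # 1.0
```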
NASA Astrophysics Data System (ADS)
Wu, NaiQi; Zhou, MengChu; Bai, LiPing; Li, ZhiWu
2016-07-01
In some refineries, storage tanks are located at two different sites, one for low-fusion-point crude oil and the other for high-fusion-point crude oil. Two pipelines are used to transport the different oil types. Due to the constraints arising from high-fusion-point oil transportation, scheduling such a system is challenging. This work studies the scheduling problem from a control-theoretic perspective. It proposes a hybrid Petri net method to model the system. It then finds the schedulability conditions by analysing the dynamic behaviour of the net model. Next, it proposes an efficient scheduling method to minimize the cost of high-fusion-point oil transportation. Finally, it presents a complex industrial case study to show the method's application.
VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans
NASA Astrophysics Data System (ADS)
Wang, Song; Gupta, Chetan; Mehta, Abhay
There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
30 CFR 250.1752 - How do I remove a pipeline?
Code of Federal Regulations, 2012 CFR
2012-07-01
... minimize such impacts; and (7) Projected removal schedule and duration. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; and (c) Flush the pipeline. ...
30 CFR 250.1752 - How do I remove a pipeline?
Code of Federal Regulations, 2013 CFR
2013-07-01
... minimize such impacts; and (7) Projected removal schedule and duration. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; and (c) Flush the pipeline. ...
30 CFR 250.1752 - How do I remove a pipeline?
Code of Federal Regulations, 2014 CFR
2014-07-01
... minimize such impacts; and (7) Projected removal schedule and duration. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; and (c) Flush the pipeline. ...
High-throughput bioinformatics with the Cyrille2 pipeline system
Fiers, Mark WEJ; van der Burgt, Ate; Datema, Erwin; de Groot, Joost CW; van Ham, Roeland CHJ
2008-01-01
Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed, and integrated with existing knowledge through the use of diverse sets of software tools, models, and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system, tracks what data enters the system, and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes them on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines. PMID:18269742
75 FR 16337 - Standards for Business Practices for Interstate Natural Gas Pipelines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-01
... transportation by power plant operators more difficult. In response to this need, in early 2004, NAESB... receipt and delivery points; and (3) changes to the intraday nomination schedule to increase the number of... conference on the issue of intraday pipeline nomination schedules. In this regard, NGSA asserts that NAESB...
30 CFR 250.1752 - How do I remove a pipeline?
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimize such impacts; and (7) Projected removal schedule and duration. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; and (c) Flush the pipeline. [67 FR 35406, May...
30 CFR 250.1752 - How do I remove a pipeline?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (7) Projected removal schedule and duration. (b) Pig the pipeline, unless the Regional Supervisor determines that pigging is not practical; and (c) Flush the pipeline. [67 FR 35406, May 17, 2002, as amended...
Improving the result of forecasting using reservoir and surface network simulation
NASA Astrophysics Data System (ADS)
Hendri, R. S.; Winarta, J.
2018-01-01
This study aimed to obtain more representative production forecasts using integrated simulation of the pipeline gathering system of the X field. Five main scenarios were considered, consisting of production forecasts for the existing condition, workover, and infill drilling, from which the best development scenario was then determined. The method of this study couples a reservoir simulator with a pipeline simulator, the so-called Integrated Reservoir and Surface Network Simulation. Well data from the reservoir simulator were integrated with the pipeline network simulator's to construct a new schedule, which served as input for the whole simulation procedure. The well designs were built in a well modeling simulator and then exported into the pipeline simulator. In the stand-alone approach, the reservoir prediction depends on a minimum tubing head pressure (THP) value for each well, and the pressure drop in the gathering network is not calculated; the same scenarios were also run as single-reservoir simulations. The integrated simulation produces results that approach the actual condition of the reservoir, as confirmed by the THP profiles, which differ between the two methods: the difference between the integrated simulation and the single-model simulation is 6-9%. The aim of solving the back-pressure problem in the pipeline gathering system of the X field was thereby achieved.
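The gap between the two approaches can be made concrete with a toy coupling loop: instead of fixing each well's THP at its minimum, the integrated run iterates the reservoir and network models until the THPs and rates agree (purely illustrative stand-in models, not the simulators used in the study):

```python
# Fixed-point coupling of a well/reservoir model and a gathering-network model.
# wells: THP -> rate; network: rates -> back-pressure THP per well (toy forms).

def couple(wells, network, thp0=250.0, tol=1e-3, max_iter=100):
    thp = {w: thp0 for w in wells}
    for _ in range(max_iter):
        rates = {w: wells[w](thp[w]) for w in thp}     # reservoir deliverability
        new_thp = network(rates)                       # network back-pressure
        if max(abs(new_thp[w] - thp[w]) for w in thp) < tol:
            return rates, new_thp
        thp = new_thp
    return rates, thp

# Toy models: rate falls with THP; back-pressure rises with total field rate.
wells = {"W1": lambda p: max(0.0, 1000 - 2.0 * p),
         "W2": lambda p: max(0.0, 800 - 1.5 * p)}
network = lambda q: {w: 150 + 0.05 * sum(q.values()) for w in q}
print(couple(wells, network))   # converges to THP ~204, unlike a fixed-THP run
```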
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward-step computations immediately after the forward-step computations are completed for the first portion of lines. This algorithm has data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, in which local and non-local, data-dependent and data-independent computations are scheduled into the processors' idle time. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty, and to obtain an optimal cover of a global domain with subdomains. Computational experiments and the theoretical model show that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm over the range of processor counts (subdomains) considered and the number of grid nodes per subdomain.
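For reference, the serial Thomas algorithm that the pipelined reformulation builds on is the standard tridiagonal solver below (a textbook sketch; the paper's contribution, overlapping the forward and backward sweeps across processors, is not shown):

```python
# Textbook Thomas algorithm: O(n) solve of a tridiagonal system.
# a = sub-diagonal (a[0] unused), b = main diagonal, c = super-diagonal
# (c[-1] unused), d = right-hand side.

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 2x2 check: 2*x0 + x1 = 3 and x0 + 3*x1 = 5  ->  x = [0.8, 1.4]
print(thomas([0.0, 1.0], [2.0, 3.0], [1.0, 0.0], [3.0, 5.0]))
```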
NASA Astrophysics Data System (ADS)
Clark, M.
2009-09-01
In the past, the physical presence and direct interaction of the astronomer with an observatory's staff and telescope equipment encouraged understanding and responsiveness between both staff and observers. But now, observatories often face the problem of expediently exchanging information with observers. New observatory procedures and policies such as automated-, remote- and service-observing, dynamic scheduling, data pipelining, or fully software-arbitrated telescope control provide for more efficient telescope use, but they have reduced the role of the observer to that of a customer rather than a partner in the process of observing. Topics for discussion will include scheduling, data quality, control interfaces, training and preparation for observing, and information distribution technologies, e.g., use of web sites, email, and RSS feeds.
77 FR 4220 - Storage Reporting Requirements of Interstate and Intrastate Natural Gas Companies
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-27
.... The reports by the two sets of pipelines must include: (1) the identity of each customer injecting gas... relationship), (2) the rate schedule (for interstate pipelines) or docket number (for intrastate pipelines... maximum daily withdrawal quantity applicable to each storage customer, (4) for each storage customer, the...
A 48Cycles/MB H.264/AVC Deblocking Filter Architecture for Ultra High Definition Applications
NASA Astrophysics Data System (ADS)
Zhou, Dajiang; Zhou, Jinjia; Zhu, Jiayi; Goto, Satoshi
In this paper, a highly parallel deblocking filter architecture for H.264/AVC is proposed that processes one macroblock in 48 clock cycles and gives real-time support to QFHD@60fps sequences at less than 100MHz. Four edge filters organized in two groups, simultaneously processing vertical and horizontal edges, are applied in this architecture to enhance its throughput. As parallelism increases, pipeline hazards arise owing to the latency of the edge filters and the data dependencies of the deblocking algorithm. To solve this problem, a zig-zag processing schedule is proposed that eliminates the pipeline bubbles. The data path of the architecture is then derived according to the processing schedule and optimized through data-flow merging, so as to minimize the cost of logic and internal buffers. Meanwhile, the architecture's data input rate is designed to be identical to its throughput, and the transmission order of input data matches the zig-zag processing schedule, so no intercommunication buffer is required between the deblocking filter and its previous component for speed matching or data reordering. As a result, only one 24×64 two-port SRAM is required as internal buffer in this design. When synthesized with a SMIC 130nm process, the architecture costs a gate count of 30.2k, which is competitive considering its high performance.
Pipelined CPU Design with FPGA in Teaching Computer Architecture
ERIC Educational Resources Information Center
Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon
2012-01-01
This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…
Scheduling time-critical graphics on multiple processors
NASA Technical Reports Server (NTRS)
Meyer, Tom W.; Hughes, John F.
1995-01-01
This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
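Such a progressive-refinement scheduler can be sketched as an anytime greedy loop: start from the cheapest feasible plan, then spend the remaining frame budget on the upgrades with the best benefit per unit cost, so a feasible schedule exists at every instant (our own toy cost/benefit model, not the paper's algorithm):

```python
import heapq

# Each object has (cost, benefit) levels sorted by increasing cost; upgrades
# are applied best-benefit-per-cost first until the frame budget runs out.

def refine(objects, budget):
    level = {n: 0 for n in objects}
    spent = sum(objects[n][0][0] for n in objects)   # cheapest feasible plan
    heap = []
    for n in objects:
        push_upgrade(heap, objects, level, n)
    while heap:
        ratio, n, lvl = heapq.heappop(heap)
        if lvl != level[n] + 1:
            continue                                  # stale heap entry
        extra = objects[n][lvl][0] - objects[n][lvl - 1][0]
        if spent + extra > budget:
            continue                                  # cannot afford this upgrade
        level[n], spent = lvl, spent + extra
        push_upgrade(heap, objects, level, n)
    return level

def push_upgrade(heap, objects, level, n):
    lvl = level[n] + 1
    if lvl < len(objects[n]):
        extra = objects[n][lvl][0] - objects[n][lvl - 1][0]
        gain = objects[n][lvl][1] - objects[n][lvl - 1][1]
        heapq.heappush(heap, (extra / max(gain, 1e-9), n, lvl))

# Two objects, two resolution levels each, a frame budget of 5 ms:
print(refine({"isosurface": [(1, 1), (3, 4)], "glyphs": [(1, 1), (2, 2)]}, 5))
```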
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reports by natural gas..., NATURAL GAS ACT STATEMENTS AND REPORTS (SCHEDULES) § 260.9 Reports by natural gas pipeline companies on service interruptions and damage to facilities. (a)(1) Every natural gas company must report to the...
A Cross-Layer Duty Cycle MAC Protocol Supporting a Pipeline Feature for Wireless Sensor Networks
Tong, Fei; Xie, Rong; Shu, Lei; Kim, Young-Chon
2011-01-01
Although the conventional duty cycle MAC protocols for Wireless Sensor Networks (WSNs) such as RMAC perform well in terms of saving energy and reducing end-to-end delivery latency, they were designed independently and require an extra routing protocol in the network layer to provide path information for the MAC layer. In this paper, we propose a new cross-layer duty cycle MAC protocol with data forwarding supporting a pipeline feature (P-MAC) for WSNs. P-MAC first divides the whole network into many grades around the sink. Each node identifies its grade according to its logical hop distance to the sink and simultaneously establishes a sleep/wakeup schedule using the grade information. Those nodes in the same grade keep the same schedule, which is staggered with the schedule of the nodes in the adjacent grade. Then a variation of the RTS/CTS handshake mechanism is used to forward data continuously in a pipeline fashion from the higher grade to the lower grade nodes and finally to the sink. No extra routing overhead is needed, thus increasing the network scalability while maintaining the superiority of duty-cycling. The simulation results in OPNET show that P-MAC has better performance than S-MAC and RMAC in terms of packet delivery latency and energy efficiency. PMID:22163895
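The grade mechanism can be illustrated in a few lines: grades are hop distances to the sink computed by breadth-first search, and wakeup schedules are staggered by grade so a packet can be relayed grade-by-grade toward the sink within one cycle (topology, timing values, and the staggering rule below are our assumptions for illustration):

```python
from collections import deque

# Illustrative sketch of P-MAC-style grade assignment.

def assign_grades(links, sink):
    """links: {node: [neighbors]} undirected adjacency list."""
    grade, queue = {sink: 0}, deque([sink])
    while queue:
        u = queue.popleft()
        for v in links[u]:
            if v not in grade:
                grade[v] = grade[u] + 1
                queue.append(v)
    return grade

def wakeup_offset(g, max_grade, slot):
    # The farthest grade wakes first; each lower grade wakes one slot later,
    # so a packet can ride the wave of wakeups all the way to the sink.
    return (max_grade - g) * slot

links = {"sink": ["a", "b"], "a": ["sink", "c"], "b": ["sink"], "c": ["a"]}
grades = assign_grades(links, "sink")
m = max(grades.values())
print({n: (g, wakeup_offset(g, m, slot=10)) for n, g in grades.items()})
# {'sink': (0, 20), 'a': (1, 10), 'b': (1, 10), 'c': (2, 0)}
```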
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
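The class of models the abstract refers to can be illustrated by the classical linearized equations of unsteady flow in a single pipe segment, with a leak of intensity q at location x̄ entering as an impulse source (a sketch of the model family only; the paper's exact formulation, network coupling, and boundary relations differ):

```latex
% Sketch (assumed form): linearized unsteady pipe flow with a leak at \bar{x}.
\begin{aligned}
  -\frac{\partial p}{\partial x} &= \frac{\rho}{S}\,\frac{\partial Q}{\partial t}
      + \frac{2a\rho}{S}\,Q, \\
  -\frac{\partial p}{\partial t} &= \frac{\rho c^{2}}{S}
      \left(\frac{\partial Q}{\partial x} + q\,\delta(x-\bar{x})\right),
\end{aligned}
```

where p is the pressure, Q the volumetric flow rate, ρ the density, S the pipe cross-section, c the speed of sound in the fluid, 2a a linearized friction coefficient, and δ the Dirac impulse modeling the leak.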
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms
Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.
2014-01-01
Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546
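The performance-aware assignment can be illustrated with a toy greedy dispatcher that sends each fine-grain operation to whichever device would finish it earliest, given per-operation GPU speedup estimates (all names and numbers are hypothetical, not the paper's scheduler):

```python
# Greedy earliest-finish dispatch of fine-grain operations to CPU/GPU workers.
# ops: list of (name, cpu_time, gpu_speedup); most GPU-friendly ops go first.

def schedule_ops(ops, n_cpus, n_gpus):
    cpu_free = [0.0] * n_cpus
    gpu_free = [0.0] * n_gpus
    plan = []
    for name, cpu_t, speedup in sorted(ops, key=lambda o: -o[2]):
        c = min(range(n_cpus), key=lambda i: cpu_free[i])
        g = min(range(n_gpus), key=lambda i: gpu_free[i])
        cpu_end = cpu_free[c] + cpu_t
        gpu_end = gpu_free[g] + cpu_t / speedup
        if gpu_end <= cpu_end:                 # GPU finishes it sooner
            gpu_free[g] = gpu_end
            plan.append((name, f"gpu{g}"))
        else:
            cpu_free[c] = cpu_end
            plan.append((name, f"cpu{c}"))
    return plan

print(schedule_ops([("segmentation", 10, 8.0), ("stats", 2, 1.2)],
                   n_cpus=2, n_gpus=1))
# [('segmentation', 'gpu0'), ('stats', 'cpu0')]
```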
U.S. pipeline industry enters new era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnsen, M.R.
1999-11-01
The largest construction project in North America this year and next, the Alliance Pipeline, marks some advances for the US pipeline industry. With the Alliance Pipeline system (Alliance), mechanized welding and ultrasonic testing are making their debuts in the US as primary mainline construction techniques. Particularly in Canada and Europe, mechanized welding technology has been used for both onshore and offshore pipeline construction for at least 15 years. However, it has never before been used to build a cross-country pipeline in the US, although it has been tested on short segments. This time, an accelerated construction schedule, among other reasons, necessitated the use of mechanized gas metal arc welding (GMAW). The $3-billion pipeline will deliver natural gas from northwestern British Columbia and northeastern Alberta in Canada to a hub near Chicago, Ill., where it will connect to the North American pipeline grid. Once the pipeline is completed and buried, crews will return the topsoil, and corn and other crops will reclaim the land. While the casual passerby probably won't know the Alliance pipeline is there, it may have a far-reaching effect on the way mainline pipelines are built in the US. For even though mechanized welding and ultrasonic testing are being used for the first time in the United States on this project, some US workers had already gained experience with the technology on projects elsewhere, and work on this pipeline has certainly developed a much larger pool of experienced workers for industry to draw from. The Alliance project could well signal the start of a new era in US pipeline construction.
Gender Equality in the Academy: The Pipeline Problem
ERIC Educational Resources Information Center
Monroe, Kristen Renwick; Chiu, William F.
2010-01-01
As part of the ongoing work by the Committee on the Status of Women in the Profession (CSWP), we offer an empirical analysis of the pipeline problem in academia. The image of a pipeline is a commonly advanced explanation for persistent discrimination that suggests that gender inequality will decline once there are sufficient numbers of qualified…
Main Pipelines Corrosion Monitoring Device
NASA Astrophysics Data System (ADS)
Bazhenov, Anatoliy; Bondareva, Galina; Grivennaya, Natalia; Malygin, Sergey; Goryainov, Mikhail
2017-01-01
The aim of the article is to substantiate a technical solution to the problem of monitoring corrosion changes in oil and gas pipelines using an electromagnetic NDT method. Pipeline wall thinning under operating conditions can lead to perforations and leakage of the transported product outside the pipeline, in most cases endangering human life and the environment. Monitoring of corrosion changes in the pipeline's inner wall under operating conditions is complicated because pipelines are mainly made of structural steels whose conductive and magnetic properties impede the passage of the test signal through the entire thickness of the object under study. The technical solution to this problem lies in monitoring internal corrosion changes in pipes under operating conditions in order to increase pipeline safety through automated prediction of when corrosion reaches threshold pre-crash values.
Synthetic natural gas in California: When and why. [from coal
NASA Technical Reports Server (NTRS)
Wood, W. B.
1978-01-01
A coal gasification plant planned for northwestern New Mexico to produce 250 MMCFD of pipeline-quality gas (SNG) using the German Lurgi process is discussed. The SNG will be commingled with natural gas in existing pipelines for delivery to southern California and the Midwest. The cost of the plant is figured at more than $1.4 billion in January 1978 dollars, with inflation currently adding $255,000 for each day of delay. Plant start-up is now scheduled for 1984.
Method for oil pipeline leak detection based on distributed fiber optic technology
NASA Astrophysics Data System (ADS)
Chen, Huabo; Tu, Yaqing; Luo, Ting
1998-08-01
Pipeline leak detection has remained a difficult problem. Traditional leak detection methods suffer from high false-alarm and missed-detection rates and poor location-estimation capability. To address these problems, a method for oil pipeline leak detection based on a distributed optical fiber sensor with a special coating is presented. The fiber's coating interacts with hydrocarbon molecules in oil, which alters the coating's refractive index and thereby modifies the light-guiding properties of the fiber, so the leak location along the pipeline can be determined by OTDR (optical time-domain reflectometry). An oil pipeline leak detection system designed on this principle offers real-time operation, simultaneous multi-point detection, and high location accuracy. Finally, some factors that may influence detection are analyzed and preliminary improvements are proposed.
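The location step rests on standard OTDR timing: light scattered back from an event at distance d returns after the round-trip time t = 2dn/c, where n is the fiber's group index. A back-of-envelope sketch (the relation is textbook OTDR, not a formula quoted from the paper):

```python
# Standard OTDR location estimate: an event at distance d returns a signature
# after t = 2 * d * n / c, so d = c * t / (2 * n). Values are illustrative.

C = 299_792_458.0                     # speed of light in vacuum, m/s

def event_distance(round_trip_s, group_index=1.468):
    return C * round_trip_s / (2.0 * group_index)

# A backscatter signature arriving 49 microseconds after the probe pulse:
print(f"{event_distance(49e-6):.0f} m")   # ~5003 m down the fiber
```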
Madduri, Ravi K.; Sulakhe, Dinanath; Lacinski, Lukasz; Liu, Bo; Rodriguez, Alex; Chard, Kyle; Dave, Utpal J.; Foster, Ian T.
2014-01-01
We describe Globus Genomics, a system that we have developed for rapid analysis of large quantities of next-generation sequencing (NGS) genomic data. This system achieves a high degree of end-to-end automation that encompasses every stage of data analysis including initial data retrieval from remote sequencing centers or storage (via the Globus file transfer system); specification, configuration, and reuse of multi-step processing pipelines (via the Galaxy workflow system); creation of custom Amazon Machine Images and on-demand resource acquisition via a specialized elastic provisioner (on Amazon EC2); and efficient scheduling of these pipelines over many processors (via the HTCondor scheduler). The system allows biomedical researchers to perform rapid analysis of large NGS datasets in a fully automated manner, without software installation or a need for any local computing infrastructure. We report performance and cost results for some representative workloads. PMID:25342933
VizieR Online Data Catalog: LAMOST candidate members of star clusters (Xiang+, 2015)
NASA Astrophysics Data System (ADS)
Xiang, M. S.; Liu, X. W.; Yuan, H. B.; Huang, Y.; Huo, Z. Y.; Zhang, H. W.; Chen, B. Q.; Zhang, H. H.; Sun, N. C.; Wang, C.; Zhao, Y. H.; Shi, J. R.; Luo, A. L.; Li, G. P.; Wu, Y.; Bai, Z. R.; Zhang, Y.; Hou, Y. H.; Yuan, H. L.; Li, G. W.; Wei, Z.
2015-08-01
In this work, we describe the algorithms and implementation of LSP3, the LAMOST Stellar Parameter Pipeline at Peking University, a pipeline developed to determine the stellar parameters (radial velocity Vr, effective temperature Teff, surface gravity logg, and metallicity [Fe/H]) from LAMOST spectra based on a template-matching technique. Following the data policy of the LAMOST surveys, the data as well as the LSP3 pipeline will be publicly released as value-added products of the first data release of LAMOST (LAMOST DR1; Bai et al., 2015, A&A submitted), scheduled for December 2014, and can be accessed via http://lamost973.pku.edu.cn/site/node/4, along with a description file. (1 data file).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooker, J.N.
This report describes an investigation of energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and intensiveness estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.
A statistical-based scheduling algorithm in automated data path synthesis
NASA Technical Reports Server (NTRS)
Jeon, Byung Wook; Lursinsap, Chidchanok
1992-01-01
In this paper, we propose a new heuristic scheduling algorithm based on statistical analysis of the cumulative frequency distribution of operations among control steps. It has a tendency to escape from local minima and therefore to reach a globally optimal solution. The presented algorithm considers real-world constraints such as chained operations, multicycle operations, and pipelined data paths. The experimental results show that it gives optimal solutions, even though it is greedy in nature.
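The statistic described can be illustrated with the distribution graphs familiar from force-directed scheduling: each operation, movable between its ASAP and ALAP control steps, contributes uniform probability mass to every step in its mobility range, and the cumulative profile exposes congested steps (a sketch under that assumption; the paper's exact statistic may differ):

```python
from collections import defaultdict

# Expected number of concurrent operations per control step, assuming each
# operation is equally likely to land anywhere in its [ASAP, ALAP] window.

def distribution(ops):
    """ops: list of (name, asap_step, alap_step)."""
    dg = defaultdict(float)
    for name, asap, alap in ops:
        width = alap - asap + 1
        for step in range(asap, alap + 1):
            dg[step] += 1.0 / width          # uniform placement probability
    return dict(dg)

# Three operations with overlapping mobility windows:
print(distribution([("mul1", 0, 1), ("mul2", 0, 2), ("add1", 1, 1)]))
# {0: 0.83, 1: 1.83, 2: 0.33} -> control step 1 is the congestion hot spot
```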
Diagnostic Inspection of Pipelines for Estimating the State of Stress in Them
NASA Astrophysics Data System (ADS)
Subbotin, V. A.; Kolotilov, Yu. V.; Smirnova, V. Yu.; Ivashko, S. K.
2017-12-01
The diagnostic inspection used to estimate the technical state of a pipeline is described. The tasks of the inspection works are listed, and a functional-structural scheme is developed for estimating the state of stress in a pipeline. Final conclusions regarding the actual loading of a pipeline section are drawn from a cross-analysis of all the information obtained during pipeline inspection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... scheduling of a hearing. A petition is granted only if the petitioner shows good cause for a hearing. If a... Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY... not provide for a hearing, any interested person may petition the Administrator for an informal...
Project Scheduling Based on Risk of Gas Transmission Pipe
NASA Astrophysics Data System (ADS)
Silvianita; Nurbaity, A.; Mulyadi, Y.; Suntoyo; Chamelia, D. M.
2018-03-01
The planning of a project has a time limit within which it must be completed, before or exactly at a predetermined time. Thus, project planning requires scheduling management, which serves to complete a project with maximum results while considering the constraints that will exist. Scheduling management is undertaken to deal with uncertainties and the negative impacts of time and cost in project completion. This paper examines scheduling management in the Gresik-Semarang gas transmission pipeline project to find out which scheduling plan is most effective given its risk value. The scheduling management in this paper is assisted by Microsoft Project software, used to find the critical path in the project's scheduling plan data. The critical path is the longest path through the schedule and determines the fastest completion time. The result is a critical path with a completion time of 152 days. Furthermore, risk is assessed using the House of Risk (HOR) method, which shows that the critical path accounts for 40.98 percent of all causes of the risk events the project may experience.
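The critical-path computation that Microsoft Project performs can be reproduced with a few lines of classic CPM forward-pass logic (activity names and durations below are invented, not the Gresik-Semarang project data):

```python
# Minimal critical-path (CPM) sketch: earliest finish times by memoized
# recursion, then a walk back through the binding predecessors.

def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])}; assumes an acyclic graph."""
    early = {}
    def finish(n):                            # earliest finish time of task n
        if n not in early:
            dur, preds = tasks[n]
            early[n] = dur + max((finish(p) for p in preds), default=0)
        return early[n]
    for n in tasks:
        finish(n)
    path, n = [], max(early, key=early.get)   # start from the last finisher
    while True:
        path.append(n)
        preds = tasks[n][1]
        if not preds:
            break
        n = max(preds, key=lambda p: early[p])
    return list(reversed(path)), max(early.values())

tasks = {"survey": (20, []), "procure": (45, ["survey"]),
         "trench": (60, ["survey"]), "lay_pipe": (70, ["procure", "trench"]),
         "hydrotest": (12, ["lay_pipe"])}
print(critical_path(tasks))  # (['survey', 'trench', 'lay_pipe', 'hydrotest'], 162)
```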
50 CFR 29.21-2 - Application procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) State of local governments or agencies or instrumentalities thereof except as to rights-of-way... schedule: (A) For linear facilities (e.g., powerlines, pipelines, roads, etc.). Length Payment Less than 5... application includes both linear and nonlinear facilities, payment will be the aggregate of amounts under...
50 CFR 29.21-2 - Application procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) State of local governments or agencies or instrumentalities thereof except as to rights-of-way... schedule: (A) For linear facilities (e.g., powerlines, pipelines, roads, etc.). Length Payment Less than 5... application includes both linear and nonlinear facilities, payment will be the aggregate of amounts under...
A Pipeline Tool for CCD Image Processing
NASA Astrophysics Data System (ADS)
Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.
MSSSO is part of a collaboration developing a wide-field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat-field corrections, there is scope to extend it to other processing.
Wild Horse 69-kV transmission line environmental assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
Hill County Electric Cooperative Inc. (Hill County) proposes to construct and operate a 69-kV transmission line from its North Gildford Substation in Montana north to the Canadian border. A vicinity project area map is enclosed as a figure. TransCanada Power Corporation (TCP), a Canadian power-marketing company, will own and construct the connecting 69-kV line from the international border to Express Pipeline's pump station at Wild Horse, Alberta. This Environmental Assessment is prepared for the Department of Energy (DOE) as lead federal agency to comply with the requirements of the National Environmental Policy Act (NEPA), as part of DOE's review and approval process for the applications filed by Hill County for a DOE Presidential Permit and License to Export Electricity to a foreign country. The purpose of the proposed line is to supply electric energy to a crude oil pump station in Canada owned by Express Pipeline Ltd. (Express). The pipeline would transport Canadian-produced oil from Hardisty, Alberta, Canada, to Casper, Wyoming. The Express Pipeline is scheduled to be constructed in 1996-97 and will supply crude oil to refineries in Wyoming and the midwest.
The Very Large Array Data Processing Pipeline
NASA Astrophysics Data System (ADS)
Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako
2018-01-01
We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single-dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an international consortium of scientists and software developers based at the National Radio Astronomical Observatory (NRAO), the European Southern Observatory (ESO), and the National Astronomical Observatory of Japan (NAOJ).
The "Learning Disabilities to Juvenile Detention" Pipeline: A Case Study
ERIC Educational Resources Information Center
Mallett, Christopher A.
2014-01-01
Adolescents becoming formally involved with a juvenile court because of school-related behavior and discipline problems is a phenomenon known as the school-to-prison pipeline. Adolescents with learning disabilities are disproportionately represented within this pipeline. A study was conducted to review the outcomes for a population of youthful…
Disrupting the School-to-Prison Pipeline
ERIC Educational Resources Information Center
Bahena, Sofía, Ed.; Cooc, North, Ed.; Currie-Rubin, Rachel, Ed.; Kuttner, Paul, Ed.; Ng, Monica, Ed.
2012-01-01
A trenchant and wide-ranging look at this alarming national trend, "Disrupting the School-to-Prison Pipeline" is unsparing in its account of the problem while pointing in the direction of meaningful and much-needed reforms. The "school-to-prison pipeline" has received much attention in the education world over the past few…
Translations on USSR Resources No. 830
1978-10-06
duction of the automated control system must be continued. A second strand of the Shatlyk-Ashkhabad-Bezmein gas pipeline is to be built to increase the...schedule to Northern Tyumenskaya Oblast, to the Orenburg and Shatlyk fields. "We are in constant contact with our clients," says the plant's deputy
Code of Federal Regulations, 2010 CFR
2010-07-01
... the pipeline end manifold must be closed whenever: (1) A storm warning forecasts weather conditions... vessel is about to depart the SPM because of storm conditions; or (3) The SPM is not scheduled for use in...
Code of Federal Regulations, 2011 CFR
2011-07-01
... the pipeline end manifold must be closed whenever: (1) A storm warning forecasts weather conditions... vessel is about to depart the SPM because of storm conditions; or (3) The SPM is not scheduled for use in...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the pipeline end manifold must be closed whenever: (1) A storm warning forecasts weather conditions... vessel is about to depart the SPM because of storm conditions; or (3) The SPM is not scheduled for use in...
Code of Federal Regulations, 2013 CFR
2013-07-01
... the pipeline end manifold must be closed whenever: (1) A storm warning forecasts weather conditions... vessel is about to depart the SPM because of storm conditions; or (3) The SPM is not scheduled for use in...
Code of Federal Regulations, 2012 CFR
2012-07-01
... the pipeline end manifold must be closed whenever: (1) A storm warning forecasts weather conditions... vessel is about to depart the SPM because of storm conditions; or (3) The SPM is not scheduled for use in...
Testing the School-to-Prison Pipeline
ERIC Educational Resources Information Center
Owens, Emily G.
2017-01-01
The School-to-Prison Pipeline is a social phenomenon where students become formally involved with the criminal justice system as a result of school policies that use law enforcement, rather than discipline, to address behavioral problems. A potentially important part of the School-to-Prison Pipeline is the use of sworn School Resource Officers…
ERIC Educational Resources Information Center
Elias, Marilyn
2013-01-01
Policies that encourage police presence at schools, harsh tactics including physical restraint, and automatic punishments that result in suspensions and out-of-class time are huge contributors to the school-to-prison pipeline, but the problem is more complex than that. The school-to-prison pipeline starts (or is best avoided) in the classroom.…
Constraint-based integration of planning and scheduling for space-based observatory management
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven F.
1994-01-01
Progress toward the development of effective, practical solutions to space-based observatory scheduling problems within the HSTS scheduling framework is reported. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) short-term observation scheduling problem. The work was motivated by the limitations of the current solution and, more generally, by the insufficiency of classical planning and scheduling approaches in this problem context. HSTS has subsequently been used to develop improved heuristic solution techniques in related scheduling domains and is currently being applied to develop a scheduling tool for the upcoming Submillimeter Wave Astronomy Satellite (SWAS) mission. The salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research are summarized. Then, some key problem decomposition techniques underlying the integrated planning and scheduling approach to the HST problem are described; research results indicate that these techniques provide leverage in solving space-based observatory scheduling problems. Finally, more recently developed constraint-posting scheduling procedures and the current SWAS application focus are summarized.
A meta-heuristic method for solving scheduling problem: crow search algorithm
NASA Astrophysics Data System (ADS)
Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi
2018-04-01
Scheduling is one of the most important processes in industry, both in manufacturing and in services. The scheduling process is the process of assigning resources to perform operations on tasks; resources can be machines or people, and tasks can be jobs or operations. The selection of the optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence constitutes the optimum solution of the scheduling problem. The scheduling problem becomes NP-hard once the number of jobs in the sequence exceeds what exact algorithms can process in reasonable time. In order to obtain optimum results, a method is needed that can solve complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called Crow Search Algorithm (CSA) is adopted in this research to solve the scheduling problem. CSA is an evolutionary meta-heuristic method based on the behavior of flocks of crows. The results of CSA on the scheduling problem are compared with those of other algorithms; the comparison shows that CSA performs better in terms of solution quality and computation time.
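A minimal sketch of CSA adapted to scheduling, using a random-key encoding so that each crow's real-valued position decodes to a job sequence (the encoding and the toy tardiness objective are our assumptions, not necessarily the paper's setup):

```python
import random

# CSA for permutation scheduling via random keys: each crow is a real-valued
# vector whose argsort decodes to a job order.
# AP = awareness probability, FL = flight length, as in the original CSA.

def decode(keys):
    return sorted(range(len(keys)), key=keys.__getitem__)

def csa_schedule(cost, n_jobs, n_crows=20, iters=200, ap=0.1, fl=2.0):
    pos = [[random.random() for _ in range(n_jobs)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]                      # each crow's best-known cache
    mem_cost = [cost(decode(p)) for p in pos]
    for _ in range(iters):
        for i in range(n_crows):
            j = random.randrange(n_crows)          # crow i tails crow j
            if random.random() >= ap:              # j unaware: move toward m_j
                r = random.random()
                pos[i] = [x + r * fl * (m - x) for x, m in zip(pos[i], mem[j])]
            else:                                  # j aware: i is decoyed randomly
                pos[i] = [random.random() for _ in range(n_jobs)]
            c = cost(decode(pos[i]))
            if c < mem_cost[i]:                    # update crow i's memory
                mem[i], mem_cost[i] = pos[i][:], c
    best = min(range(n_crows), key=mem_cost.__getitem__)
    return decode(mem[best]), mem_cost[best]

# Toy single-machine total-tardiness instance: (processing_time, due_date).
jobs = [(4, 4), (2, 3), (6, 14), (3, 8)]
def tardiness(order):
    t = total = 0
    for j in order:
        t += jobs[j][0]
        total += max(0, t - jobs[j][1])
    return total

print(csa_schedule(tardiness, n_jobs=4))
```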
Application Analysis of BIM Technology in Metro Rail Transit
NASA Astrophysics Data System (ADS)
Liu, Bei; Sun, Xianbin
2018-03-01
With the rapid development of urban roads, and especially of subway rail transit, metro construction is an effective way to alleviate urban traffic congestion. Such projects face engineering problems including limited site space, complex resource allocation, tight schedules, and complex underground pipelines. BIM technology, with its advantages of three-dimensional visualization, parameterization, and virtual simulation, can effectively solve these technical problems. Based on the Shenzhen Metro Line 9 project, BIM technology is researched across the whole project lifecycle, in the context of metro rail transit, where it is still rarely used at this stage. The model information file is imported into Navisworks for four-dimensional animation simulation to determine the optimum construction scheme for the shield machine. A subway construction management application platform based on BIM and private cloud technology uses cameras and sensors to achieve electronic integration and dynamic monitoring of the operation and maintenance of underground facilities. Full use of the many advantages of BIM technology improves the engineering quality and construction efficiency of the subway rail transit project and supports its operation and maintenance.
NASA Technical Reports Server (NTRS)
Smith, Stephen F.; Pathak, Dhiraj K.
1991-01-01
In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy is motivated by dispatch-based manufacturing scheduling research and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem-solving circumstances. A composite strategy based on recognition of characteristics of the current problem-solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.
18 CFR 356.2 - General instructions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... all books of account and other records prepared by or on behalf of the oil pipeline companies. (2) The... significant information not shown on the originals. (5) Records other than those listed in the schedule may be... public interest, investors, or consumers. A waiver from any provision of these regulations may be made by...
40 CFR 265.1064 - Recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
...). (iii) Type of equipment (e.g., a pump or pipeline valve). (iv) Percent-by-weight total organics in the...)(2), an implementation schedule as specified in § 265.1033(a)(2). (3) Where an owner or operator... concentration achieved by the control device, a performance test plan as specified in § 265.1035(b)(3). (4...
40 CFR 265.1064 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
...). (iii) Type of equipment (e.g., a pump or pipeline valve). (iv) Percent-by-weight total organics in the...)(2), an implementation schedule as specified in § 265.1033(a)(2). (3) Where an owner or operator... concentration achieved by the control device, a performance test plan as specified in § 265.1035(b)(3). (4...
40 CFR 265.1064 - Recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
...). (iii) Type of equipment (e.g., a pump or pipeline valve). (iv) Percent-by-weight total organics in the...)(2), an implementation schedule as specified in § 265.1033(a)(2). (3) Where an owner or operator... concentration achieved by the control device, a performance test plan as specified in § 265.1035(b)(3). (4...
40 CFR 265.1064 - Recordkeeping requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...). (iii) Type of equipment (e.g., a pump or pipeline valve). (iv) Percent-by-weight total organics in the...)(2), an implementation schedule as specified in § 265.1033(a)(2). (3) Where an owner or operator... concentration achieved by the control device, a performance test plan as specified in § 265.1035(b)(3). (4...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribanic, Tomas; Awwad, Amer; Crespo, Jairo
2012-07-01
Transferring high-level waste (HLW) between storage tanks or to treatment facilities is a common practice performed at the Department of Energy (DoE) sites. Changes in the chemical and/or physical properties of the HLW slurry during the transfer process may lead to the formation of blockages inside the pipelines, resulting in schedule delays and increased costs. To improve DoE's capabilities in the event of a pipeline plugging incident, FIU has continued to develop two novel unplugging technologies: an asynchronous pulsing system and a peristaltic crawler. The asynchronous pulsing system uses a hydraulic pulse generator to create pressure disturbances at two opposite inlet locations of the pipeline to dislodge blockages by attacking the plug from both sides remotely. The peristaltic crawler is a pneumatically/hydraulically operated crawler that propels itself by a sequence of pressurization/depressurization of cavities (inner tubes). The crawler includes a frontal attachment with a hydraulically powered unplugging tool. In this paper, details of the asynchronous pulsing system's ability to unplug a pipeline on a small-scale test bed and results from the experimental testing of the second-generation peristaltic crawler are provided. The paper concludes with future improvements for the third-generation crawler and a recommended path forward for the asynchronous pulsing testing. (authors)
ERIC Educational Resources Information Center
Brown, Bryan A.; Henderson, J. Bryan; Gray, Salina; Donovan, Brian; Sullivan, Shayna; Patterson, Alexis; Waggstaff, William
2016-01-01
We conducted a mixed-methods study of matriculation issues for African-Americans in the STEM pipeline. The project compares the experiences of students currently majoring in science (N = 304) with the experiences of those who have succeeded in earning science degrees (N = 307). Participants were surveyed about their pipeline experiences based on…
mdtmFTP and its evaluation on ESNET SDN testbed
Zhang, Liang; Wu, Wenji; DeMar, Phil; ...
2017-04-21
In this paper, to address the high-performance challenges of data transfer in the big data era, we are developing and implementing mdtmFTP, a high-performance data transfer tool for big data. mdtmFTP has four salient features. First, it adopts an I/O-centric architecture to execute data transfer tasks. Second, it efficiently utilizes the underlying multicore platform through optimized thread scheduling. Third, it implements a large virtual file mechanism to address the lots-of-small-files (LOSF) problem. Fourth, it integrates multiple optimization mechanisms, including zero copy, asynchronous I/O, pipelining, batch processing, and pre-allocated buffer pools, to enhance performance. mdtmFTP has been extensively tested and evaluated within the ESNET 100G testbed. Evaluations show that mdtmFTP achieves higher performance than existing data transfer tools such as GridFTP, FDT, and BBCP.
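The large-virtual-file idea behind the LOSF fix can be illustrated by packing many small files into one stream plus an index, transferring once, and splitting at the receiver (a conceptual sketch only, not mdtmFTP's actual format or API):

```python
import json
import os

# Pack small files into one large "virtual file" plus a JSON index; the single
# large transfer avoids per-file setup costs, then the receiver unpacks.

def pack(paths, out):
    index, offset = [], 0
    with open(out, "wb") as vf:
        for p in paths:
            with open(p, "rb") as f:
                data = f.read()
            vf.write(data)
            index.append({"name": os.path.basename(p), "off": offset,
                          "len": len(data)})
            offset += len(data)
    with open(out + ".idx", "w") as f:
        json.dump(index, f)

def unpack(vfile, dest):
    with open(vfile + ".idx") as f:
        index = json.load(f)
    with open(vfile, "rb") as vf:
        for e in index:
            vf.seek(e["off"])
            with open(os.path.join(dest, e["name"]), "wb") as out:
                out.write(vf.read(e["len"]))
```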
Decomposability and scalability in space-based observatory scheduling
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.
1992-01-01
In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.
The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline
NASA Astrophysics Data System (ADS)
Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji
2018-02-01
The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. The numerical model of an elastic pipeline introduces non-linear complexity into the discretized equations, so the plain Newton-Raphson method cannot achieve fast convergence on this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition is presented to simulate isothermal elastic pipeline flow. Results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
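The idea of a Newton iteration globalized by a Wolfe-condition line search can be sketched as follows, using the merit function phi(x) = ||F(x)||^2 / 2 (an illustration of the general technique; the paper's exact scheme and discretization differ):

```python
import numpy as np

# Newton direction plus a backtracking line search that checks the strong
# Wolfe conditions on the merit function phi(x) = 0.5 * ||F(x)||^2.

def newton_wolfe(F, J, x, tol=1e-10, c1=1e-4, c2=0.9, max_iter=50):
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        d = np.linalg.solve(J(x), -f)                  # Newton direction
        phi = lambda t: 0.5 * F(x + t * d) @ F(x + t * d)
        dphi = lambda t: F(x + t * d) @ (J(x + t * d) @ d)
        t = 1.0
        while not (phi(t) <= phi(0) + c1 * t * dphi(0)       # sufficient decrease
                   and abs(dphi(t)) <= c2 * abs(dphi(0))):   # curvature
            t *= 0.5
            if t < 1e-8:          # give up on the line search, take a tiny step
                break
        x = x + t * d
    return x

# Toy 2x2 nonlinear system with solution (1, 2):
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_wolfe(F, J, np.array([1.0, 1.0])))   # -> [1. 2.]
```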
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem that can be categorized as NP-complete. NP-complete problems are quite diverse, as are the algorithms used to search for an optimal solution; in most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. This paper examines some of the factors that make space application scheduling problems difficult and presents a fairly new AI-based technique called tabu search, applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment so as to produce minimum impact on the other instruments and maximize target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
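Tabu search itself is compact; a bare-bones sketch for sequencing observations, where neighbors are pairwise swaps, recent swaps are tabu for a fixed tenure, and an aspiration rule admits tabu moves that beat the incumbent (objective and data invented for illustration, not the SOLSTICE formulation):

```python
import random

# Minimize total "slew" between consecutive targets; swaps of positions (i, j)
# are tabu for `tenure` iterations after being used.

def tabu_search(score, n, iters=300, tenure=7):
    seq = list(range(n))
    random.shuffle(seq)
    best, best_score = seq[:], score(seq)
    tabu = {}
    for it in range(iters):
        move, cand, cand_score = None, None, float("inf")
        for i in range(n - 1):
            for j in range(i + 1, n):
                s = seq[:]
                s[i], s[j] = s[j], s[i]
                v = score(s)
                is_tabu = tabu.get((i, j), -1) >= it
                if (not is_tabu or v < best_score) and v < cand_score:
                    move, cand, cand_score = (i, j), s, v
        if cand is None:
            break                         # every neighbor is tabu and inferior
        seq, tabu[move] = cand, it + tenure
        if cand_score < best_score:
            best, best_score = cand[:], cand_score
    return best, best_score

coords = [0, 13, 4, 9, 2, 7]              # invented 1-D target positions
slew = lambda s: sum(abs(coords[a] - coords[b]) for a, b in zip(s, s[1:]))
print(tabu_search(slew, len(coords)))     # ordering with small total slew
```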
NASA Astrophysics Data System (ADS)
Li, Li; Zhang, Yunwei; Chen, Ling
2018-03-01
In order to solve the problem of selecting a positioning technology for an inspection robot in an underground pipeline environment, wireless network signal strength and GPS positioning signals were tested in an actual underground pipeline environment. First, the strength variation of the 3G wireless network and Wi-Fi signals provided by China Telecom and China Unicom ground base stations was tested, and the attenuation of these wireless signals along the pipeline was analyzed and described quantitatively. Then, the reception of GPS satellite signals in the pipeline was tested, and the attenuation of the GPS signal underground was analyzed. The test results may serve as a reference for related research that needs to consider positioning in pipelines.
SOFIA's Choice: Automating the Scheduling of Airborne Observations
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Norvig, Peter (Technical Monitor)
1999-01-01
This paper describes the problem of scheduling observations for an airborne telescope. Given a set of prioritized observations to choose from, and a wide range of complex constraints governing legitimate choices and orderings, how can we efficiently and effectively create a valid flight plan which supports high-priority observations? This problem is quite different from scheduling problems which are routinely solved automatically in industry. For instance, the problem requires making choices which lead to other choices later, and contains many interacting complex constraints over both discrete and continuous variables. Furthermore, new types of constraints may be added as the fundamental problem changes. As a result of these features, this problem cannot be solved by traditional scheduling techniques. The problem resembles other problems in NASA and industry, from observation scheduling for rovers and other science instruments to vehicle routing. The remainder of the paper is organized as follows. In Section 2 we describe the observatory in order to provide some background. In Section 3 we describe the problem of scheduling a single flight. In Section 4 we compare flight planning and other scheduling problems and argue that traditional techniques are not sufficient to solve this problem. We also mention similar complex scheduling problems which may benefit from efforts to solve this problem. In Section 5 we describe an approach for solving this problem based on research into a similar problem, that of scheduling observations for a space-borne probe. In Section 6 we discuss extensions of the flight planning problem as well as other problems which are similar to flight planning. In Section 7 we conclude and discuss future work.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Kiselev, A. N.; Shepelev, S. V.; Galanin, A. V.
2015-02-01
Specific features relating to development of the information-analytical system on the problem of flow-accelerated corrosion of pipeline elements in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh nuclear power plant are considered. The results from a statistical analysis of data on the quantity, location, and operating conditions of the elements and preinserted segments of pipelines used in the condensate-feedwater and wet steam paths are presented. The principles of preparing and using the information-analytical system for determining the lifetime to reaching inadmissible wall thinning in elements of pipelines used in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered.
Fritz, Jennifer N; Jackson, Lynsey M; Stiefler, Nicole A; Wimberly, Barbara S; Richardson, Amy R
2017-07-01
The effects of noncontingent reinforcement (NCR) without extinction during treatment of problem behavior maintained by social positive reinforcement were evaluated for five individuals diagnosed with autism spectrum disorder. A continuous NCR schedule was gradually thinned to a fixed-time 5-min schedule. If problem behavior increased during NCR schedule thinning, a continuous NCR schedule was reinstated and NCR schedule thinning was repeated with differential reinforcement of alternative behavior (DRA) included. Results showed an immediate decrease in all participants' problem behavior during continuous NCR, and problem behavior maintained at low levels during NCR schedule thinning for three participants. Problem behavior increased and maintained at higher rates during NCR schedule thinning for two other participants; however, the addition of DRA to the intervention resulted in decreased problem behavior and increased mands. © 2017 Society for the Experimental Analysis of Behavior.
18 CFR Appendix B to Subpart H of... - Appendix B to Subpart H of Part 35
Code of Federal Regulations, 2011 CFR
2011-04-01
Fragmentary excerpt: reporting instructions for market-based rate schedules and tariffs covering wholesale sales of electric energy, capacity, and ancillary services; the fields cover electric transmission assets and pipeline and related equipment with 50 MMcf/d capacity, and fields that are not applicable are to be marked (NA).
Job shop scheduling problem with late work criterion
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
Scheduling is a key task in many domains, such as project scheduling, crew scheduling, flight scheduling, and machine scheduling. In the machine scheduling area, job shop scheduling problems are important and highly complex; they are characterized as NP-hard. This paper addresses job shop scheduling problems with the late work criterion and non-preemptive jobs. The late work criterion is a fairly new objective function; it is a qualitative measure concerned with the late parts of jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing is presented to solve the scheduling problem. An operation-based representation is used to encode solutions, and a neighbourhood search structure is employed to generate new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm are compared with those of a conventional genetic algorithm, and conclusions are drawn for the algorithm and the problem.
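As an illustration of the kind of probabilistic local search described here, the sketch below shows a generic simulated annealing loop over permutations with a swap neighbourhood. In a job shop setting, the permutation would be an operation-based encoding decoded by a schedule builder; that decoding is abstracted into a placeholder cost function.

```python
import math, random

def simulated_annealing(cost, n, t0=10.0, cooling=0.995, iters=5000):
    """Generic simulated annealing over permutations with swap moves."""
    state = list(range(n))
    random.shuffle(state)
    best, best_c = state[:], cost(state)
    cur_c, t = best_c, t0
    for _ in range(iters):
        i, j = random.sample(range(n), 2)
        state[i], state[j] = state[j], state[i]
        c = cost(state)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta/t) so the early search escapes optima.
        if c <= cur_c or random.random() < math.exp((cur_c - c) / t):
            cur_c = c
            if c < best_c:
                best, best_c = state[:], c
        else:
            state[i], state[j] = state[j], state[i]  # undo the swap
        t *= cooling
    return best, best_c

# Toy cost standing in for a schedule builder's objective.
weights = [4, 8, 2, 6, 1]
cost = lambda perm: sum(pos * weights[x] for pos, x in enumerate(perm))
print(simulated_annealing(cost, len(weights)))
```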
NASA Technical Reports Server (NTRS)
Moore, J. E.
1975-01-01
An enumeration algorithm is presented for solving a scheduling problem similar to the single machine job shop problem with sequence dependent setup times. The scheduling problem differs from the job shop problem in two ways. First, its objective is to select an optimum subset of the available tasks to be performed during a fixed period of time. Secondly, each task scheduled is constrained to occur within its particular scheduling window. The algorithm is currently being used to develop typical observational timelines for a telescope that will be operated in earth orbit. Computational times associated with timeline development are presented.
Research on Production Scheduling System with Bottleneck Based on Multi-agent
NASA Astrophysics Data System (ADS)
Zhenqiang, Bao; Weiye, Wang; Peng, Wang; Pan, Quanke
To address the imbalance of resource capacity in a production scheduling system, this paper builds on a previously constructed multi-agent production scheduling system and exploits the dynamic, autonomous nature of agents to solve the bottleneck problem in scheduling dynamically. First, a Bottleneck Resource Agent identifies the bottleneck resource in the production line, the inherent mechanism of the bottleneck is analyzed, and the production scheduling process based on the bottleneck resource is described. A Bottleneck Decomposition Agent then harmonizes job arrival and transfer times between the Bottleneck Resource Agent and the Non-Bottleneck Resource Agents, so that the dynamic scheduling problem reduces to single-machine scheduling for each resource taking part in the schedule. As a result, the dynamic real-time scheduling problem is solved effectively within the production scheduling system.
Completable scheduling: An integrated approach to planning and scheduling
NASA Technical Reports Server (NTRS)
Gervasio, Melinda T.; Dejong, Gerald F.
1992-01-01
The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach which integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by allowing runtime information to be taken into consideration when making planning and scheduling decisions. Furthermore, through the criterion of achievability placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.
A Pipeline Software Architecture for NMR Spectrum Data Translation
Ellis, Heidi J.C.; Weatherby, Gerard; Nowling, Ronald J.; Vyas, Jay; Fenwick, Matthew; Gryk, Michael R.
2012-01-01
The problem of formatting data so that it conforms to the required input for scientific data processing tools pervades scientific computing. The CONNecticut Joint University Research Group (CONNJUR) has developed a data translation tool based on a pipeline architecture that partially solves this problem. The CONNJUR Spectrum Translator supports data format translation for experiments that use Nuclear Magnetic Resonance to determine the structure of large protein molecules. PMID:24634607
NASA Astrophysics Data System (ADS)
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA), focusing on pipelines with changes in their geometry. Such changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline considers only straight geometries without fittings. To address this, several papers work with a virtual model of the pipeline that uses an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a virtual length, which for practical purposes is not a complete solution. This research proposes to solve the problem of leak isolation in a virtual length by using a polynomial interpolation function to convert the virtual position into real coordinates. Experimental results on a real prototype show that the proposed methodology performs well.
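A minimal sketch of the virtual-to-real conversion idea, assuming a set of hypothetical calibration pairs (virtual positions of known fittings versus their true positions); the numbers below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical calibration pairs: virtual positions reported by the
# straight-pipe model vs. the true positions of known fittings (m).
virtual = np.array([0.0, 18.5, 42.0, 71.3, 103.8])
real    = np.array([0.0, 15.0, 35.0, 60.0,  90.0])

# Fit a low-order polynomial mapping virtual -> real coordinates.
coeffs = np.polyfit(virtual, real, deg=3)
to_real = np.poly1d(coeffs)

print(to_real(50.0))  # a leak isolated at virtual 50 m, in real coordinates
```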
iGAS: A framework for using electronic intraoperative medical records for genomic discovery.
Levin, Matthew A; Joseph, Thomas T; Jeff, Janina M; Nadukuru, Rajiv; Ellis, Stephen B; Bottinger, Erwin P; Kenny, Eimear E
2017-03-01
Design and implement a HIPAA and Integrating the Healthcare Enterprise (IHE) profile compliant automated pipeline, the integrated Genomics Anesthesia System (iGAS), linking genomic data from the Mount Sinai Health System (MSHS) BioMe biobank to electronic anesthesia records, including physiological data collected during the perioperative period. The resulting repository of multi-dimensional data can be used for precision medicine analysis of physiological readouts, acute medical conditions, and adverse events that can occur during surgery. A structured pipeline was developed atop our existing anesthesia data warehouse using open-source tools. The pipeline is automated using scheduled tasks. The pipeline runs weekly, and finds and identifies all new and existing anesthetic records for BioMe participants. The pipeline went live in June 2015 with 49.2% (n=15,673) of BioMe participants linked to 40,947 anesthetics. The pipeline runs weekly in minimal time. After eighteen months, an additional 3671 participants were enrolled in BioMe and the number of matched anesthetic records grew 21% to 49,545. Overall percentage of BioMe patients with anesthetics remained similar at 51.1% (n=18,128). Seven patients opted out during this time. The median number of anesthetics per participant was 2 (range 1-144). Collectively, there were over 35 million physiologic data points and 480,000 medication administrations linked to genomic data. To date, two projects are using the pipeline at MSHS. Automated integration of biobank and anesthetic data sources is feasible and practical. This integration enables large-scale genomic analyses that might inform variable physiological response to anesthetic and surgical stress, and examine genetic factors underlying adverse outcomes during and after surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
Commanding Constellations (Pipeline Architecture)
NASA Technical Reports Server (NTRS)
Ray, Tim; Condron, Jeff
2003-01-01
Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
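A rough sketch of the pipeline idea with a single self-identifying message format, written in Python rather than the paper's implementation language; the module names (`GatewaySelector`, `GroundStation`) are invented for illustration and are not the paper's modules.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """Self-identifying message: one format for data and directives."""
    kind: str                   # e.g. "command", "configure"
    payload: dict = field(default_factory=dict)

class Module:
    """A pipeline stage: handles the messages it knows, passes the rest."""
    def __init__(self, downstream=None):
        self.downstream = downstream
    def handle(self, msg):
        if self.downstream:
            self.downstream.handle(msg)

class GatewaySelector(Module):
    """Routes commands to whichever Ground Station is currently active."""
    def __init__(self, gateways):
        super().__init__()
        self.gateways = gateways
        self.active = None
    def handle(self, msg):
        if msg.kind == "configure":      # a directive reconfigures routing
            self.active = self.gateways[msg.payload["station"]]
        elif msg.kind == "command":      # data flows to the active gateway
            self.active.handle(msg)

class GroundStation(Module):
    def __init__(self, name):
        super().__init__()
        self.name = name
    def handle(self, msg):
        print(f"{self.name} uplinks {msg.payload}")

# Build a two-station pipeline and drive it with one message stream.
pipe = GatewaySelector({"DS1": GroundStation("DS1"), "DS2": GroundStation("DS2")})
pipe.handle(Message("configure", {"station": "DS2"}))
pipe.handle(Message("command", {"frame": "0xA1"}))
```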
NASA Astrophysics Data System (ADS)
Buchner, Johannes
2011-12-01
Scheduling, the task of producing a time table for resources and tasks, is well known to become a difficult problem as more resources are involved (an NP-hard problem). This is about to become an issue in radio astronomy as observatories consisting of hundreds to thousands of telescopes are planned and operated. The Square Kilometre Array (SKA), which Australia and New Zealand bid to host, is aiming for scales where current approaches -- in construction and operation but also in scheduling -- are insufficient. Although manual scheduling is common today, the problem is complicated by the demand for (1) independent sub-arrays doing simultaneous observations, which requires the scheduler to plan parallel observations, and (2) dynamic re-scheduling on changed conditions. Both of these requirements apply to the SKA, especially in the construction phase. We review the scheduling approaches taken in the astronomy literature, as well as investigate techniques from human schedulers and today's observatories. The scheduling problem is specified in general for scientific observations and in particular for radio telescope arrays. Also taken into account is the fact that the observatory may be oversubscribed, requiring the scheduling problem to be integrated with a planning process. We solve this long-term scheduling problem using a time-based encoding that works in the very general case of observation scheduling. This research then compares algorithms from various approaches, including fast heuristics from CPU scheduling, Linear Integer Programming, Genetic algorithms, and Branch-and-Bound enumeration schemes. Measures include not only goodness of the solution, but also scalability and re-scheduling capabilities. In conclusion, we have identified a fast and good scheduling approach that allows (re-)scheduling difficult and changing problems by combining heuristics with a Genetic algorithm using block-wise mutation operations. We are able to explain and eradicate two problems in the literature: the inability of a GA to properly improve schedules, and the generation of schedules with frequent interruptions. Finally, we demonstrate the scheduling framework for several operating telescopes: (1) dynamic re-scheduling with the AUT Warkworth 12m telescope, (2) scheduling for the Australian Mopra 22m telescope, and scheduling for the Allen Telescope Array. Furthermore, we discuss the applicability of the presented scheduling framework to the Atacama Large Millimeter/submillimeter Array (ALMA, in construction) and the SKA. In particular, during the development phase of the SKA, this dynamic, scalable scheduling framework can accommodate changing conditions.
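The thesis details are not reproduced in the abstract, so the following is only a guess at the flavour of a block-wise mutation operator on a time-based encoding: mutating a contiguous block of slots rather than single genes, which is one way to avoid schedules riddled with short interruptions.

```python
import random

def block_mutation(schedule, n_slots, p=0.2):
    """Mutate a time-based encoding block-wise rather than gene-wise.

    `schedule` maps each time slot to an observation id (or None).
    Reassigning a contiguous block keeps neighbouring slots coherent,
    unlike single-gene mutation, which fragments the timeline.
    """
    child = schedule[:]
    if random.random() < p:
        start = random.randrange(n_slots)
        length = random.randint(1, max(1, n_slots // 10))
        obs = random.choice(child)          # observation to occupy the block
        for t in range(start, min(start + length, n_slots)):
            child[t] = obs
    return child

slots = 24
schedule = [random.choice(["A", "B", "C", None]) for _ in range(slots)]
print(block_mutation(schedule, slots))
```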
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
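The improved O(nm log m) algorithm is not given in the abstract. As related background, the classic problem of mapping a chain of m module weights onto n processors in contiguous groups while minimizing the bottleneck load can be solved with a binary search on the answer; a sketch follows (integer weights assumed).

```python
def min_bottleneck(weights, n):
    """Split `weights` into n contiguous groups minimizing the max group sum."""
    def feasible(cap):
        # Greedily pack modules left to right; count groups needed.
        groups, load = 1, 0
        for w in weights:
            if w > cap:
                return False
            if load + w > cap:
                groups, load = groups + 1, w
            else:
                load += w
        return groups <= n

    lo, hi = max(weights), sum(weights)
    while lo < hi:                 # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_bottleneck([4, 7, 2, 9, 3, 6, 1], 3))  # -> 11
```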
Mathematical simulation for compensation capacities area of pipeline routes in ship systems
NASA Astrophysics Data System (ADS)
Ngo, G. V.; Sakhno, K. N.
2018-05-01
In this paper, the authors consider the problem of enhancing the manufacturability of ship system pipelines at the design stage. An analysis of arrangements and of possibilities for compensating deviations of pipeline routes has been carried out. The task is to produce the "fit pipe" together with the rest of the pipes in the route. It is proposed to compensate for deviations by moving the pipeline route during pipe installation and to calculate the maximum values of these displacements for the analyzed path. Theoretical bases for deviation compensation of pipeline routes using rotations of parallel pairs of pipe sections are assembled. Mathematical and graphical simulations of the compensation capacity areas of pipeline routes with various configurations are completed. Prerequisites have been created for an automated program that will determine the values of the compensatory capacity area for pipeline routes and assign the necessary allowances.
Finite-Element Modeling of a Damaged Pipeline Repaired Using the Wrap of a Composite Material
NASA Astrophysics Data System (ADS)
Lyapin, A. A.; Chebakov, M. I.; Dumitrescu, A.; Zecheru, G.
2015-07-01
The nonlinear static problem of FEM modeling of a damaged pipeline repaired by a composite material and subjected to internal pressure is considered. The calculation is carried out using plasticity theory for the pipeline material and considering the polymeric filler and the composite wrap. The level of stresses in various zones of the structure is analyzed. The most widespread alloy used for oil pipelines is selected as pipe material. The contribution of each component of the pipeline-filler-wrap system to the level of stresses is investigated. The effect of the number of composite wrap layers is estimated. The results obtained allow one to decrease the costs needed for producing test specimens.
Iturbe, Rosario; Flores, Carlos; Castro, Alejandrina; Torres, Luis G
2007-10-01
Oil spills from oil pipelines are a very frequent problem in Mexico. Petroleos Mexicanos (PEMEX), very concerned with the environmental agenda, has been developing inspection and correction plans for zones around oil pipeline pumping stations and pipeline rights-of-way. These stations are located at regular intervals of kilometres along the pipelines. In this study, two sections of an oil pipeline and two pipeline pumping station zones are characterized in terms of the presence of Total Petroleum Hydrocarbons (TPHs) and Polycyclic Aromatic Hydrocarbons (PAHs). The study comprises sampling of the areas; delimitation of the vertical and horizontal extent of contamination; analysis of the sampled soils for TPH content and, in some cases, for the 16 PAHs considered priority pollutants by USEPA; calculation of the contaminated areas and volumes (according to Mexican legislation, specifically NOM-EM-138-ECOL-2002); and, finally, a proposal for the remediation techniques best suited to the contamination levels and the localization of contaminants.
An Optimization Model for Scheduling Problems with Two-Dimensional Spatial Resource Constraint
NASA Technical Reports Server (NTRS)
Garcia, Christopher; Rabadi, Ghaith
2010-01-01
Traditional scheduling problems involve determining temporal assignments for a set of jobs in order to optimize some objective. Some scheduling problems also require the use of limited resources, which adds another dimension of complexity. In this paper we introduce a spatial resource-constrained scheduling problem that can arise in assembly, warehousing, cross-docking, inventory management, and other areas of logistics and supply chain management. This scheduling problem involves a two-dimensional rectangular area as a limited resource. Each job, in addition to having temporal requirements, has a width and a height and utilizes a certain amount of space inside the area. We propose an optimization model for scheduling the jobs while respecting all temporal and spatial constraints.
An Implicit Enumeration Algorithm with Binary-Valued Constraints.
1986-03-01
Fragmentary excerpt (OCR-damaged): one set of test problems is the National Basketball Association (NBA) scheduling problem developed by Bean (1980), which is formulated and discussed in detail in the Appendix; computational results for the NBA scheduling problem are reported there.
Science returns of flexible scheduling on UKIRT and the JCMT
NASA Astrophysics Data System (ADS)
Adamson, Andrew J.; Tilanus, Remo P.; Buckle, Jane; Davis, Gary R.; Economou, Frossie; Jenness, Tim; Delorey, K.
2004-09-01
The Joint Astronomy Centre operates two telescopes at the Mauna Kea Observatory: the James Clerk Maxwell Telescope, operating in the submillimetre, and the United Kingdom Infrared Telescope, operating in the near and thermal infrared. Both wavelength regimes benefit from the ability to schedule observations flexibly according to observing conditions, albeit via somewhat different "site quality" criteria. Both UKIRT and JCMT now operate completely flexible schedules. These operations are based on telescope hardware which can quickly switch between observing modes, and on a comprehensive suite of software (ORAC/OMP) which handles observing preparation by remote PIs, observation submission into the summit database, conditions-based programme selection at the summit, pipeline data reduction for all observing modes, and instant data quality feedback to the PI who may or may not be remote from the telescope. This paper describes the flexible scheduling model and presents science statistics for the first complete year of UKIRT and JCMT observing under the combined system.
The inverse electroencephalography pipeline
NASA Astrophysics Data System (ADS)
Weinstein, David Michael
The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.
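A minimal sketch of one standard formulation behind such inverse solutions: given a lead field matrix L relating source amplitudes to electrode measurements, a Tikhonov-regularized minimum-norm estimate can be computed in closed form. The matrices below are random placeholders, not a real head model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_sources = 32, 500

L = rng.standard_normal((n_electrodes, n_sources))   # lead field (placeholder)
b = rng.standard_normal(n_electrodes)                # scalp measurements

# Tikhonov-regularized minimum-norm estimate:
#   x_hat = L^T (L L^T + lambda * I)^{-1} b
lam = 0.1
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), b)
print(x_hat.shape)   # one estimated amplitude per candidate source
```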
Performance comparison of some evolutionary algorithms on job shop scheduling problems
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job shop scheduling is a state-space search problem belonging to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper, evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music-Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate and compare the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.
USSR Report, Political and Sociological Affairs, No. 1445, Central Asian Press Surveys -- 1983
1983-08-16
Fragmentary excerpt (contents and editorial report): "Industry Product Quality Low" (p. 16); "Siberian Oil Pipeline Comes to Chimkent" (p. 17); "Ecological Problems Impede Expansion in Cramped Pavlodar" (p. 17). Editorial report: Alma-Ata SOTSIALISTIK QAZAQSTAN, in Kazakh, 19 March 1983, carries on page 3 an item under the rubric "At the Construction Sites of the 5-Year-Plan"; the Siberian petroleum pipeline stretches through 1,642 kilometers of difficult...
The impact of Docker containers on the performance of genomic pipelines.
Di Tommaso, Paolo; Palumbo, Emilio; Chatzou, Maria; Prieto, Pablo; Heuer, Michael L; Notredame, Cedric
2015-01-01
Genomic pipelines consist of several pieces of third party software and, because of their experimental nature, frequent changes and updates are commonly necessary thus raising serious deployment and reproducibility issues. Docker containers are emerging as a possible solution for many of these problems, as they allow the packaging of pipelines in an isolated and self-contained manner. This makes it easy to distribute and execute pipelines in a portable manner across a wide range of computing platforms. Thus, the question that arises is to what extent the use of Docker containers might affect the performance of these pipelines. Here we address this question and conclude that Docker containers have only a minor impact on the performance of common genomic pipelines, which is negligible when the executed jobs are long in terms of computational time.
Compiling Planning into Scheduling: A Sketch
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.
2004-01-01
Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper we present a fundamentally different encoding that more closely resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.
Pipeline scada upgrade uses satellite terminal system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrad, W.; Skovrinski, J.R.
In the recent automation of its supervisory control and data acquisition (scada) system, Transwestern Pipeline Co. has become the first to use very small aperture satellite terminals (VSAT's) for scada. A subsidiary of Enron Interstate Pipeline, Houston, Transwestern moves natural gas through a 4,400-mile system from West Texas, New Mexico, and Oklahoma to southern California markets. Transwestern's modernization, begun in November 1985, addressed problems associated with its aging control equipment, which had been installed when the compressor stations were built in 1960. Over the years a combination of three different systems had been added. All were cumbersome to maintain and utilized outdated technology. Problems with reliability, high maintenance time, and difficulty in getting new parts were determining factors in Transwestern's decision to modernize its scada system. In addition, the pipeline was anticipating moving its control center from Roswell, N.M., to Houston and believed it would be impossible to marry the old system with the new computer equipment in Houston.
Bridging the Gap Between Planning and Scheduling
NASA Technical Reports Server (NTRS)
Smith, David E.; Frank, Jeremy; Jonsson, Ari K.; Norvig, Peter (Technical Monitor)
2000-01-01
Planning research in Artificial Intelligence (AI) has often focused on problems where there are cascading levels of action choice and complex interactions between actions. In contrast, scheduling research has focused on much larger problems where there is little action choice, but the resulting ordering problem is hard. In this paper, we give an overview of AI planning and scheduling techniques, focusing on their similarities, differences, and limitations. We also argue that many difficult practical problems lie somewhere between planning and scheduling, and that neither area has the right set of tools for solving these vexing problems.
Underground pipeline laying using the pipe-in-pipe system
NASA Astrophysics Data System (ADS)
Antropova, N.; Krets, V.; Pavlov, M.
2016-09-01
The problems of resource saving and environmental safety during the installation and operation of the underwater crossings are always relevant. The paper describes the existing methods of trenchless pipeline technology, the structure of multi-channel pipelines, the types of supporting and guiding systems. The rational design is suggested for the pipe-in-pipe system. The finite element model is presented for the most dangerous sections of the inner pipes, the optimum distance is detected between the roller supports.
Interactive computer aided shift scheduling.
Gaertner, J
2001-12-01
This paper starts with a discussion of computer aided shift scheduling. After a brief review of earlier approaches, two conceptualizations of this field are introduced: First, shift scheduling as a field that ranges from extremely stable rosters at one pole to rather market-like approaches on the other pole. Unfortunately, already small alterations of a scheduling problem (e.g., the number of groups, the number of shifts) may call for rather different approaches and tools. Second, their environment shapes scheduling problems and scheduling has to be done within idiosyncratic organizational settings. This calls for the amalgamation of scheduling with other tasks (e.g., accounting) and for reflections whether better solutions might become possible by changes in the problem definition (e.g., other service levels, organizational changes). Therefore shift scheduling should be understood as a highly connected problem. Building upon these two conceptualizations, a few examples of software that ease scheduling in some areas of this field are given and future research questions are outlined.
Optimal recombination in genetic algorithms for flowshop scheduling problems
NASA Astrophysics Data System (ADS)
Kovalenko, Julia
2016-10-01
The optimal recombination problem consists in finding the best possible offspring that can result from a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
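The recombination algorithm itself is not given in the abstract; as background, the sketch below computes the makespan of a permutation flowshop schedule, the objective underlying the criterion discussed (instance data invented).

```python
def flowshop_makespan(perm, p):
    """Makespan of a permutation flowshop schedule.

    p[j][m] is the processing time of job j on machine m; completion
    times follow C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m].
    """
    n_machines = len(p[0])
    finish = [0.0] * n_machines      # completion time of last job per machine
    for j in perm:
        for m in range(n_machines):
            earlier = finish[m - 1] if m > 0 else 0.0
            finish[m] = max(finish[m], earlier) + p[j][m]
    return finish[-1]

times = [[3, 2, 4], [1, 4, 2], [5, 1, 3]]  # 3 jobs x 3 machines
print(flowshop_makespan([0, 1, 2], times))  # -> 14
```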
Integrated scheduling and resource management. [for Space Station Information System
NASA Technical Reports Server (NTRS)
Ward, M. T.
1987-01-01
This paper examines the problem of integrated scheduling during the Space Station era. Scheduling for Space Station entails coordinating the support of many distributed users who are sharing common resources and pursuing individual and sometimes conflicting objectives. This paper compares the scheduling integration problems of current missions with those anticipated for the Space Station era. It examines the facilities and the proposed operations environment for Space Station. It concludes that the pattern of interdependencies among the users and facilities, which is the source of the integration problem, is well structured, allowing the larger problem to be divided into smaller ones. It proposes an architecture to support integrated scheduling by scheduling efficiently at local facilities as a function of dependencies with other facilities of the program. A prototype is described that is being developed to demonstrate this integration concept.
Computer models of complex multiloop branched pipeline systems
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.
2013-11-01
This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of eliminating them can be proposed, and recommendations can be made for planning the modernization of pipeline systems and the construction of new sections. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for the centralized heat supply of the city of Samara.
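A linearized toy illustration of the approach: applying Kirchhoff's first law at each free node, with a linear pipe law q = (p_u - p_v)/R, yields a linear system in the unknown node pressures. Real hydraulic resistance is non-linear; the network topology and values below are invented.

```python
import numpy as np

# Nodes 0..3; pipes as (u, v, resistance). Node 0 is a fixed-pressure
# source, node 3 a fixed-pressure sink (a linearized toy network).
pipes = [(0, 1, 2.0), (0, 2, 4.0), (1, 2, 1.0), (1, 3, 3.0), (2, 3, 2.0)]
p_fixed = {0: 100.0, 3: 0.0}
unknown = [1, 2]

A = np.zeros((len(unknown), len(unknown)))
b = np.zeros(len(unknown))
index = {n: i for i, n in enumerate(unknown)}

# Kirchhoff's first law at each free node: sum of q = (p_u - p_v)/R is zero.
for u, v, R in pipes:
    for a, c in ((u, v), (v, u)):
        if a in index:
            A[index[a]][index[a]] += 1.0 / R
            if c in index:
                A[index[a]][index[c]] -= 1.0 / R
            else:
                b[index[a]] += p_fixed[c] / R   # known pressure moves to RHS

pressures = np.linalg.solve(A, b)
print(dict(zip(unknown, pressures)))   # pressures at the free nodes
```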
NASA Astrophysics Data System (ADS)
Ge, Yaomou
Oil and gas pipelines play a critical role in delivering energy resources from producing fields to power communities around the world. However, there are many threats to pipeline integrity, which may lead to significant incidents causing safety, environmental, and economic problems. Corrosion has long been a major threat to oil and gas pipelines, contributing to approximately 18% of significant pipeline incidents. In addition, external corrosion accounts for a significant portion (more than 25%) of pipeline failures. External corrosion detection is the research area of this thesis. In this thesis, a review of existing corrosion detection and monitoring methods is presented; optical fiber sensors show great promise for corrosion detection in oil and gas pipelines. Several scenarios for optical fiber corrosion sensors are discussed, and two of them are selected for future research. A new corrosion and leakage detection sensor, consisting of a custom-designed trigger and an FBG optical fiber, is presented. This new device has been experimentally tested and shows great promise.
NASA Astrophysics Data System (ADS)
Dudin, S. M.; Novitskiy, D. V.
2018-05-01
The works of researchers at VNIIgaz, Giprovostokneft, Kuibyshev NIINP, the Grozny Petroleum Institute, and others are devoted to modeling heterogeneous medium flows in pipelines under laboratory conditions. Viewed objectively, the empirical relationships obtained and the calculation procedures for pipelines transporting multiphase products form a bank of experimental data on the problem of pipeline transportation of multiphase systems. Based on an analysis of the published works, the main design requirements for experimental installations intended to study the flow regimes of gas-liquid flows in pipelines were formulated and taken into account by the authors when creating the experimental stand. The article describes the results of experimental studies of the flow regimes of a gas-liquid mixture in a pipeline and gives a methodological description of the experimental installation. It also describes the software of the experimental scientific and educational stand developed with the authors' participation.
NASA Astrophysics Data System (ADS)
Leporini, M.; Terenzi, A.; Marchetti, B.; Giacchetta, G.; Polonara, F.; Corvaro, F.; Cocci Grifoni, R.
2017-11-01
Pipelining Liquefied Petroleum Gas (LPG) is a mode of LPG transportation that is more environmentally friendly than others due to its lower energy consumption and exhaust emissions. Worldwide, there are over 20000 kilometers of LPG pipelines. A number of codes are followed by industry for the design, fabrication, construction, and operation of liquid LPG pipelines. However, no standards exist for modelling certain critical phenomena that can occur on these lines due to external environmental conditions, such as pressurization by solar radiation. In fact, solar radiation can expose above-ground pipeline sections to pressure values above the maximum design pressure, with resulting risks and problems. The present work presents an innovative practice suitable for the Oil & Gas industry for modelling the pressurization induced by solar radiation on above-ground LPG pipeline sections, with an application to a real case.
Applications of dynamic scheduling technique to space related problems: Some case studies
NASA Astrophysics Data System (ADS)
Nakasuka, Shinichi; Ninomiya, Tetsujiro
1994-10-01
The paper discusses applications of the 'Dynamic Scheduling' technique, which was invented for the scheduling of Flexible Manufacturing Systems, to two space-related scheduling problems: operation scheduling of a future space transportation system, and resource allocation in a space system with limited resources, such as a space station or the space shuttle.
Solving a real-world problem using an evolving heuristically driven schedule builder.
Hart, E; Ross, P; Nelson, J
1998-01-01
This work addresses the real-life scheduling problem of a Scottish company that must produce daily schedules for the catching and transportation of large numbers of live chickens. The problem is complex and highly constrained. We show that it can be successfully solved by division into two subproblems and solving each using a separate genetic algorithm (GA). We address the problem of whether this produces locally optimal solutions and how to overcome this. We extend the traditional approach of evolving a "permutation + schedule builder" by concentrating on evolving the schedule builder itself. This results in a unique schedule builder being built for each daily scheduling problem, each individually tailored to deal with the particular features of that problem. This results in a robust, fast, and flexible system that can cope with most of the circumstances imaginable at the factory. We also compare the performance of a GA approach to several other evolutionary methods and show that population-based methods are superior to both hill-climbing and simulated annealing in the quality of solutions produced. Population-based methods also have the distinct advantage of producing multiple, equally fit solutions, which is of particular importance when considering the practical aspects of the problem.
Numerical Analysis of Flow-Induced Vibrations in Closed Side Branches
NASA Astrophysics Data System (ADS)
KníŽat, Branislav; Troják, Michal
2011-12-01
Vibrations occurring in closed side branches connected to a main pipe are a frequent problem during pipeline system operation. At the design stage of pipeline systems, this problem is sometimes overlooked or underestimated, which can later shorten the system's life cycle or even cause injury. The aim of this paper is a numerical analysis of the onset of self-induced vibrations at the edge of a closed side branch. The calculation conditions and the obtained results are presented.
Computational structures for robotic computations
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chang, P. R.
1987-01-01
The computational problem of inverse kinematics and inverse dynamics of robot manipulators is discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion and achieves the time lower bound of O(log₂ n) has also been developed.
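As background on the CORDIC primitive mentioned above, here is a floating-point sketch of the rotation mode, whose micro-rotations reduce to shift-and-add operations in a hardware implementation (valid for angles within CORDIC's convergence range of roughly ±99°).

```python
import math

def cordic_rotate(x, y, theta, n_iter=32):
    """Rotate (x, y) by theta using CORDIC micro-rotations.

    Each micro-rotation uses tan(a_i) = 2^-i, so the hardware analogue
    needs no multiplier; the constant k compensates the accumulated gain.
    """
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    k = 1.0
    for i in range(n_iter):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # steer the residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k

print(cordic_rotate(1.0, 0.0, math.pi / 3))  # ~ (cos 60 deg, sin 60 deg)
```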
NASA Astrophysics Data System (ADS)
Schoess, Jeffrey N.; Seifert, Greg; Paul, Clare A.
1996-05-01
The smart aircraft fastener evaluation (SAFE) system is an advanced structural health monitoring effort to detect and characterize corrosion in hidden and inaccessible locations of aircraft structures. Hidden corrosion is the number one logistics problem for the U.S. Air Force, with an estimated maintenance cost of $700M per year in 1990 dollars. The SAFE system incorporates a solid-state electrochemical microsensor and smart sensor electronics in the body of a Hi-Lok aircraft fastener to process and autonomously report corrosion status to aircraft maintenance personnel. The long-term payoff for using SAFE technology will be in predictive maintenance for aging aircraft and rotorcraft systems, fugitive emissions applications such as control valves, chemical pipeline vessels, and industrial boilers. Predictive maintenance capability, service, and repair will replace the current practice of scheduled maintenance to substantially reduce operational costs. A summary of the SAFE concept, laboratory test results, and future field test plans is presented.
Testing Task Schedulers on Linux System
NASA Astrophysics Data System (ADS)
Jelenković, Leonardo; Groš, Stjepan; Jakobović, Domagoj
Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first is to identify which properties of the scheduler to test. The second is how to perform the tests, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a test framework for task schedulers and presents one potential solution. The observed behavior of the scheduler is the one used for “normal” task scheduling (SCHED_OTHER), unlike that used for real-time tasks (SCHED_FIFO, SCHED_RR).
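On Linux, the policies named above can be queried and set through the `sched_*` family of system calls; a small sketch using Python's standard `os` bindings follows (Linux-only; switching to a real-time policy normally requires elevated privileges).

```python
import os

# Query the policy of the calling process (pid 0 = this process).
policy = os.sched_getscheduler(0)
names = {os.SCHED_OTHER: "SCHED_OTHER",
         os.SCHED_FIFO: "SCHED_FIFO",
         os.SCHED_RR: "SCHED_RR"}
print("current policy:", names.get(policy, policy))

try:
    # Real-time policies need a priority > 0 and usually root rights;
    # SCHED_OTHER, by contrast, always uses static priority 0.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(1))
except PermissionError:
    print("switching to SCHED_FIFO requires elevated privileges")
```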
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study is dedicated to the issue of pipeline transportation system maintenance. The article identifies two classes of technical-and-economic indices, which are used to select an optimal pipeline transportation system structure. It then describes various system maintenance strategies and the criteria for selecting among them. These maintenance strategies turn out to be insufficiently effective because of non-optimal maintenance intervals. This problem can be addressed by an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of verification inspections of technical systems are considered.
Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling
Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang
2014-01-01
A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many sub-scheduling problems, and the NEH heuristic is then introduced to solve each sub-problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the presented discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
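The NEH heuristic referenced here is a standard constructive method for permutation flowshops: jobs are inserted one by one, longest total work first, at the position that currently yields the smallest makespan. A minimal sketch follows (the makespan routine is the usual completion-time recurrence, and the instance data are invented).

```python
def makespan(seq, p):
    """Permutation flowshop makespan (p[j][m]: job j on machine m)."""
    finish = [0] * len(p[0])
    for j in seq:
        for m in range(len(finish)):
            prev = finish[m - 1] if m else 0
            finish[m] = max(finish[m], prev) + p[j][m]
    return finish[-1]

def neh(p):
    """NEH heuristic: greedy best-position insertion of sorted jobs."""
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        candidates = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(candidates, key=lambda s: makespan(s, p))
    return seq

p = [[3, 2, 4], [1, 4, 2], [5, 1, 3], [2, 3, 1]]
print(neh(p), makespan(neh(p), p))
```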
Two-machine flow shop scheduling integrated with preventive maintenance planning
NASA Astrophysics Data System (ADS)
Wang, Shijin; Liu, Ming
2016-02-01
This paper investigates an integrated optimisation problem of production scheduling and preventive maintenance (PM) in a two-machine flow shop where the time to failure of each machine follows a Weibull probability distribution. The objective is to find the optimal job sequence and the optimal PM decisions before each job such that the expected makespan is minimised. To investigate the value of the integrated scheduling solution, computational experiments on small-scale problems with different configurations are conducted with a total enumeration method, and the results are compared with those of scheduling without maintenance but with machine degradation, and of individual job scheduling combined with independent PM planning. Then, for large-scale problems, four genetic algorithm (GA) based heuristics are proposed. The numerical results for several large problem sizes and different configurations indicate the potential benefits of the integrated scheduling solution, and they also show that the proposed GA-based heuristics are efficient for the integrated problem.
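For the deterministic two-machine flowshop without maintenance, Johnson's rule gives an optimal sequence in O(n log n); a sketch is shown below as background (the paper's stochastic, maintenance-aware problem is more general, and the job data here are invented).

```python
def johnsons_rule(jobs):
    """Optimal two-machine flowshop sequence (Johnson, 1954).

    `jobs` is a list of (t_machine1, t_machine2) tuples. Jobs with
    t1 <= t2 go first in increasing t1; the rest go last in
    decreasing t2, which minimizes the makespan.
    """
    front = sorted((j for j in range(len(jobs)) if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])
    back = sorted((j for j in range(len(jobs)) if jobs[j][0] > jobs[j][1]),
                  key=lambda j: -jobs[j][1])
    return front + back

print(johnsons_rule([(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]))  # -> [2, 0, 3, 4, 1]
```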
A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization, and iterated sampling on two variants of a realistically sized model of the EOS scheduling problem. These are implemented as permutation-based methods: methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best, and random mutation operators outperform our squeaky (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
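A toy sketch of the permutation-based evaluation described here: the permutation fixes the order in which requests are offered to a greedy scheduler, which packs them into capacity-limited slots. The request model is invented and far simpler than the EOS domain.

```python
def greedy_schedule(perm, requests, capacity):
    """Decode a priority permutation into a schedule greedily.

    `requests[i]` is a (slot, reward) pair; each slot holds `capacity`
    observations. Requests are tried in permutation order and placed
    if their slot still has room, so the permutation, not the greedy
    pass, carries the search decisions.
    """
    used = {}
    taken, reward = [], 0.0
    for i in perm:
        slot, gain = requests[i]
        if used.get(slot, 0) < capacity:
            used[slot] = used.get(slot, 0) + 1
            taken.append(i)
            reward += gain
    return taken, reward

requests = [(0, 5.0), (0, 3.0), (1, 4.0), (0, 2.0), (1, 1.0)]
print(greedy_schedule([0, 2, 1, 3, 4], requests, capacity=1))  # -> ([0, 2], 9.0)
```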
Minorities in Higher Education: A Pipeline Problem?
ERIC Educational Resources Information Center
Sethna, Beheruz N.
2011-01-01
This paper uses national data from the American Council on Education (ACE) to study the progress of different ethnic groups through the academic pipeline--stages studied include the Bachelor's, Master's, doctoral, levels, and then progress to the Assistant, Associate, and (full) Professor stages, to full-time administrators and finally to the CEO…
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
User requirements for a patient scheduling system
NASA Technical Reports Server (NTRS)
Zimmerman, W.
1979-01-01
A rehabilitation institute's needs and wants from a scheduling system were established by (1) studying the existing scheduling system and the variables that affect patient scheduling, (2) conducting a human-factors study to establish the human interfaces that affect patients' meeting prescribed therapy schedules, and (3) developing and administering a questionnaire to the staff which pertains to the various interface problems in order to identify staff requirements to minimize scheduling problems and other factors that may limit the effectiveness of any new scheduling system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vazquez, Gabriela; Pribanic, Tomas
2013-07-01
There are approximately 56 million gallons (about 212,000 m³) of high-level waste (HLW) at the U.S. Department of Energy (DOE) Hanford Site. By the year 2040, the HLW is scheduled to be completely transferred from the leaking single-shell tanks (SST) to secure double-shell tanks (DST) via a transfer pipeline system. Blockages have formed inside the pipes during transport because of the variety in composition and characteristics of the waste. These full and partial plugs delay waste transfers and require manual intervention to repair; they are therefore extremely expensive, consuming millions of dollars and further threatening the environment. To successfully continue the transfer of waste through the pipelines, DOE site engineers need a technology that can accurately locate the blockages and unplug the pipelines. In this study, the proposed solution for remediating blockages formed in pipelines is a peristaltic crawler: a pneumatically/hydraulically operated device that propels itself in a worm-like motion through sequential fluctuations of pressure in its air cavities. The crawler is also equipped with a high-pressure water nozzle used to clear blockages inside the pipelines. The crawler is now in its third generation. Previous generations showed limitations in durability, speed, and maneuverability. The latest improvements include an automated sequence that prevents kickback, a front-mounted inspection camera for visual feedback, and a thinner-walled outer bellow for improved maneuverability. Different experimental tests were conducted to evaluate the improvements of the crawler relative to its predecessors using a pipeline test-bed assembly. Anchor force tests, unplugging tests, and fatigue testing of both the bellow and the rubber rims have yet to be conducted, so those results are not presented here. Experiments tested bellow force and response, cornering maneuverability, and straight-line navigational speed. The design concept and experimental test results are reported. (authors)
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.
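A sketch of the Thomas algorithm named above; the forward and backward recurrences visible in the two loops are precisely what idles processors in a straightforward pipelined implementation.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n) (Thomas algorithm).

    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side. The forward sweep followed by
    the backward sweep are the serializing recurrences.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy system: -x_{i-1} + 4 x_i - x_{i+1} = 1 on six unknowns.
n = 6
print(thomas([-1.0] * n, [4.0] * n, [-1.0] * n, [1.0] * n))
```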
Coordinated Scheduling for Interdependent Electric Power and Natural Gas Infrastructures
Zlotnik, Anatoly; Roald, Line; Backhaus, Scott; ...
2016-03-24
The extensive installation of gas-fired power plants in many parts of the world has led electric systems to depend heavily on reliable gas supplies. The use of gas-fired generators for peak load and reserve provision causes high intraday variability in withdrawals from high-pressure gas transmission systems. Such variability can lead to gas price fluctuations and supply disruptions that affect electric generator dispatch and electricity prices, and threaten the security of power systems and gas pipelines. These infrastructures function on vastly different spatio-temporal scales, and the current practice of separate operations and market clearing leaves them largely uncoordinated. In this article, we apply new techniques for control of dynamic gas flows on pipeline networks to examine day-ahead scheduling of electric generator dispatch and gas compressor operation for different levels of integration, spanning from separate forecasting and simulation to combined optimal control. We formulate multiple coordination scenarios and develop tractable, physically accurate computational implementations. These scenarios are compared using an integrated model of test networks for power and gas systems with 24 nodes and 24 pipes, respectively, which are coupled through gas-fired generators. The analysis quantifies the economic efficiency and security benefits of gas-electric coordination and dynamic gas system operation.
Evaluation of scheduling techniques for payload activity planning
NASA Technical Reports Server (NTRS)
Bullington, Stanley F.
1991-01-01
Two tasks related to payload activity planning and scheduling were performed. The first task involved making a comparison of space mission activity scheduling problems with production scheduling problems. The second task consisted of a statistical analysis of the output of runs of the Experiment Scheduling Program (ESP). Details of the work which was performed on these two tasks are presented.
Artificial intelligence approaches to astronomical observation scheduling
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Miller, Glenn
1988-01-01
Automated scheduling will play an increasing role in future ground- and space-based observatory operations. Due to the complexity of the problem, artificial intelligence technology currently offers the greatest potential for the development of scheduling tools with sufficient power and flexibility to handle realistic scheduling situations. Summarized here are the main features of the observatory scheduling problem, how artificial intelligence (AI) techniques can be applied, and recent progress in AI scheduling for Hubble Space Telescope.
NASA Astrophysics Data System (ADS)
Paprocka, I.; Kempa, W. M.; Grabowik, C.; Kalinowski, K.; Krenczyk, D.
2016-08-01
In the paper a survey of predictive and reactive scheduling methods is conducted in order to evaluate how the ability to predict reliability characteristics influences robustness criteria. The most important reliability characteristics are Mean Time to Failure and Mean Time to Repair. The survey analysis is done for a job shop scheduling problem. The paper answers the question: which method generates robust schedules when a bottleneck failure occurs before, at the beginning of, or after planned maintenance actions? The efficiency of predictive schedules is evaluated using the criteria of makespan, total tardiness, flow time, and idle time. The efficiency of reactive schedules is evaluated using a solution robustness criterion and a quality robustness criterion. This paper is a continuation of the research conducted in [1], where the survey of predictive and reactive scheduling methods was done only for small-size scheduling problems.
Planning and Scheduling for Fleets of Earth Observing Satellites
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)
2001-01-01
We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
Deep Space Network Scheduling Using Evolutionary Computational Methods
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Lee, Seugnwon; Wang, Yeou-Fang; Terrile, Richard J.
2007-01-01
The paper presents the specific approach taken to formulate the problem in terms of gene encoding, fitness function, and genetic operations. The genome is encoded such that a subset of the scheduling constraints is automatically satisfied. Several fitness functions are formulated to emphasize different aspects of the scheduling problem. The optimal solutions of the different fitness functions demonstrate the trade-off of the scheduling problem and provide insight into a conflict resolution process.
Enhanced Specification and Verification for Timed Planning
2009-02-28
Scheduling Problem. The job-shop scheduling problem (JSSP) is a generic resource allocation problem in which common resources ("machines") are required... The JSSP is the interleaving of all processes P_i with the non-delay and mutual exclusion constraints: JSSP ≙ |||_{0 < i ≤ n} P_i, where mutual-exclusion(JSSP)... For every complete execution of JSSP (which terminates), its associated schedule S is a feasible schedule. An optimal schedule is a trace of JSSP with the minimum ending...
Improving Resource Selection and Scheduling Using Predictions. Chapter 1
NASA Technical Reports Server (NTRS)
Smith, Warren
2003-01-01
The introduction of computational grids has resulted in several new problems in the area of scheduling that can be addressed using predictions. The first problem is selecting where to run an application among the many resources available in a grid. Our approach to this problem is to provide predictions of when an application would start to execute if submitted to specific scheduled computer systems. The second problem is gaining simultaneous access to multiple computer systems so that distributed applications can be executed. We address this problem by investigating how to support advance reservations in local scheduling systems. Our approaches to both problems are based on predictions of the execution time of applications on space-shared parallel computers. As a side effect of this work, we also discuss how predictions of application run times can be used to improve scheduling performance.
Optimization problems in natural gas transportation systems. A state-of-the-art review
Ríos-Mercado, Roger Z.; Borraz-Sánchez, Conrado
2015-03-24
Our paper provides a review of the most relevant research works conducted to solve natural gas transportation problems via pipeline systems. The literature reveals three major groups of gas pipeline systems, namely gathering, transmission, and distribution systems. In this work, we aim at presenting a detailed discussion of the efforts made in optimizing natural gas transmission lines. There is certainly a vast amount of research done over the past few years on many decision-making problems in the natural gas industry and, specifically, in pipeline network optimization. In this work, we present a state-of-the-art survey focusing on specific categories that include short-term basis storage (line-packing problems), gas quality satisfaction (pooling problems), and compressor station modeling (fuel cost minimization problems). We also discuss both steady-state and transient optimization models, highlighting the modeling aspects and the most relevant solution approaches known to date. Although the literature on natural gas transmission system problems is quite extensive, this is, to the best of our knowledge, the first comprehensive review or survey covering this specific research area on natural gas transmission from an operations research perspective. Furthermore, this paper includes a discussion of the most important and promising research areas in this field. Hence, our paper can serve as a useful tool to gain insight into the evolution of the many real-life applications and most recent advances in solution methodologies arising from this exciting and challenging research area of decision-making problems.
Experiments with a decision-theoretic scheduler
NASA Technical Reports Server (NTRS)
Hansson, Othar; Holt, Gerhard; Mayer, Andrew
1992-01-01
This paper describes DTS, a decision-theoretic scheduler designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems, and using probabilistic inference to aggregate this information in light of features of a given problem. BPS, the Bayesian Problem-Solver, introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.
Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.
1994-01-01
One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
Hybrid Rendering with Scheduling under Uncertainty
Tamm, Georg; Krüger, Jens
2014-01-01
As scientific data of increasing size is generated by today’s simulations and measurements, utilizing dedicated server resources to process the visualization pipeline becomes necessary. In a purely server-based approach, requirements on the client-side are minimal as the client only displays results received from the server. However, the client may have a considerable amount of hardware available, which is left idle. Further, the visualization is put at the whim of possibly unreliable server and network conditions. Server load, bandwidth and latency may substantially affect the response time on the client. In this paper, we describe a hybrid method, where visualization workload is assigned to server and client. A capable client can produce images independently. The goal is to determine a workload schedule that enables a synergy between the two sides to provide rendering results to the user as fast as possible. The schedule is determined based on processing and transfer timings obtained at runtime. Our probabilistic scheduler adapts to changing conditions by shifting workload between server and client, and accounts for the performance variability in the dynamic system.
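A minimal Python sketch of the adaptive idea described above: keep runtime estimates of server and client response times and route each piece of rendering work to the side currently predicted to be faster. The exponential-moving-average update and the class interface are invented for illustration; the paper's scheduler is probabilistic and models performance variability explicitly.

class HybridScheduler:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        # initial guesses for response time in seconds (assumed values)
        self.estimate = {"server": 0.1, "client": 0.1}

    def choose(self):
        # route the next workload to the side with the lower estimate
        return min(self.estimate, key=self.estimate.get)

    def report(self, side, measured):
        # fold the newly measured processing+transfer time into the estimate
        self.estimate[side] = ((1 - self.alpha) * self.estimate[side]
                               + self.alpha * measured)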
Effect of elastic boundaries in hydrostatic problems
NASA Astrophysics Data System (ADS)
Volobuev, A. N.; Tolstonogov, A. P.
2010-03-01
The possibility and conditions of use of the Bernoulli equation for description of an elastic pipeline were considered. It is shown that this equation is identical in form to the Bernoulli equation used for description of a rigid pipeline. It has been established that the static pressure entering into the Bernoulli equation is not identical to the pressure entering into the impulse-momentum equation. The hydrostatic problem on the pressure distribution over the height of a beaker with a rigid bottom and elastic walls, filled with a liquid, was solved.
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
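To make the chain-structured case concrete, here is a hedged Python sketch of a dynamic program that partitions a chain of module weights into p contiguous blocks (one per processor) so as to minimize the bottleneck, i.e. the largest block sum. It is a simplified stand-in for illustration; the paper's Sum-Bottleneck path algorithm addresses more general cost structures and architectures.

def chain_partition(weights, p):
    """Return the minimum achievable bottleneck when splitting the
    module chain `weights` into p contiguous blocks."""
    n = len(weights)
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # dp[k][i]: best bottleneck using k blocks for the first i modules
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0.0
    for k in range(1, p + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):          # last block holds modules j..i-1
                cost = max(dp[k - 1][j], prefix[i] - prefix[j])
                dp[k][i] = min(dp[k][i], cost)
    return dp[p][n]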
A New Lagrangian Relaxation Method Considering Previous Hour Scheduling for Unit Commitment Problem
NASA Astrophysics Data System (ADS)
Khorasani, H.; Rashidinejad, M.; Purakbari-Kasmaie, M.; Abdollahi, A.
2009-08-01
Generation scheduling is a crucial challenge in power systems, especially under the new environment of liberalization of the electricity industry. A new Lagrangian relaxation method for unit commitment (UC) is presented for solving the generation scheduling problem. This paper focuses on the economic aspect of the UC problem, while previous-hour scheduling is studied as a very important issue: generation scheduling of the present hour is conducted by considering the scheduling of the previous hour. The impacts of hot/cold start-up costs are also taken into account. Case studies and numerical analysis present significant outcomes and demonstrate the effectiveness of the proposed method.
Production scheduling with ant colony optimization
NASA Astrophysics Data System (ADS)
Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu
2017-10-01
The optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is a well-known technique for solving global multi-objective optimization problems. In the article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against some other production scheduling algorithms is presented. Advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are provided.
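As a hedged illustration of the technique, the following Python sketch applies a bare-bones ant colony optimization to sequencing jobs on a single machine to minimize total tardiness. The pheromone model, parameter values, and objective are simplifying assumptions; the article addresses richer production scheduling models.

import random

def aco_sequence(proc, due, n_ants=20, iters=50, rho=0.1, q=1.0):
    """proc[j] and due[j] are the processing time and due date of job j;
    returns the best job sequence found and its total tardiness."""
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]        # pheromone: job i followed by job j
    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            seq = [random.randrange(n)]
            remaining = set(range(n)) - set(seq)
            while remaining:
                cand = sorted(remaining)
                weights = [tau[seq[-1]][j] for j in cand]
                nxt = random.choices(cand, weights)[0]
                seq.append(nxt)
                remaining.discard(nxt)
            t, cost = 0.0, 0.0
            for j in seq:                      # total tardiness of this sequence
                t += proc[j]
                cost += max(0.0, t - due[j])
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        for row in tau:                        # evaporation
            for j in range(n):
                row[j] *= (1.0 - rho)
        for a, b in zip(best_seq, best_seq[1:]):   # reinforce the best tour
            tau[a][b] += q / (1.0 + best_cost)
    return best_seq, best_cost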
Sensibility study in a flexible job shop scheduling problem
NASA Astrophysics Data System (ADS)
Curralo, Ana; Pereira, Ana I.; Barbosa, José; Leitão, Paulo
2013-10-01
This paper assesses the impact of job order on the optimal operation time in a Flexible Job Shop Scheduling Problem. In this work a real assembly cell was studied: the AIP-PRIMECA cell at the Université de Valenciennes et du Hainaut-Cambrésis, in France, which is considered a Flexible Job Shop problem. The problem consists in finding the schedule of machine operations, taking into account the precedence constraints. The main objective is to minimize the batch makespan, i.e. the finish time of the last operation completed in the schedule. In short, the present study evaluates whether the job order affects the optimal time of the operations schedule. A genetic algorithm was used to solve the optimization problem. In conclusion, it is found that the job order influences the optimal time.
Coordinating space telescope operations in an integrated planning and scheduling architecture
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Stephen F.; Cesta, Amedeo; D'Aloisi, Daniela
1992-01-01
The Heuristic Scheduling Testbed System (HSTS), a software architecture for integrated planning and scheduling, is discussed. The architecture has been applied to the problem of generating observation schedules for the Hubble Space Telescope. This problem is representative of the class of problems that can be addressed: their complexity lies in the interaction of resource allocation and auxiliary task expansion. The architecture deals with this interaction by viewing planning and scheduling as two complementary aspects of the more general process of constructing behaviors of a dynamical system. The principal components of the software architecture are described, indicating how to model the structure and dynamics of a system, how to represent schedules at multiple levels of abstraction in the temporal database, and how the problem solving machinery operates. A scheduler for the detailed management of Hubble Space Telescope operations that has been developed within HSTS is described. Experimental performance results are given that indicate the utility and practicality of the approach.
Preliminary Evaluation of BIM-based Approaches for Schedule Delay Analysis
NASA Astrophysics Data System (ADS)
Chou, Hui-Yu; Yang, Jyh-Bin
2017-10-01
The problem of schedule delay commonly occurs in construction projects. The quality of delay analysis depends on the availability of schedule-related information and delay evidence. More information used in delay analysis usually produces more accurate and fair analytical results. How to use innovative techniques to improve the quality of schedule delay analysis results has received much attention recently. As the Building Information Modeling (BIM) technique has developed quickly, approaches using BIM and 4D simulation techniques have been proposed and implemented. Obvious benefits have been achieved, especially in identifying and solving construction sequence problems in advance of construction. This study performs an intensive literature review to discuss the problems encountered in schedule delay analysis and the possibility of using BIM as a tool in developing a BIM-based approach for schedule delay analysis. This study believes that most of the identified problems can be dealt with by the BIM technique. The research results could serve as a foundation for developing new approaches for resolving schedule delay disputes.
Detection of leaks in buried rural water pipelines using thermal infrared images
Eidenshink, Jeffery C.
1985-01-01
Leakage is a major problem in many pipelines. Minor leaks, called 'seeper leaks', which generally range from 2 to 10 m³ per day, are common and are difficult to detect using conventional ground surveys. The objective of this research was to determine whether airborne thermal-infrared remote sensing could be used to detect leaks and monitor rural water pipelines. This study indicates that such leaks can be detected using low-altitude, 8.7- to 11.5-micrometer wavelength thermal infrared images collected under proper conditions.
Campos, Claudia; Leon, Yanerys; Sleiman, Andressa; Urcuyo, Beatriz
2017-03-01
One potential limitation of functional communication training (FCT) is that after the functional communication response (FCR) is taught, the response may be emitted at high rates or inappropriate times. Thus, schedule thinning is often necessary. Previous research has demonstrated that multiple schedules can facilitate schedule thinning by establishing discriminative control of the communication response while maintaining low rates of problem behavior. To date, most applied research evaluating the clinical utility of multiple schedules has done so in the context of behavior maintained by positive reinforcement (e.g., attention or tangible items). This study examined the use of a multiple schedule with alternating Fixed Ratio (FR 1)/extinction (EXT) components for two individuals with developmental disabilities who emitted escape-maintained problem behavior. Although problem behavior remained low during all FCT and multiple schedule phases, the use of the multiple schedule alone did not result in discriminated manding.
Solomon Gulch hydroelectric project takes shape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The planning and current construction activities for the Solomon Gulch hydroelectric plant near Valdez, Alaska, which is scheduled for dam completion in 1980 and power plant operation in 1981, are discussed. The main dam will be 115 ft high and 360 ft wide. The two parallel 48-in.-diameter penstocks will be constructed from surplus pipe left over from the Alaska pipeline project. Construction on the 12 MW plant began in October 1978. (LCL)
Closing the Gaps and Filling the STEM Pipeline: A Multidisciplinary Approach
ERIC Educational Resources Information Center
Doerschuk, Peggy; Bahrim, Cristian; Daniel, Jennifer; Kruger, Joseph; Mann, Judith; Martin, Cristopher
2016-01-01
There is a growing demand for degreed science, technology, engineering and mathematics (STEM) professionals, but the production of degreed STEM students is not keeping pace. Problems exist at every juncture along the pipeline. Too few students choose to major in STEM disciplines. Many of those who do major in STEM drop out or change majors.…
The CWF pipeline system from Shen mu to the Yellow Sea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ercolani, D.
1993-12-31
A feasibility study on the applicability of coal-water fuel (CWF) technology in the People's Republic of China (PRC) is in progress. This study, awarded to Snamprogetti by the International Centre for Scientific Culture (World Laboratory) of Geneva, Switzerland, is performed on behalf of Chinese organizations led by the Ministry of Energy Resources and the Academy of Sciences of the People's Republic of China. Slurry pipelines appear to be a solution to the logistic problems created by progressively increasing coal consumption and the limited availability of conventional transport infrastructure in the PRC. Within this framework, CWF pipelines are the most innovative technological option, in consideration of the various advantages the technology offers with respect to conventional slurry pipelines. The PRC CWF pipeline system study evaluates two alternative transport streams, both originating from the same slurry production plant, located at Shachuanguo, about 100 km from Sheng Mu.
Fractional Programming for Communication Systems—Part II: Uplink Scheduling via Matching
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper develops novel methodologies for using fractional programming (FP) techniques to design and optimize communication systems. Part I of this paper proposes a new quadratic transform for FP and treats its application to continuous optimization problems. In this Part II, we study discrete problems, such as those involving user scheduling, which are considerably more difficult to solve. Unlike continuous problems, discrete or mixed discrete-continuous problems normally cannot be recast as convex problems. In contrast to the common heuristic of relaxing the discrete variables, this work reformulates the original problem in an FP form amenable to distributed combinatorial optimization. The paper illustrates this methodology by tackling the important and challenging problem of uplink coordinated multi-cell user scheduling in wireless cellular systems. Uplink scheduling is more challenging than downlink scheduling, because uplink user scheduling decisions significantly affect the interference pattern in nearby cells. Further, the discrete scheduling variable needs to be optimized jointly with continuous variables such as transmit power levels and beamformers. The main idea of the proposed FP approach is to decouple the interaction among the interfering links, thereby permitting a distributed and joint optimization of the discrete and continuous variables with provable convergence. The paper shows that the well-known weighted minimum mean-square-error (WMMSE) algorithm can also be derived from a particular use of FP, but our proposed FP-based method significantly outperforms WMMSE when discrete user scheduling variables are involved, both in terms of run-time efficiency and optimization results.
The Ames-Lockheed orbiter processing scheduling system
NASA Technical Reports Server (NTRS)
Zweben, Monte; Gargan, Robert
1991-01-01
A general purpose scheduling system and its application to Space Shuttle Orbiter Processing at the Kennedy Space Center (KSC) are described. Orbiter processing entails all the inspection, testing, repair, and maintenance necessary to prepare the Shuttle for launch and takes place within the Orbiter Processing Facility (OPF) at KSC, the Vehicle Assembly Building (VAB), and on the launch pad. The problems are extremely combinatoric in that there are thousands of tasks, resources, and other temporal considerations that must be coordinated. Researchers are building a scheduling tool that they hope will be an integral part of automating the planning and scheduling process at KSC. The scheduling engine is domain independent and is also being applied to Space Shuttle cargo processing problems as well as wind tunnel scheduling problems.
An investigation of the use of temporal decomposition in space mission scheduling
NASA Technical Reports Server (NTRS)
Bullington, Stanley E.; Narayanan, Venkat
1994-01-01
This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
NASA Technical Reports Server (NTRS)
Richards, Stephen F.
1991-01-01
Although computerized operations have realized significant gains in many areas, one area, scheduling, has enjoyed few benefits from automation. The traditional methods of industrial engineering and operations research have not proven robust enough to handle the complexities associated with the scheduling of realistic problems. To address this need, NASA has developed the computer-aided scheduling system (COMPASS), a sophisticated, interactive scheduling tool that is in widespread use within NASA and the contractor community. However, COMPASS provides no explicit support for the large class of problems in which several people, perhaps at various locations, build separate schedules that share a common pool of resources. This research examines the issue of distributed scheduling, as applied to application domains characterized by the partial ordering of tasks, limited resources, and time restrictions. The focus of this research is on identifying issues related to distributed scheduling, locating applicable problem domains within NASA, and suggesting areas for ongoing research. The issues that this research identifies are goals, rescheduling requirements, database support, the need for communication and coordination among individual schedulers, the potential for expert system support for scheduling, and the possibility of integrating artificially intelligent schedulers into a network of human schedulers.
Geohazard assessment lifecycle for a natural gas pipeline project
NASA Astrophysics Data System (ADS)
Lekkakis, D.; Boone, M. D.; Strassburger, E.; Li, Z.; Duffy, W. P.
2015-09-01
This paper is a walkthrough of the geohazard risk assessment performed for the Front End Engineering Design (FEED) of a planned large-diameter natural gas pipeline, extending from Eastern Europe to Western Asia for a total length of approximately 1,850 km. The geohazards discussed herein include liquefaction-induced pipe buoyancy, cyclic softening, lateral spreading, slope instability, groundwater rise-induced pipe buoyancy, and karst. The geohazard risk assessment lifecycle comprised 4 stages: initially, a desktop study was carried out to describe the geologic setting along the alignment and to conduct a preliminary assessment of the geohazards. The development of a comprehensive Digital Terrain Model, topography, and aerial photography data was fundamental in this process. Subsequently, field geohazard mapping was conducted with the deployment of 8 teams of geoprofessionals, to investigate the proposed major reroutes and delve into areas of poor or questionable data. During the third stage, a geotechnical subsurface site investigation was executed based on the results of the above study and mapping efforts, in order to obtain sufficient data tailored for risk quantification. Lastly, all gathered and processed information was overlaid in a Geographical Information database toward a final determination of the critical reaches of the pipeline alignment. Input from Subject Matter Experts (SME) in the fields of landslides, karst and fluvial geomorphology was incorporated during the second and fourth stages of the assessment. Their experience in that particular geographical region was key to making appropriate decisions based on engineering judgment. As the design evolved through the above stages, the pipeline corridor was narrowed from a 2-km wide corridor, to a 500-m corridor and finally to a fixed alignment. Where the geohazard risk was high, rerouting of the pipeline was generally selected as a mitigation measure. In some cases of high uncertainty in the assessment, further exploration was proposed. In cases where rerouting was constrained, mitigation via structural measures was proposed. This paper further discusses the cost, schedule and resource challenges of planning and executing such a large-scale geotechnical investigation, the interfaces between the various disciplines involved during the assessment, the innovative tools employed for the field mapping, the classifications developed for mapping landslides, karst geology, and trench excavatability, the determination of liquefaction stretches, and the process for the site localization of the Above Ground Installations (AGI). It finally discusses the objectives of the FEED study in terms of providing a route, a ±20% project cost estimate and a schedule, and the additional engineering work foreseen to take place in the detailed engineering phase of the project.
Automated telescope scheduling
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1988-01-01
With the ever increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.
Reinforcement learning in scheduling
NASA Technical Reports Server (NTRS)
Dietterich, Tom G.; Ok, Dokyeong; Zhang, Wei; Tadepalli, Prasad
1994-01-01
The goal of this research is to apply reinforcement learning methods to real-world problems like scheduling. In this preliminary paper, we show that learning to solve scheduling problems such as the Space Shuttle Payload Processing and the Automatic Guided Vehicle (AGV) scheduling can be usefully studied in the reinforcement learning framework. We discuss some of the special challenges posed by the scheduling domain to these methods and propose some possible solutions we plan to implement.
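A toy Python sketch of this framing, under strong simplifying assumptions: tabular Q-learning on a one-machine dispatching problem where the state is the set of waiting jobs, the action is the next job to run, and the reward penalizes tardiness. The actual payload-processing and AGV domains in the paper are far larger and richer.

import random

def q_learning_dispatch(proc, due, episodes=2000, alpha=0.1, gamma=1.0, eps=0.2):
    """Learn a Q-table mapping (remaining-job set, job) to value."""
    n = len(proc)
    Q = {}
    for _ in range(episodes):
        remaining, t = frozenset(range(n)), 0.0
        while remaining:
            qs = Q.setdefault(remaining, {a: 0.0 for a in remaining})
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.choice(list(remaining))
            else:
                a = max(qs, key=qs.get)
            t += proc[a]
            reward = -max(0.0, t - due[a])     # negative tardiness of job a
            nxt = remaining - {a}
            future = max(Q.setdefault(nxt, {b: 0.0 for b in nxt}).values(),
                         default=0.0)
            qs[a] += alpha * (reward + gamma * future - qs[a])
            remaining = nxt
    return Q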
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
NASA Astrophysics Data System (ADS)
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important kind of scheduling problem. To achieve a certain optimal goal, such as the shortest duration, the smallest cost, or resource balance, it is required to arrange the start and finish of all tasks under the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and its models are abundant. Many combinatorial optimization problems are special cases of the RCPSP, such as job shop scheduling, flow shop scheduling and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve the problem more efficiently and accurately. However, these studies did not optimize the main parameters of the genetic algorithm; generally, an empirical method is used, which cannot guarantee optimal parameters. In this paper, the problem of blind parameter selection in solving the RCPSP is addressed: we performed a sampling analysis, established a proxy model, and ultimately solved for the optimal parameters.
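A minimal sketch of the kind of systematic parameter search the paper argues for, in place of blind empirical choices. Here run_ga is a hypothetical callable that executes one GA run with the given parameters and returns the achieved objective value; the paper itself goes further, fitting a proxy model to the samples before optimizing.

import itertools
import statistics

def tune_ga(run_ga, grid, replications=5):
    """Evaluate every parameter combination in `grid` (name -> list of
    candidate values), averaging over replications, and return the best."""
    best_params, best_score = None, float("inf")
    names = list(grid)
    for combo in itertools.product(*(grid[name] for name in names)):
        params = dict(zip(names, combo))
        score = statistics.mean(run_ga(**params) for _ in range(replications))
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# e.g. tune_ga(run_ga, {"pop_size": [50, 100], "pc": [0.7, 0.9], "pm": [0.01, 0.1]})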
Application of decentralized cooperative problem solving in dynamic flexible scheduling
NASA Astrophysics Data System (ADS)
Guan, Zai-Lin; Lei, Ming; Wu, Bo; Wu, Ya; Yang, Shuzi
1995-08-01
The object of this study is to discuss an intelligent solution to the problem of task allocation in shop floor scheduling. For this purpose, the technique of distributed artificial intelligence (DAI) is applied. Intelligent agents (IAs) are used to realize decentralized cooperation, and negotiation is realized by using message passing based on the contract net model. Multiple agents, such as manager agents, workcell agents, and workstation agents, make game-like decisions based on multiple-criteria evaluations. This procedure of decentralized cooperative problem solving makes local scheduling possible, and by integrating such multiple local schedules, dynamic flexible scheduling for whole shop floor production can be realized.
Mathematical modeling of ignition of woodlands resulted from accident on the pipeline
NASA Astrophysics Data System (ADS)
Perminov, V. A.; Loboda, E. L.; Reyno, V. V.
2014-11-01
Accidents occurring at pipeline sites are accompanied by environmental damage, economic loss, and sometimes loss of life. In this paper we calculate the sizes of the possible ignition zones arising in emergency situations on pipelines located close to forest, accompanied by the appearance of fireballs. Using the method of mathematical modeling, the maximum sizes of the vegetation ignition zones resulting from accidental releases of flammable substances are calculated. Within the context of a general mathematical model of forest fires, the paper gives a new mathematical setting and a method of numerical solution for the forest fire modeling problem. The boundary-value problem is solved numerically using the method of splitting according to physical processes. The dependences of the ignition zone size of the forest fuel on the amount of leaked flammable substance and the moisture content of the vegetation are presented.
Genetic algorithm to solve the problems of lectures and practicums scheduling
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Apriani, R.; Sawaluddin; Abdullah, D.; Albra, W.; Heikal, M.; Abdurrahman, A.; Khaddafi, M.
2018-02-01
Generally, the scheduling process is done manually. However, this method has a low accuracy level, along with the possibility that one scheduled process collides with another. When scheduling theory class and practicum timetables, there are numerous problems, such as lecturer teaching schedule collisions, schedule collisions with other schedules, practicum lesson schedules that collide with theory classes, and the limited number of classrooms available. In this research, a genetic algorithm is implemented to perform the theory class and practicum timetable scheduling process. The algorithm is used to process data containing lists of lecturers, courses, and classrooms, obtained from the information technology department at the University of Sumatera Utara. The result of the scheduling process using the genetic algorithm is the most optimal timetable that conforms to the available time slots, classrooms, courses, and lecturer schedules.
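A hedged Python sketch of a genetic algorithm for timetabling of this kind: a chromosome assigns each event a (slot, room) pair, and fitness counts collisions of rooms and of lecturers. The encoding, operators, and parameters are illustrative assumptions, not the authors' implementation.

import random

def timetable_ga(n_events, lecturer_of, n_slots, n_rooms,
                 pop=60, gens=200, pm=0.05):
    """Return the chromosome with the fewest collisions found."""
    def rand_gene():
        return (random.randrange(n_slots), random.randrange(n_rooms))

    def conflicts(chrom):
        c, seen_room, seen_lect = 0, set(), set()
        for e, (s, r) in enumerate(chrom):
            if (s, r) in seen_room:                 # room double-booked
                c += 1
            if (s, lecturer_of[e]) in seen_lect:    # lecturer double-booked
                c += 1
            seen_room.add((s, r))
            seen_lect.add((s, lecturer_of[e]))
        return c

    population = [[rand_gene() for _ in range(n_events)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=conflicts)
        if conflicts(population[0]) == 0:           # collision-free timetable
            break
        parents = population[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_events)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [rand_gene() if random.random() < pm else g for g in child]
            children.append(child)
        population = parents + children
    return min(population, key=conflicts)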
Decision-theoretic control of EUVE telescope scheduling
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1993-01-01
This paper describes a decision-theoretic scheduler (DTS) designed to employ state-of-the-art probabilistic inference technology to speed the search for efficient solutions to constraint-satisfaction problems. Our approach involves assessing the performance of heuristic control strategies that are normally hard-coded into scheduling systems and using probabilistic inference to aggregate this information in light of the features of a given problem. The Bayesian Problem-Solver (BPS) introduced a similar approach to solving single-agent and adversarial graph search problems, yielding orders-of-magnitude improvement over traditional techniques. Initial efforts suggest that similar improvements will be realizable when applied to typical constraint-satisfaction scheduling problems.
Design tool for multiprocessor scheduling and evaluation of iterative dataflow algorithms
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1995-01-01
A graph-theoretic design process and software tool is defined for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. Graph-search algorithms and analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool applies the design process to a given problem and includes performance optimization through the inclusion of additional precedence constraints among the schedulable tasks.
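One of the performance bounds such graph analysis yields is the critical-path bound: no schedule, on any number of processors, can complete a dataflow graph faster than its longest precedence path. A minimal Python sketch, with task durations and precedence edges as assumed inputs:

from collections import defaultdict, deque

def critical_path_bound(duration, edges):
    """duration maps task -> execution time; edges lists (u, v)
    precedence pairs. Returns the longest-path lower bound on makespan."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    finish = {t: duration[t] for t in duration}    # earliest finish times
    queue = deque(t for t in duration if indeg[t] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            finish[v] = max(finish[v], finish[u] + duration[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(finish.values())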
Simulation of a manual electric-arc welding in a working gas pipeline. 1. Formulation of the problem
NASA Astrophysics Data System (ADS)
Baikov, V. I.; Gishkelyuk, I. A.; Rus', A. M.; Sidorovich, T. V.; Tonkonogov, B. A.
2010-11-01
Problems of mathematical simulation of the temperature stresses arising in the wall of a pipe of a cross-country gas pipeline in the process of electric-arc welding of defects in it have been considered. Mathematical models of formation of temperatures, deformations, and stresses in a gas pipe subjected to phase transformations have been developed. These models were numerically realized in the form of algorithms representing a part of an application-program package. Results of verification of the computational complex and calculation results obtained with it are presented.
A visual programming environment for the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David
1988-01-01
The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.
N, Sadhasivam; R, Balamurugan; M, Pandi
2018-01-27
Objective: Epigenetic modifications involving DNA methylation and histone status are responsible for the stable maintenance of cellular phenotypes. Abnormalities may be causally involved in cancer development and therefore could have diagnostic potential. The field of epigenomics refers to all epigenetic modifications implicated in the control of gene expression, with a focus on better understanding of human biology in both normal and pathological states. An epigenomics scientific workflow is essentially a data processing pipeline that automates the execution of various genome sequencing operations or tasks. The cloud is a popular computing platform for deploying large-scale epigenomics scientific workflows; its dynamic environment provides various resources to scientific users on a pay-per-use billing model. Scheduling epigenomics scientific workflow tasks is a complicated problem on a cloud platform. We here focused on the application of an improved particle swarm optimization (IPSO) algorithm for this purpose. Methods: The IPSO algorithm was applied to find suitable resources and allocate epigenomics tasks so that the total cost was minimized, for detection of epigenetic abnormalities of potential application to cancer diagnosis. Result: The results showed that IPSO-based task-to-resource mapping reduced total cost by 6.83 percent compared to the traditional PSO algorithm. Conclusion: The results for various cancer diagnosis tasks showed that IPSO-based task-to-resource mapping can achieve better costs than PSO-based mapping for epigenomics scientific application workflows.
Designing a fuzzy scheduler for hard real-time systems
NASA Technical Reports Server (NTRS)
Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami
1992-01-01
In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent uncertainty in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate the search for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.
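A small Python sketch of how fuzzy rules of this kind might score tasks during the search, using triangular membership functions and weighted-average defuzzification. The rule set, membership shapes, and inputs (slack and importance) are invented for illustration and are not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(slack, importance):
    """Aggregate two example rules into a scheduling priority in [0, 1]."""
    small_slack = tri(slack, -1.0, 0.0, 5.0)
    high_importance = tri(importance, 0.5, 1.0, 1.5)
    low_importance = tri(importance, -0.5, 0.0, 0.5)
    rules = [
        # IF slack is small AND importance is high THEN priority is high
        (min(small_slack, high_importance), 1.0),
        # IF slack is not small AND importance is low THEN priority is low
        (min(1.0 - small_slack, low_importance), 0.1),
    ]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0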
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
Improving Hospital-wide Patient Scheduling Decisions by Clinical Pathway Mining.
Gartner, Daniel; Arnolds, Ines V; Nickel, Stefan
2015-01-01
Recent research has highlighted the need for solving hospital-wide patient scheduling problems. In inpatient scheduling, patient activities have to be scheduled on scarce hospital resources such that temporal relations between activities (e.g. recovery times) are ensured. Common objectives are, among others, the minimization of the length of stay (LOS). In this paper, we consider a hospital-wide patient scheduling problem with LOS minimization based on uncertain clinical pathways. We approach the problem in three stages: first, we learn the most likely clinical pathways using a sequential pattern mining approach; second, we provide a mathematical model for patient scheduling; and finally, we combine the two approaches. In an experimental study carried out using real-world data, we show that our approach outperforms baseline approaches on two metrics.
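As a hedged, first-order illustration of the first stage, the sketch below predicts the most likely next clinical activity from historical sequences using simple transition counts; the paper itself applies full sequential pattern mining before feeding the learned pathways into the scheduling model.

from collections import Counter, defaultdict

def learn_pathways(histories):
    """histories is a list of activity sequences, one per past patient.
    Returns, for each activity, the most frequently observed successor."""
    transitions = defaultdict(Counter)
    for seq in histories:
        for cur, nxt in zip(seq, seq[1:]):
            transitions[cur][nxt] += 1
    return {cur: nxts.most_common(1)[0][0] for cur, nxts in transitions.items()}

# e.g. learn_pathways([["admit", "x-ray", "surgery"],
#                      ["admit", "x-ray", "discharge"],
#                      ["admit", "x-ray", "surgery"]])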
Development of bacterial biofilms in dairy processing lines.
Austin, J W; Bergeron, G
1995-08-01
Adherence of bacteria to various milk contact sites was examined by scanning electron microscopy and transmission electron microscopy. New gaskets, endcaps, vacuum breaker plugs and pipeline inserts were installed in different areas in lines carrying either raw or pasteurized milk, and a routine schedule of cleaning-in-place and sanitizing was followed. Removed cleaned and sanitized gaskets were processed for scanning or transmission electron microscopy. Adherent bacteria were observed on the sides of gaskets removed from both pasteurized and raw milk lines. Some areas of Buna-n gaskets were colonized with a confluent layer of bacterial cells surrounded by an extensive amorphous matrix, while other areas of Buna-n gaskets showed a diffuse adherence over large areas of the surface. Most of the bacteria attached to polytetrafluoroethylene (PTFE or Teflon) gaskets were found in crevices created by insertion of the gasket into the pipeline. Examination of stainless steel endcaps, pipeline inserts, and PTFE vacuum breaker plugs did not reveal the presence of adherent bacteria. The results of this study indicate that biofilms developed on the sides of gaskets in spite of cleaning-in-place procedures. These biofilms may be a source of post-pasteurization contamination.
Transport of thermal water from well to thermal baths
NASA Astrophysics Data System (ADS)
Montegrossi, Giordano; Vaselli, Orlando; Tassi, Franco; Nocentini, Matteo; Liccioli, Caterina; Nisi, Barbara
2013-04-01
The main problem in building a thermal bath is having a hot spring or thermal well located in a position appropriate for customer access; since the Roman age, thermal baths were distributed across the whole empire, and roads and cities were often built around them afterwards. Nowadays, perspectives have changed, and occasionally the thermal resource must be transported by a pipeline system from the source to the spa. Nevertheless, the geothermal fluid may present problems of corrosion and scaling during transport. In the Ambra valley, central Italy, a geothermal well has recently been drilled; it discharges a Ca(Mg)-SO4, CO2-rich water at a temperature of 41 °C that could be used to supply a new spa in the area surrounding the well itself. The main problem is that the producing well is located in a forest ca. 4 km away from the nearest structure suitable to host the thermal bath. In this study, we illustrate the pipeline design from the producing well to the spa, constraining the physical and geochemical parameters to reduce scaling and corrosion phenomena. The starting point is the thermal well, which has a flow rate ranging from 22 up to 25 L/sec. The thermal fluid heavily precipitates calcite (50-100 ton/month) due to the calcite-CO2 equilibrium in the reservoir, where a partial pressure of 11 bar of CO2 is present. One of the most vexing problems in investigating scaling processes during fluid transport in the pipeline is that there is no proper software package for multiphase fluid flow in pipes characterized by such a complex chemistry. As a consequence, we used a modified TOUGHREACT with the Pitzer database, arranged to use the Darcy-Weisbach equation and applying "fictitious" material properties in order to reproduce the proper y-z velocity profile in comparison with the analytical solution for laminar fluid flow in pipes. This investigation yielded the lowest CO2 partial pressure to be kept in the pipeline (nearly 2.5 bar) to avoid uncontrolled calcite precipitation, and the pipeline path was designed accordingly. Non-linear phenomena that may originate calcite precipitation, such as phase separation and pressure waves, are discussed. The pipeline and the thermal bath are planned to be built next year.
Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei
2017-12-01
As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n²) time complexity.
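The molecular operations themselves do not reduce to a short code sketch, but the objective does. Below is a conventional longest-processing-time greedy for the same task scheduling problem, shown only to make the 'minimum execution time of the last finished individual' objective concrete; it does not emulate the DNA procedure.

def lpt_schedule(jobs, m):
    """Assign n jobs (processing times) to m individuals, greedily giving
    each job to the least-loaded individual in decreasing-time order.
    Returns the makespan and the assignment."""
    loads = [0.0] * m
    assignment = [[] for _ in range(m)]
    for j in sorted(range(len(jobs)), key=lambda j: -jobs[j]):
        i = min(range(m), key=loads.__getitem__)
        loads[i] += jobs[j]
        assignment[i].append(j)
    return max(loads), assignment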
ERIC Educational Resources Information Center
Cooper, Catherine R.; Chavira, Gabriela; Mena, Dolores D.
2005-01-01
This article maps recent progress on 5 key questions about "the academic pipeline problem" of different rates of persistence through school among ethnically diverse students across the nation. The article shows the complementary development of the Overlapping Spheres of Influence Theory and Sociocultural Theory and aligns concepts and measures…
NASA Astrophysics Data System (ADS)
Li, Guoliang; Xing, Lining; Chen, Yingwu
2017-11-01
The autonomy of self-scheduling on Earth observation satellites and the increasing scale of satellite networks have attracted much attention from researchers over the last decades. In practice, the limited onboard computational resources present a challenge for online scheduling algorithms. This study considers the online scheduling problem for a single autonomous Earth observation satellite within a satellite network environment, especially addressing urgent tasks that arrive stochastically during the scheduling horizon. We describe the problem and propose a hybrid online scheduling mechanism with revision and progressive techniques to solve it. The mechanism includes two decision policies: a when-to-schedule policy combining periodic scheduling with event-driven rescheduling triggered by a critical cumulative number of tasks, and a how-to-schedule policy combining progressive and revision approaches to accommodate two categories of tasks, normal and urgent. On this basis, we developed two heuristic (re)scheduling algorithms and compared them with other generally used techniques. Computational experiments indicate that the percentage of urgent tasks brought into the schedule by the proposed mechanism is much higher than that of a purely periodic scheduling mechanism, and that the specific performance depends strongly on several mechanism-relevant and task-relevant factors. For the online scheduling, the modified weighted shortest imaging time first and dynamic profit system benefit heuristics outperformed the others on total profit and on the percentage of successfully scheduled urgent tasks.
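A minimal Python sketch of the when-to-schedule policy described above, combining a periodic trigger with an event-driven trigger that fires once the cumulative number of pending urgent tasks crosses a critical threshold. The class and parameter names are our own assumptions.

class WhenToSchedule:
    def __init__(self, period=600.0, critical_count=5):
        self.period = period                  # seconds between periodic runs
        self.critical_count = critical_count  # urgent-task trigger threshold
        self.pending_urgent = 0
        self.last_run = 0.0

    def on_urgent_task(self):
        self.pending_urgent += 1

    def should_reschedule(self, now):
        if now - self.last_run >= self.period:             # periodic trigger
            return True
        return self.pending_urgent >= self.critical_count  # event-driven trigger

    def mark_rescheduled(self, now):
        self.last_run, self.pending_urgent = now, 0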
Technology Cost and Schedule Estimation (TCASE) Final Report
NASA Technical Reports Server (NTRS)
Wallace, Jon; Schaffer, Mark
2015-01-01
During the 2014-2015 project year, the focus of the TCASE project shifted from the collection of historical data from many sources to securing a data pipeline between TCASE and NASA's widely used TechPort system. TCASE v1.0 implements a data import solution that was achievable within the project scope, while still providing the basis for a long-term ability to keep TCASE in sync with TechPort. Conclusion: TCASE data quantity is adequate, and the established data pipeline will enable future growth. Data quality is now highly dependent on the quality of data in TechPort. Recommendation: Technology development organizations within NASA should continue to work closely with project/program data tracking and archiving efforts (e.g. TechPort) to ensure that the right data is being captured at the appropriate quality level. TCASE would greatly benefit, for example, if project cost/budget information were included in TechPort in the future.
WFIRST: STScI Science Operations Center (SSOC) Activities and Plans
NASA Astrophysics Data System (ADS)
Gilbert, Karoline M.; STScI WFIRST Team
2018-01-01
The science operations for the WFIRST Mission will be distributed between Goddard Space Flight Center, the Space Telescope Science Institute (STScI), and the Infrared Processing and Analysis Center (IPAC). The STScI Science Operations Center (SSOC) will schedule and archive all WFIRST observations, will calibrate and produce pipeline-reduced data products for the Wide Field Instrument, and will support the astronomical community in planning WFI observations and analyzing WFI data. During the formulation phase, WFIRST team members at STScI have developed operations concepts for scheduling, data management, and the archive; have performed technical studies investigating the impact of WFIRST design choices on data quality and analysis; and have built simulation tools to aid the community in exploring WFIRST’s capabilities. We will highlight examples of each of these efforts.
NASA Astrophysics Data System (ADS)
Ryabkov, A. V.; Stafeeva, N. A.; Ivanov, V. A.; Zakuraev, A. F.
2018-05-01
A complex structure has been designed: a universal floating pontoon road on whose body pipelines can be laid automatically, year-round and in any weather, for Siberia and the Far North. A new method is proposed for constructing pipelines on pontoon modules made of composite materials. Composite pontoons for bedding pipelines are designed with track-forming guides for automated wheeled transport (the pipelayer). The proposed system eliminates the need to build a road along the route and ensures the buoyancy and smooth motion of the self-propelled automated stacker, shaped like a "centipede", which offers a number of significant advantages in the construction and operation of the entire complex in swampy and waterlogged areas without overburden.
NASA Astrophysics Data System (ADS)
Gao, Kaizhou; Wang, Ling; Luo, Jianping; Jiang, Hua; Sadollah, Ali; Pan, Quanke
2018-06-01
In this article, scheduling and rescheduling problems with increasing processing times and new job insertion are studied for reprocessing operations in the remanufacturing process. To handle the unpredictability of reprocessing times, an experience-based strategy is used. Rescheduling strategies are applied to account for the effects of increasing reprocessing times and new subassembly insertion. To optimize the scheduling and rescheduling objectives, a discrete harmony search (DHS) algorithm is proposed, and a local search method is designed to speed up its convergence rate. The DHS is applied to two real-life cases for minimizing the maximum completion time and the mean of earliness and tardiness (E/T); these two objectives are also considered together as a bi-objective problem. Computational results and comparisons show that the proposed DHS solves the scheduling and rescheduling problems effectively and efficiently, achieving satisfactory optimization results for scheduling and rescheduling on a real-life shop floor.
Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
The effort of finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is considered. The problem is made more complex by the multiple business criteria that must be satisfied. To solve the problem efficiently and to obtain a set of non-dominated solutions, a meta-heuristic non-dominated sorting genetic algorithm is presented. Task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase population diversity and to perform an intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems obtained from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of the schedules obtained by the algorithm.
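The swapping and insertion mutations mentioned above act on permutation-encoded schedules. A minimal sketch, assuming a schedule is simply a list of operation indices (the paper's exact task-based encoding may differ):

```python
import random

def swap_mutation(schedule):
    """Exchange the genes at two randomly chosen positions."""
    s = schedule[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion_mutation(schedule):
    """Remove one gene and reinsert it at a random position."""
    s = schedule[:]
    gene = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), gene)
    return s
```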
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Smith, Steven S.
1996-01-01
This final report summarizes research performed under NASA contract NCC 2-531 toward generalizing constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work on theories and techniques for solving this class of problems led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system are modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem-solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.
Spike: Artificial intelligence scheduling for Hubble space telescope
NASA Technical Reports Server (NTRS)
Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert
1990-01-01
Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem, since a yearlong observing program can contain some tens of thousands of exposures that are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among competing preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described, along with some experiments in adapting Spike to other spacecraft scheduling domains.
Scheduler Design Criteria: Requirements and Considerations
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
This presentation covers fundamental requirements and considerations for developing schedulers for airport operations. We first introduce performance and functional requirements for airport surface schedulers. Among the various optimization problems in airport operations, we focus on the airport surface scheduling problem, including runway and taxiway operations. We then describe a basic methodology for airport surface scheduling, such as the node-link network model and previously developed scheduling algorithms. Next, we explain in more detail how to design a mathematical formulation, which consists of objectives, decision variables, and constraints. Lastly, we review other considerations, including optimization tools, computational performance, and performance metrics for evaluation.
Åkerstedt, Torbjörn; Kecklund, Göran
2017-03-01
The purpose was to investigate which detailed characteristics of shift schedules are seen as problems by those exposed to them. A representative national sample of non-day workers (N = 2031) in Sweden was asked whether their work involved each of a number of particular schedule characteristics and, if yes, to what extent this constituted a "big problem in life". Respondents were also asked whether their work schedules had negative consequences for fatigue, sleep and social life. The characteristics with the highest percentages reporting a big problem were short notice (<1 month) of a new work schedule (30.5%), <11 h off between shifts (27.8%), and split duty (>1.5 h break at mid-shift, 27.2%). Overtime (>10 h/week), night work, morning work, and day/night shifts showed lower prevalences of being a "big problem". Women indicated more problems in general. Short notice was mainly related to negative social effects, while <11 h off between shifts was related to disturbed sleep, fatigue and social difficulties. It was concluded that schedules involving unpredictable working hours (short notice), short daily rest between shifts, and split duty shifts constitute big problems. The results challenge current views of which aspects of shift work need improvement; negative social consequences seem more important than those related to health. Copyright © 2016 Elsevier Ltd. All rights reserved.
Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model
Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong
2014-01-01
Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. A node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We first introduce how to model the data correlation for different services using the Markov Random Field (MRF) model. Second, we formulate the service-oriented node scheduling issue as three different problems: the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, concerned with selecting a number of active nodes and determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Third, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005
A bicriteria heuristic for an elective surgery scheduling problem.
Marques, Inês; Captivo, M Eugénia; Vaz Pato, Margarida
2015-09-01
Resource rationalization and reduction of waiting lists for surgery are two main guidelines for hospital units outlined in the Portuguese National Health Plan. This work is dedicated to an elective surgery scheduling problem arising in a Lisbon public hospital. In order to increase the surgical suite's efficiency and to reduce the waiting lists for surgery, two objectives are considered: maximize surgical suite occupation and maximize the number of surgeries scheduled. This elective surgery scheduling problem consists of assigning an intervention date, an operating room and a starting time for elective surgeries selected from the hospital waiting list. Accordingly, a bicriteria surgery scheduling problem arising in the hospital under study is presented. To search for efficient solutions of the bicriteria optimization problem, the minimization of a weighted Chebyshev distance to a reference point is used. A constructive and improvement heuristic procedure specially designed to address the objectives of the problem is developed and results of computational experiments obtained with empirical data from the hospital are presented. This study shows that by using the bicriteria approach presented here it is possible to build surgical plans with very good performance levels. This method can be used within an interactive approach with the decision maker. It can also be easily adapted to other hospitals with similar scheduling conditions.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known from the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures define the mapping between a particle position in PSO and a scheduling solution in JSP; this mapping is an essential step, since each particle in PSO must represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random-keys representation, and a random-key encoding scheme. These procedures were tested on the FT06 and FT10 benchmark problems available in the OR-Library, with the objective of minimizing the makespan, using MATLAB. Based on the experimental results, OPPS gives the best performance on both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
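Random-key decoding, the idea behind two of the three procedures compared above, can be stated in a few lines. A minimal sketch, assuming one key per operation and ordering by rank (the paper's exact schemes may add repair or job-repetition logic):

```python
import numpy as np

def random_key_decode(position):
    """Decode a continuous particle position into a sequence:
    entries are ranked, and their indices give the processing order."""
    return np.argsort(position).tolist()

# A 4-operation particle decodes to the order in which its keys rank.
print(random_key_decode(np.array([0.7, 0.1, 0.9, 0.4])))  # -> [1, 3, 0, 2]
```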
Automatic Generation of Heuristics for Scheduling
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.
1997-01-01
This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where a candidate heuristic is evaluated by the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real-world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
Optimizing an F-16 Squadron Weekly Pilot Schedule for the Turkish Air Force
2010-03-01
disrupted schedules are rescheduled, minimizing the total number of changes with respect to the previous schedule's objective function. Output... producing rosters for a nursing staff in a large general hospital (Dowsland, 1998); afterwards, Aickelin and Dowsland use an Indirect Genetic... algorithm to improve the solutions of the nurse scheduling problem, which is similar to the fighter squadron pilot scheduling problem (Aickelin and
Multi-trip vehicle routing and scheduling problem with time window in real life
NASA Astrophysics Data System (ADS)
Sze, San-Nah; Chiew, Kang-Leng; Sze, Jeeu-Fong
2012-09-01
This paper studies a manpower scheduling problem with multiple maintenance operations and vehicle routing considerations. Service teams located at a common service centre are required to travel to different customer sites. All customers must be served within given time windows, which are known in advance. The scheduling process must take into consideration complex constraints such as a meal break during the team's shift, multiple travelling trips, synchronisation of service teams, and working shifts. The main objective of this study is to develop a heuristic that can generate high-quality solutions in a short time for large problem instances. A Two-stage Scheduling Heuristic is developed for different variants of the problem. Empirical results show that the proposed solution performs effectively and efficiently. In addition, our proposed approximation algorithm is very flexible and can be easily adapted to different scheduling environments and operational requirements.
NASA Technical Reports Server (NTRS)
Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.
1975-01-01
Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.
A Genetic Algorithm for Flow Shop Scheduling with Assembly Operations to Minimize Makespan
NASA Astrophysics Data System (ADS)
Bhongade, A. S.; Khodke, P. M.
2014-04-01
Manufacturing systems in which several parts are processed through machining workstations and later assembled to form final products are common. Though such scheduling problems are solved using heuristics, available solution approaches can handle only moderately sized problems because of the large computation time required. In this work, a scheduling approach is developed for such a flow-shop manufacturing system with machining workstations followed by assembly workstations. The initial schedule is generated using the disjunctive method, and a genetic algorithm (GA) is then applied to generate schedules for large problems. The GA is found to give near-optimal solutions based on the deviation of the makespan from a lower bound; the lower bound of the makespan is estimated, and the percentage deviation of the makespan from it is used as a performance measure to evaluate the schedules. Computational experiments are conducted on problems generated using a fractional factorial orthogonal array, varying the number of parts per product, the number of products, and the number of workstations (ranging up to 1,520 operations). A statistical analysis indicated the significance of all three factors considered. It is concluded that the GA method can obtain near-optimal makespans.
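The makespan that both the GA and the lower bound are measured against follows a standard completion-time recurrence for the flow-shop portion. A minimal sketch, assuming a permutation flow of jobs through machines and ignoring the assembly stage:

```python
def flowshop_makespan(perm, p):
    """Makespan of job order `perm`, where p[j][k] is the processing
    time of job j on machine k. Uses the standard recurrence
    C(j, k) = max(C(j, k-1), C(prev job, k)) + p[j][k]."""
    m = len(p[0])
    c = [0] * m                   # c[k]: latest completion time on machine k
    for j in perm:
        for k in range(m):
            start = max(c[k], c[k - 1] if k > 0 else 0)
            c[k] = start + p[j][k]
    return c[-1]

# Two machines, three jobs; order (0, 1, 2) gives makespan 11.
print(flowshop_makespan([0, 1, 2], [[3, 2], [1, 4], [2, 2]]))
```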
Permutation flow-shop scheduling problem to optimize a quadratic objective function
NASA Astrophysics Data System (ADS)
Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu
2017-09-01
A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
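The WSPT rule at the heart of the asymptotic result orders jobs by processing time relative to weight; evaluating a permutation's total weighted quadratic completion time reuses the flow-shop recurrence. A minimal sketch (the paper's consistency condition is omitted, so this is illustrative only):

```python
def wspt_order(p, w):
    """Order jobs by total processing time divided by weight (ascending)."""
    total = [sum(row) for row in p]
    return sorted(range(len(w)), key=lambda j: total[j] / w[j])

def weighted_quadratic_completion(perm, p, w):
    """Sum of w_j * C_j**2 for a permutation flow shop,
    where p[j][k] is job j's processing time on machine k."""
    m = len(p[0])
    c = [0.0] * m
    obj = 0.0
    for j in perm:
        for k in range(m):
            c[k] = max(c[k], c[k - 1] if k > 0 else 0.0) + p[j][k]
        obj += w[j] * c[-1] ** 2   # quadratic penalty on completion time
    return obj
```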
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, making the exchange of code difficult; second, most pipelines are implemented in special hardware, limiting the flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for an efficient and cost-effective production system. However, setup times exist between groups, and these should be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed; in reality, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of a genetic algorithm is proposed to obtain Pareto solutions for this problem. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII) and the particle swarm optimization algorithm (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.
2016-10-01
Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. The goals, objectives, and main principles of the methodology for implementing an integrated program of AO Concern Rosenergoatom for preventing unacceptable FAC thinning and for increasing the operational flow-accelerated corrosion resistance of NPP equipment and pipelines (EaP) are formulated (hereinafter, the Program). The role of Russian software packages in evaluating and predicting the FAC rate is determined, and their potential is shown for solving practical problems of timely detection of unacceptable FAC thinning in elements of pipelines and equipment of the secondary circuit of NPP PGUs. Information is given on the structure, properties, and functions of the software systems for supporting plant personnel in monitoring and planning the in-service inspection of FAC-thinned elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of software packages for supporting NPP personnel on the flow-accelerated corrosion issue consists in revealing elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in using software systems to support personnel in the early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and new tasks set in 2012 as part of the updated Program are outlined. The prospects of the developed methods and tools within the scope of the Program's measures at the design and construction stages of NPP PGUs are discussed. The main directions of work on solving the problems of flow-accelerated corrosion of pipelines and equipment in Russian NPP PGUs are defined.
Flow-accelerated corrosion 2016 international conference
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Shipkov, A. A.
2017-05-01
The paper discusses the materials and results of the most representative world forum on the problems of flow-accelerated metal corrosion in power engineering: Flow-Accelerated Corrosion (FAC) 2016, the international conference held in Lille (France) from May 23 through May 27, 2016, sponsored by EdF-DTG with the support of the International Atomic Energy Agency (IAEA) and the World Association of Nuclear Operators (WANO). Information on the major themes of the reports and on the materials of the exhibition arranged within the framework of the congress is presented. Worldwide statistics on the operation time and intensity of FAC wall thinning of NPP pipelines and equipment are set out. The paper describes typical examples of flow-accelerated corrosion damage of condensate-feed and wet-steam pipeline components of nuclear and thermal power plants that caused forced shutdowns or accidents. The importance of research projects on the problem of flow-accelerated metal corrosion of nuclear power units, coordinated by the IAEA with the participation of leading experts in this field from around the world, is considered. The reports presented at the conference addressed the FAC mechanism in single- and two-phase flows and the impact of hydrodynamic and water-chemistry factors, the chemical composition of the metal, and other parameters on the intensity and location of localized FAC wall-thinning areas in pipeline components and power equipment. Features and patterns of local and general FAC leading to local metal thinning and contamination of the working environment with iron-bearing compounds are considered. The main trends of modern practices preventing FAC wear of NPP pipelines and equipment are defined, and an increasing role is recognized for computer codes for the assessment and prediction of the FAC rate, as well as for software systems that support NPP personnel in inspection planning and prevention of FAC wall thinning of equipment operating in single- and two-phase flows. Different lines of attack on the problem of FAC in pipeline and equipment components of existing and future nuclear power units are reviewed, and promising methods of nondestructive inspection of pipelines and equipment are presented.
Integrated resource scheduling in a distributed scheduling environment
NASA Technical Reports Server (NTRS)
Zoch, David; Hall, Gardiner
1988-01-01
The Space Station era presents a highly complex multi-mission planning and scheduling environment exercised over a highly distributed system. In order to automate the scheduling process, customers require a mechanism for communicating their scheduling requirements to NASA. A request language that a remotely located customer can use to specify his scheduling requirements to a NASA scheduler, thus automating the customer-scheduler interface, is described. This notation, Flexible Envelope-Request Notation (FERN), allows the user to completely specify his scheduling requirements such as resource usage, temporal constraints, and scheduling preferences and options. FERN also contains mechanisms for representing schedule and resource availability information, which are used in the inter-scheduler inconsistency resolution process. Additionally, a scheduler is described that can accept these requests, process them, generate schedules, and return schedule and resource availability information to the requester. The Request-Oriented Scheduling Engine (ROSE) was designed to function either as an independent scheduler or as a scheduling element in a network of schedulers. When used in a network of schedulers, each ROSE communicates schedule and resource usage information to other schedulers via the FERN notation, enabling inconsistencies to be resolved between schedulers. Individual ROSE schedules are created by viewing the problem as a constraint satisfaction problem with a heuristically guided search strategy.
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng
2014-01-01
Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. PMID:24370496
Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng
2014-04-01
Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines that are specialized for health data researchers. Copyright © 2013 Elsevier Inc. All rights reserved.
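PARAMO's core scheduling idea, building a task dependency graph, ordering it topologically, and running independent tasks in parallel, can be sketched with the Python standard library (the paper's actual implementation uses Map-Reduce on a cluster; the task names and callables here are hypothetical):

```python
import concurrent.futures as cf
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps, workers=8):
    """tasks: name -> callable; deps: name -> set of prerequisite names.
    Executes tasks in topological order, running independent ones in
    parallel (batch-synchronous for simplicity)."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    with cf.ThreadPoolExecutor(max_workers=workers) as pool:
        while ts.is_active():
            ready = ts.get_ready()                     # deps satisfied
            futures = {pool.submit(tasks[n]): n for n in ready}
            for fut in cf.as_completed(futures):
                fut.result()                           # propagate failures
                ts.done(futures[fut])

# Example: cohort and feature construction feed cross-validation.
steps = {"cohort": lambda: None, "features": lambda: None, "cv": lambda: None}
run_pipeline(steps, {"features": {"cohort"}, "cv": {"cohort", "features"}})
```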
Application of Morphological Segmentation to Leaking Defect Detection in Sewer Pipelines
Su, Tung-Ching; Yang, Ming-Der
2014-01-01
As one of the major underground pipeline systems, sewerage is an important infrastructure in any modern city. The most common problem occurring in sewerage is leaking, whose position and failure level are typically identified through closed-circuit television (CCTV) inspection in order to facilitate the rehabilitation process. This paper proposes a novel computer vision method, morphological segmentation based on edge detection (MSED), to assist inspectors in detecting pipeline defects in CCTV inspection images. In addition to MSED, other mathematical morphology-based image segmentation methods, including the opening top-hat operation (OTHO) and the closing bottom-hat operation (CBHO), were also applied to defect detection in vitrified clay sewer pipelines. The CCTV inspection images of the sewer system in the 9th district, Taichung City, Taiwan were selected as the experimental materials. The segmentation results demonstrate that MSED and OTHO are useful for the detection of cracks and open joints, respectively, which are the typical leakage defects found in sewer pipelines. PMID:24841247
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for the TA, whose model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology proceeds in two steps. The first step decomposes the TA model into several submodels using a decomposability condition. The second step combines the individual solutions of the subproblems for the decomposed submodels via the penalty function method; a feasible solution for the entire model is derived through iterated computation of the subproblem for each submodel. The methodology is applied to flowshop and jobshop scheduling problems, and computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling.
Hajri, S; Liouane, N; Hammadi, S; Borne, P
2000-01-01
Most scheduling problems are highly complex combinatorial problems. However, stochastic methods such as genetic algorithms yield good solutions. In this paper, we present a controlled genetic algorithm (CGA) based on fuzzy logic and belief functions to solve job-shop scheduling problems. For better performance, we propose an efficient representational scheme, heuristic rules for creating the initial population, and a new methodology for mixing and computing genetic operator probabilities.
Scheduling in the Face of Uncertain Resource Consumption and Utility
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Dearden, Richard
2003-01-01
We discuss the problem of scheduling tasks that consume uncertain amounts of a resource with known capacity and where the tasks have uncertain utility. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We show that the problems are NP-complete, and we present some results that characterize the behavior of some simple heuristics over a variety of problem classes.
Scheduling Earth Observing Fleets Using Evolutionary Algorithms: Problem Description and Approach
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Morris, Robert; Clancy, Daniel (Technical Monitor)
2002-01-01
We describe work in progress concerning multi-instrument, multi-satellite scheduling. Most, although not all, Earth observing instruments currently in orbit are unique. In the relatively near future, however, we expect to see fleets of Earth observing spacecraft, many carrying nearly identical instruments. This presents a substantially new scheduling challenge. Inspired by successful commercial applications of evolutionary algorithms in scheduling domains, this paper presents work in progress regarding the use of evolutionary algorithms to solve a set of Earth observing related model problems. Both the model problems and the software are described. Since the larger problems will require substantial computation and evolutionary algorithms are embarrassingly parallel, we discuss our parallelization techniques using dedicated and cycle-scavenged workstations.
The moment problem and vibrations damping of beams and plates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamuratov, Andrey G.; Mikhailov, Igor E.; Muravey, Leonid A.
2016-06-08
Beams and plates are elements of various complex mechanical structures, for example, pipelines and aerospace platforms. That is why the problem of damping their vibrations caused by unwanted perturbations is a topical task.
Contingency rescheduling of spacecraft operations
NASA Technical Reports Server (NTRS)
Britt, Daniel L.; Geoffroy, Amy L.; Gohring, John R.
1988-01-01
Spacecraft activity scheduling has recently been a focus of attention in artificial intelligence. Several scheduling systems have been devised that more or less successfully address various aspects of the activity scheduling problem, though most of these are not yet mature, with the notable exception of NASA's ESP. Few current scheduling systems, however, make any attempt to deal fully with the problem of modifying a schedule in near-real time in the event of contingencies that may arise during schedule execution. These contingencies can include resources becoming unavailable unpredictably, a change in spacecraft conditions or environment, or the need to perform an activity not scheduled. In these cases it becomes necessary to repair an existing schedule, disrupting ongoing operations as little as possible. Normal scheduling is just a part of what must be accomplished during contingency rescheduling. A prototype system named MAESTRO was developed for spacecraft activity scheduling. MAESTRO is briefly described, with a focus on recent work in the area of real-time contingency handling. Included is a discussion of some of the complexities of the scheduling problem and how they affect contingency rescheduling, such as temporal constraints between activities, activities that may be interrupted and continued in any of several ways, and different ways to choose a resource complement that will allow continuation of an activity. The various heuristics used in MAESTRO for contingency rescheduling are discussed, as are operational concerns such as the interaction of the scheduler with spacecraft subsystem controllers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couto, J.A.
1975-06-01
Liquid hydrocarbons contained in Argentina's Pico Truncado natural gas caused a number of serious pipeline transmission and gas processing problems. Gas del Estado has installed a series of efficient liquid removal devices at the producing fields. A flow chart of the gasoline stripping process is illustrated, as are two types of heat exchangers. This gasoline stripping (gas condensate recovery) process integrates various operations that are normally performed independently: separation of the lean condensate from the gas, stabilization of the condensate, and incorporation of the light components (products of the stabilization) into the main gas flow.
NASA Astrophysics Data System (ADS)
Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang
2017-03-01
In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Then, the convolution matrix is constructed upon the estimated results. Third, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
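The core optimization in sparse deconvolution is an l1-regularized least-squares problem over a convolution matrix built from an echo prototype. A minimal sketch using ISTA as a simpler stand-in for SALSA (the Gaussian prototype, circular boundary handling, and parameter values are assumptions for illustration):

```python
import numpy as np

def gaussian_echo(n, center, width):
    """Gaussian echo prototype (stand-in for the estimated echo model)."""
    t = np.arange(n)
    return np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

def l1_deconvolve(y, proto, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||Hx - y||^2 + lam*||x||_1 by ISTA, where the
    columns of H are circular shifts of the echo prototype."""
    n = len(y)
    H = np.column_stack([np.roll(proto, k) for k in range(n)])
    L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x - H.T @ (H @ x - y) / L                           # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

# Two overlapping echoes at samples 20 and 26 are separated in x.
proto = gaussian_echo(128, 0, 3.0)
y = 1.0 * np.roll(proto, 20) + 0.8 * np.roll(proto, 26)
x = l1_deconvolve(y, proto)
```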
NASA Astrophysics Data System (ADS)
Daude, F.; Galon, P.
2018-06-01
A Finite-Volume scheme for the numerical computations of compressible single- and two-phase flows in flexible pipelines is proposed based on an approximate Godunov-type approach. The spatial discretization is here obtained using the HLLC scheme. In addition, the numerical treatment of abrupt changes in area and network including several pipelines connected at junctions is also considered. The proposed approach is based on the integral form of the governing equations making it possible to tackle general equations of state. A coupled approach for the resolution of fluid-structure interaction of compressible fluid flowing in flexible pipes is considered. The structural problem is solved using Euler-Bernoulli beam finite elements. The present Finite-Volume method is applied to ideal gas and two-phase steam-water based on the Homogeneous Equilibrium Model (HEM) in conjunction with a tabulated equation of state in order to demonstrate its ability to tackle general equations of state. The extensive application of the scheme for both shock tube and other transient flow problems demonstrates its capability to resolve such problems accurately and robustly. Finally, the proposed 1-D fluid-structure interaction model appears to be computationally efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rieber, M.; Soo, S.L.
1977-08-01
A coal slurry pipeline system requires that the coal go through a number of processing stages before it is used by the power plant. Once mined, the coal is delivered to a preparation plant where it is pulverized to sizes between 18 and 325 mesh and then suspended in about an equal weight of water. This 50-50 slurry mixture has a consistency approximating toothpaste. It is pushed through the pipeline via electric pumping stations 70 to 100 miles apart. Flow velocity through the line must be maintained within a narrow range. For example, if a 3.5 mph design is used at 5 mph, the system must be able to withstand double the horsepower, peak pressure, and wear. A minimum flowrate must be maintained to avoid particle settling and plugging. However, in general, once a pipeline system has been designed, because of economic considerations on the one hand and design limits on the other, the flowrate is rather inflexible. Pipelines that have a slowly moving throughput and a water carrier may be subject to freezing in northern areas during periods of severe cold. One of the problems associated with slurry pipeline analyses is the lack of operating experience.
ToTem: a tool for variant calling pipeline optimization.
Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka
2018-06-26
High-throughput bioinformatics analyses of next-generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). It is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is freely available as a web application at https://totem.software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, E.R.
1983-09-01
The appendixes for the Saguaro Power Plant include the following: receiver configuration selection report; operating modes and transitions; failure modes analysis; control system analysis; computer codes and simulation models; procurement package scope descriptions; responsibility matrix; solar system flow diagram component purpose list; thermal storage component and system test plans; solar steam generator tube-to-tubesheet weld analysis; pipeline listing; management control schedule; and system list and definitions.
Fisher, Wayne W.; Greer, Brian D.; Fuhrman, Ashley M.; Querim, Angie C.
2016-01-01
Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects from one setting to the next and from one therapist to the next. With two children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. PMID:26384141
NASA Technical Reports Server (NTRS)
Golias, Mihalis M.
2011-01-01
Berth scheduling is a critical function at marine container terminals, and determining the best berth schedule depends on several factors, including the type and function of the port, its size, its location, nearby competition, and the type of contractual agreement between the terminal and the carriers. In this paper we formulate the berth scheduling problem as a bi-objective mixed-integer problem with the objectives of maximizing customer satisfaction and the reliability of the berth schedule, under the assumption that vessel handling times are stochastic parameters following a discrete and known probability distribution. A combination of an exact algorithm, a Genetic Algorithm-based heuristic, and a simulation post-Pareto analysis is proposed as the solution approach. Based on a number of experiments, it is concluded that the proposed berth scheduling policy outperforms one in which reliability is not considered.
Fisher, Wayne W; Greer, Brian D; Fuhrman, Ashley M; Querim, Angie C
2015-12-01
Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects across settings and therapists. With 2 children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. © Society for the Experimental Analysis of Behavior.
A Systematic Methodology for Verifying Superscalar Microprocessors
NASA Technical Reports Server (NTRS)
Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh
1999-01-01
We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.
NASA Astrophysics Data System (ADS)
Tomarov, G. V.; Povarov, V. P.; Shipkov, A. A.; Gromov, A. F.; Budanov, V. A.; Golubeva, T. N.
2015-03-01
Matters concerned with the efficient use of an information-analytical system for the flow-accelerated corrosion problem in setting up in-service examination of the metal of pipeline elements operating in the secondary coolant circuit of the VVER-440-based power units at the Novovoronezh NPP are considered. The principles used to select samples of pipeline elements when planning ultrasonic thickness measurements, for timely revealing of metal thinning due to flow-accelerated corrosion while reducing the total number of measurements in the condensate-feedwater path, are discussed.
NASA Astrophysics Data System (ADS)
Ramli, Razamin; Tein, Lim Huai
2016-08-01
A good work schedule can improve hospital operations by providing better coverage with appropriate staffing levels for managing nurse personnel. Hence, constructing the best possible nurse work schedule is a worthwhile effort. To this end, an improved selection operator within an Evolutionary Algorithm (EA) strategy for the nurse scheduling problem (NSP) is proposed, together with smart and efficient scheduling procedures. The performance of each potential solution, or schedule, was computed through fitness evaluation. The best-so-far solution was obtained via a special Maximax&Maximin (MM) parent selection operator embedded in the EA, fulfilling all constraints considered in the NSP.
Feasibility study for wax deposition imaging in oil pipelines by PGNAA technique.
Cheng, Can; Jia, Wenbao; Hei, Daqian; Wei, Zhiyong; Wang, Hongtao
2017-10-01
Wax deposition in pipelines is a crucial problem in the oil industry. A method based on the prompt gamma-ray neutron activation analysis technique was applied to reconstruct the image of wax deposition in oil pipelines, using the 2.223 MeV hydrogen capture gamma rays. To validate the method, both MCNP simulations and experiments were performed for wax deposits with a maximum thickness of 20 cm. The performance of the method was simulated using the MCNP code, and the experiment was conducted with a 252Cf neutron source and a LaBr3:Ce detector. A good correspondence between the simulations and the experiments was observed. The results obtained indicate that the present approach is efficient for wax deposition imaging in oil pipelines. Copyright © 2017 Elsevier Ltd. All rights reserved.
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple-resource allocation, etc., are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of a timed Petri net. The process usually produces a large Petri net that is practically unmanageable. This disadvantage, however, can be handled by a truncation technique that divides the original large Petri net into several smaller subnets, greatly reducing the complexity involved in analyzing each subnet individually. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem in a manufacturing work cell.
NASA Astrophysics Data System (ADS)
de Bruijn, Renée; Dabekaussen, Willem; Hijma, Marc; Wiersma, Ane; Abspoel-Bukman, Linda; Boeije, Remco; Courage, Wim; van der Geest, Johan; Hamburg, Marc; Harmsma, Edwin; Helmholt, Kristian; van den Heuvel, Frank; Kruse, Henk; Langius, Erik; Lazovik, Elena
2017-04-01
Due to the heterogeneity of the subsurface in the delta environment of the Netherlands, differential subsidence over short distances results in tension and subsequent wear of subsurface infrastructure, such as water and gas pipelines. Due to uncertainties in the build-up of the subsurface, however, it is unknown where this problem is most prominent. This is a problem for asset managers deciding when a pipeline needs replacement: damaged pipelines endanger security of supply and pose a significant threat to safety, yet premature replacement raises needless expenses. In both cases, costs, financial or other, are high. Therefore, an interdisciplinary research team of geotechnicians, geologists and Big Data engineers from research institutes TNO, Deltares and SkyGeo developed a stochastic model to predict differential subsidence and the probability of consequent pipeline failure at a (sub-)street level. In this project, pipeline data from company databases are combined with a stochastic geological model and information on (historical) groundwater levels and overburden material. The probability of pipeline failure is modelled by coupling a subsidence model with two separate models of pipeline behaviour under stress, using a probabilistic approach. The total length of pipelines (approximately 200,000 km operational in the Netherlands) and the complexity of the model chain needed to calculate a probability of failure result in large computational challenges, as massive evaluation of possible scenarios is required to reach the necessary level of confidence. To cope with this, a scalable computational infrastructure has been developed, composing a model workflow whose components have a heterogeneous technological basis. Three pilot areas covering an urban, a rural and a mixed environment, characterised by different groundwater-management strategies and different overburden histories, are used to evaluate the differences in subsidence, and the accompanying uncertainties, under different types of land use. Furthermore, the model provides results with a measure of reliability and determines which input factor contributes most to the uncertainty. The model results can be validated and further improved using InSAR data for these pilot areas, by iteratively revising model parameters. The model is designed so that it can be applied to the whole of the Netherlands. By assessing differential subsidence and its effect on pipelines over time, the model helps establish when and where maintenance is due, indicating which areas are particularly vulnerable, thereby increasing safety and lowering maintenance costs.
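The massive scenario evaluation described above can be illustrated with a toy Monte Carlo pass over a single pipe segment: sample an uncertain subsurface, push it through a simple stress relation, and count failures. All distributions, parameter values, the strain model and the failure threshold below are invented placeholders, not the TNO/Deltares model chain.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                    # scenarios for one pipe segment

# Hypothetical uncertain input: peat fraction drives the subsidence rate.
peat_fraction = rng.beta(2, 5, N)              # stand-in for the geological model
subsidence_mm_yr = 2.0 + 15.0 * peat_fraction + rng.normal(0.0, 1.0, N)

# Differential settlement over a 10 m joint-to-joint span after 30 years.
diff_settlement_m = np.abs(rng.normal(0.0, 0.3, N)) * subsidence_mm_yr * 30 / 1000

# Placeholder stress model: bending strain proportional to settlement / span.
strain = diff_settlement_m / 10.0
p_fail = np.mean(strain > 0.008)               # hypothetical failure strain
print(f"estimated probability of failure: {p_fail:.4f}")
```

Repeating such a pass for every segment of a 200,000 km network, with far richer physics, is what drives the need for the scalable infrastructure the abstract describes.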
ERIC Educational Resources Information Center
Ehrman, Sheryl H.; Castellanos, Patricia; Dwivedi, Vivek; Diemer, R. Bertrum
2007-01-01
A particle technology design problem incorporating population balance modeling was developed and assigned to senior and first-year graduate students in a Particle Science and Technology course. The problem focused on particle collection, with a pipeline agglomerator, Cyclone, and baghouse comprising the collection system. The problem was developed…
Scheduling in the Face of Uncertain Resource Consumption and Utility
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Frank, Jeremy; Dearden, Richard
2003-01-01
We discuss the problem of scheduling tasks that consume a resource with known capacity, where the tasks have varying utility. We consider problems in which the resource consumption and utility of each activity are described by probability distributions. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We first show that while some of these problems are NP-complete, others are only NP-hard. We then describe various heuristic search algorithms to solve these problems, along with their drawbacks. Finally, we present empirical results that characterize the behavior of these heuristics over a variety of problem classes.
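As a sketch of how a candidate schedule can be scored under these assumptions, the following Monte Carlo estimator samples each task's consumption and accrues utility only while the capacity holds; the normal distributions and the stop-at-capacity rule are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(tasks, capacity, n_samples=50_000):
    """tasks: ordered list of (mean_use, sd_use, utility); a task yields its
    utility only if cumulative consumption stays within capacity."""
    total = np.zeros(n_samples)
    used = np.zeros(n_samples)
    alive = np.ones(n_samples, dtype=bool)
    for mu_c, sd_c, util in tasks:
        use = rng.normal(mu_c, sd_c, n_samples)
        alive &= (used + use) <= capacity   # once over budget, nothing accrues
        used += np.where(alive, use, 0.0)
        total += np.where(alive, util, 0.0)
    return total.mean()

schedule = [(3.0, 1.0, 5.0), (4.0, 2.0, 9.0), (2.0, 0.5, 3.0)]
print(expected_utility(schedule, capacity=10.0))
```

A heuristic search would call such an evaluator on each candidate ordering and keep those whose estimate clears the required lower bound.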
A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan
NASA Astrophysics Data System (ADS)
Rameshkumar, K.; Rajendran, C.
2018-02-01
In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems from the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems.
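One common way to make PSO discrete for permutation problems, sketched below, treats a particle as a job permutation and its velocity as swap moves pulling it toward the personal and global bests. This is a generic textbook variant given for illustration; the paper's schedule builder and exact operators are not reproduced.

```python
import random

def swaps_toward(current, target):
    """Swap sequence transforming `current` into `target`."""
    cur, seq = current[:], []
    for i in range(len(cur)):
        if cur[i] != target[i]:
            j = cur.index(target[i])
            seq.append((i, j))
            cur[i], cur[j] = cur[j], cur[i]
    return seq

def move(particle, pbest, gbest, c1=0.5, c2=0.5):
    """Apply, with the given probabilities, swaps pulling toward both bests;
    any subset of transpositions still yields a valid permutation."""
    new = particle[:]
    for i, j in swaps_toward(new, pbest):
        if random.random() < c1:
            new[i], new[j] = new[j], new[i]
    for i, j in swaps_toward(new, gbest):
        if random.random() < c2:
            new[i], new[j] = new[j], new[i]
    return new

print(move([2, 0, 1, 3], [0, 1, 2, 3], [3, 1, 0, 2]))
```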
ERIC Educational Resources Information Center
Acholonu, Omogbemiboluwa I.
2011-01-01
The underrepresentation of women in Information Technology (IT) leadership positions is a problem that has affected the industry since its onset. It is posited that women face various barriers in advancing through the IT pipeline. This study reviews the findings of other qualitative and quantitative studies to identify any recurring patterns that…
Demons registration for in vivo and deformable laser scanning confocal endomicroscopy.
Chiew, Wei-Ming; Lin, Feng; Seah, Hock Soon
2017-09-01
A critical effect found in noninvasive in vivo endomicroscopic imaging modalities is image distortions due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Apart from that, the sequential flow of the imaging-processing pipeline restricts real-time adjustments due to the unavailability of information obtainable only from subsequent stages. To solve these problems, we propose an approach to render Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information including registration forces and partial renderings of the captured data are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Remarkably, this algorithm design is for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.
Bioinformatic pipelines in Python with Leaf
2013-01-01
Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce pipeline formality on top of a dynamic development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines, with code that can be transparently inserted into the user's Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315
Aerial surveillance for gas and liquid hydrocarbon pipelines using a flame ionization detector (FID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riquetti, P.V.; Fletcher, J.I.; Minty, C.D.
1996-12-31
A novel application for the detection of airborne hydrocarbons has been successfully developed by means of a highly sensitive, fast-responding Flame Ionization Detector (FID). The traditional way to monitor pipeline leaks has been by ground crews using specific sensors or by airborne crews highly trained to observe anomalies associated with leaks during periodic surveys of the pipeline right-of-way. The goal has been to detect leaks in a fast and cost-effective way before the associated spill becomes a costly and hazardous problem. This paper describes a leak detection system combined with a global positioning system (GPS) and a computerized data output designed to pinpoint the presence of hydrocarbons in the air space of the pipeline's right-of-way. Fixed-wing aircraft as well as helicopters have been successfully used as airborne platforms. Natural gas, crude oil and finished-product pipelines in Canada and the US have been surveyed using this technology, with excellent correlation between the aircraft detection and in situ ground detection. The information obtained is processed with proprietary software and reduced to simple coordinates. Results are transferred to ground crews to effect the necessary repairs.
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor into a representation of the original scene that should be as faithful as possible. Two modules are mainly responsible for the color-rendering accuracy of a digital camera: the first is the illuminant estimation and correction module, and the second is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. Together these two modules form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
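As an illustration of the two stages, here is a minimal numpy sketch: a gray-world illuminant correction followed by a 3x3 color matrix. The matrix values are placeholders; the paper tunes and automatically selects both stages from image content rather than fixing them as below.

```python
import numpy as np

def gray_world(raw):
    # Gray-world assumption: scale channels so the image mean is neutral.
    means = raw.reshape(-1, 3).mean(axis=0)
    return np.clip(raw * (means.mean() / means), 0.0, 1.0)

def apply_color_matrix(img, M):
    # Map sensor RGB to a standard color space with a 3x3 matrix.
    return np.clip(img.reshape(-1, 3) @ M.T, 0.0, 1.0).reshape(img.shape)

M = np.array([[ 1.6, -0.4, -0.2],   # hypothetical sensor-to-sRGB matrix
              [-0.3,  1.5, -0.2],
              [-0.1, -0.5,  1.6]])
raw = np.random.default_rng(1).random((4, 4, 3))   # stand-in RAW image
corrected = apply_color_matrix(gray_world(raw), M)
print(corrected.shape)
```

An error in the first stage propagates through the matrix, which is exactly the amplification effect the adaptive matrix module is optimized to dampen.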
Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Constraints
NASA Astrophysics Data System (ADS)
Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.
2018-01-01
The Vehicle Routing Problem (VRP) is an important element of many logistic systems which involve the routing and scheduling of vehicles from a depot to a set of customer nodes. This is a hard combinatorial optimization problem with the objective of finding an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers. It is required that these vehicles return to the depot after serving the customers' demand. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery in the planning horizon. The goal is to determine the scheduling of fleet and drivers and the routing policies of the vehicles. The objective is to minimize the overall cost of all routes over the planning horizon. We model the problem as a linear mixed integer program and develop a combination of heuristics and an exact method for solving the model.
Open shop scheduling problem to minimize total weighted completion time
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian
2017-01-01
A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
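The classic single-machine weighted-shortest-processing-time (WSPT) rule that the WSPTB heuristic builds on is easy to state in code: order jobs by descending weight-to-processing-time ratio. The block construction of WSPTB itself is not reproduced here.

```python
def wspt_order(jobs):
    """jobs: list of (name, processing_time, weight); WSPT sequence."""
    return sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)

def total_weighted_completion(seq):
    t = total = 0
    for _, p, w in seq:
        t += p            # completion time of this job
        total += w * t
    return total

jobs = [("J1", 4, 2), ("J2", 1, 3), ("J3", 3, 3)]
seq = wspt_order(jobs)
print([j[0] for j in seq], total_weighted_completion(seq))  # ['J2','J3','J1'] 31
```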
Empirical results on scheduling and dynamic backtracking
NASA Technical Reports Server (NTRS)
Boddy, Mark S.; Goldman, Robert P.
1994-01-01
At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities; 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints; and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least among those that have appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions, and plans for future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepage, A.
Despite scheduling complications caused by annual monsoons, the Yadana project to bring offshore Myanmar gas ashore and into neighboring Thailand has met its first-gas target of July 1, 1998. The Yadana field is a dry-gas reservoir in the reef upper Birman limestone formation at 1,260 m and a pressure of 174 bara (approximately 2,500 psi). It extends nearly 7 km (west to east) and 10 km (south to north). The water-saturated reservoir gas contains mostly methane mixed with CO2 and N2. No production of condensate is anticipated. The Yadana field contains certified gas reserves of 5.7 tcf, calculated on the basis of 2D and 3D seismic data-acquisition campaigns and of seven appraisal wells. The paper discusses early interest, development sequences, offshore platforms, the gas-export pipeline, safety, environmental steps, and schedule constraints.
Sensitivity and bias under conditions of equal and unequal academic task difficulty.
Reed, Derek D; Martens, Brian K
2008-01-01
We conducted an experimental analysis of children's relative problem-completion rates across two workstations under conditions of equal (Experiment 1) and unequal (Experiment 2) problem difficulty. Results were described using the generalized matching equation and were evaluated for degree of schedule versus stimulus control. Experiment 1 involved a symmetrical choice arrangement in which the children could earn points exchangeable for rewards contingent on correct math problem completion. Points were delivered according to signaled variable-interval schedules at each workstation. For 2 children, relative rates of problem completion appeared to have been controlled by the schedule requirements in effect and matched relative rates of reinforcement, with sensitivity values near 1 and bias values near 0. Experiment 2 involved increasing the difficulty of math problems at one of the workstations. Sensitivity values for all 3 participants were near 1, but a substantial increase in bias toward the easier math problems was observed. This bias was possibly associated with responding at the more difficult workstation coming under stimulus control rather than schedule control.
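The generalized matching equation used above is standardly written as

\[ \log\!\left(\frac{B_1}{B_2}\right) = a \,\log\!\left(\frac{R_1}{R_2}\right) + \log b \]

where \(B_1/B_2\) is the ratio of response (problem-completion) rates at the two workstations, \(R_1/R_2\) is the ratio of obtained reinforcement rates, \(a\) is the sensitivity of behavior to the reinforcement ratio, and \(\log b\) is the bias. Sensitivity near 1 with bias near 0, as in Experiment 1, indicates responding tracked the schedules; the shift in \(\log b\) in Experiment 2 quantifies the preference for the easier problems.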
Analysis of Feeder Bus Network Design and Scheduling Problems
Almasi, Mohammad Hadi; Karim, Mohamed Rehan
2014-01-01
A growing concern for public transit is its inability to shift passengers from private to public transport. In order to overcome this problem, a more developed feeder bus network and matched schedules will play important roles. The present paper reviews studies of the Feeder Bus Network Design and Scheduling Problem (FNDSP) based on three distinctive parts of the FNDSP setup, namely, problem description, problem characteristics, and solution approaches. The problems consist of different subproblems including data preparation, feeder bus network design, route generation, and feeder bus scheduling. Subsequently, descriptive analysis and classification of previous works are presented to highlight the main characteristics and solution methods. Finally, some issues and trends for future research are identified. This paper deals with the FNDSP from both strategic and tactical perspectives and also contributes to the unification of the field, which might be a useful complement to the few existing reviews. PMID:24526890
A genetic algorithm-based approach to flexible flow-line scheduling with variable lot sizes.
Lee, I; Sikora, R; Shaw, M J
1997-01-01
Genetic algorithms (GAs) have been used widely for such combinatorial optimization problems as the traveling salesman problem (TSP), the quadratic assignment problem (QAP), and job shop scheduling. In all of these problems there is usually a well-defined representation which GAs use to solve the problem. We present a novel approach for solving two related problems, lot sizing and sequencing, concurrently using GAs. The essence of our approach lies in using a unified representation for the information about both the lot sizes and the sequence, enabling GAs to evolve the chromosome by replacing primitive genes with good building blocks. In addition, a simulated annealing procedure is incorporated to further improve the performance. We evaluate the performance of this approach on flexible flow line scheduling with variable lot sizes for an actual manufacturing facility, comparing it to such alternative approaches as pairwise exchange improvement, tabu search, and simulated annealing procedures. The results show the efficacy of this approach for flexible flow line scheduling.
NASA Astrophysics Data System (ADS)
Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu
2015-12-01
For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most research has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real-world problems cannot be fully captured by considering only a single objective for optimization. Considering the multi-objective IPPS (MOIPPS) problem is therefore inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives such as makespan, total machine load, and total tardiness. A fixed-size external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. A C-metric-based method has been used to compare the results with other algorithms. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
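For reference, the crowding-distance computation used to maintain such an archive, in its standard NSGA-II form (the paper's exact archive policy is not reproduced):

```python
def crowding_distance(points):
    """points: list of objective vectors; returns one distance per point."""
    n, m = len(points), len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep extremes
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (points[order[pos + 1]][k]
                        - points[order[pos - 1]][k]) / (hi - lo)
    return dist

# A full archive drops its most crowded (smallest-distance) member first.
print(crowding_distance([(1, 9), (3, 6), (4, 4), (8, 1)]))
```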
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that significantly reduce the computational complexity. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.
NASA Astrophysics Data System (ADS)
Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang
2017-04-01
Magnetic anomalies produced by underground ferromagnetic pipelines through the polarization of the Earth's magnetic field are used to obtain information on the location, burial depth and other parameters of pipelines. In order to achieve fast inversion and interpretation of measured data, it is necessary to develop a fast and stable forward method. Magnetic dipole reconstruction (MDR), as a kind of integration numerical method, is well suited for simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved with the existing segmentation method. In practical measurement, however, the depth of an underground pipe is uncertain, and for shallow-buried pipelines the existing segmentation may generate significant errors. This paper addresses this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiments. Second, a new variable-interval section segmentation is proposed based on the existing segmentation; it allows the MDR method to obtain simulation results quickly while ensuring the accuracy of models at different depths. Finally, measured data are inverted based on the new segmentation method. The result proves that inversion based on the new segmentation can achieve fast and accurate recovery of the depth parameters of underground pipes without being limited by pipeline depth.
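The elementary field that MDR superposes is the standard magnetostatic dipole expression,

\[ \mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}}{|\mathbf{r}|^{3}}, \]

so the anomaly of a segmented pipeline is \(\mathbf{B}_{\text{total}}(\mathbf{r}) = \sum_i \mathbf{B}_i(\mathbf{r}-\mathbf{r}_i)\) over the dipoles placed by the segmentation. The segmentation question is where to put the \(\mathbf{r}_i\) and how large each moment \(\mathbf{m}_i\) should be: coarse sections suffice when the observation distance is large (deep pipes), while shallow pipes need finer sections near the sensor, which motivates the variable-interval scheme proposed above.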
NASA Astrophysics Data System (ADS)
Konno, Yohko; Suzuki, Keiji
This paper describes an approach to developing a general-purpose solution algorithm for large-scale job-shop scheduling problems (JSP) using Local Clustering Organization (LCO), a new solution method. Building on the effective large-scale scheduling performance of LCO in earlier studies, we examine how to solve the JSP while maintaining the stability that induces better solutions. To improve solution performance for the JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO improves the scheduling evaluation efficiently by clustering within a parallel search that extends over plural machines. Results verified by applying the extended LCO to problems of various scales show that it minimizes makespan and yields stable performance.
Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport
NASA Technical Reports Server (NTRS)
Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon
2017-01-01
This study aims to develop a controllers' decision support tool for departure and surface management of ICN. Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and the airspace of Korea. For surface traffic optimization, a multiple-runway scheduling problem and a taxi scheduling problem were formulated as two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by departing flights from multiple runways, and the runway crossing constraints due to the taxi route configuration specific to ICN, were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple-runway scheduling problem may be computationally intensive, the computation times and delay costs of different solving methods were compared for practical implementation. This research was a collaboration between the Korea Aerospace Research Institute (KARI) and the National Aeronautics and Space Administration (NASA).
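A toy of the runway-sequencing part of such a MILP, using big-M disjunctive ordering constraints, is sketched below with the PuLP package. The uniform separation, the three flights, and the delay objective are invented for illustration, not the paper's model (which covers multiple runways, MIT constraints at a shared fix, and runway crossings).

```python
import pulp

etds = {"F1": 0, "F2": 2, "F3": 4}   # hypothetical earliest take-off times (min)
sep, M = 2, 1000                      # uniform separation (min), big-M constant
flights = list(etds)

prob = pulp.LpProblem("runway_sequencing", pulp.LpMinimize)
t = {f: pulp.LpVariable(f"t_{f}", lowBound=etds[f]) for f in flights}
# y[i, j] = 1 if flight i departs before flight j
y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in flights for j in flights if i < j}

for (i, j), v in y.items():
    prob += t[j] >= t[i] + sep - M * (1 - v)   # i departs before j
    prob += t[i] >= t[j] + sep - M * v         # j departs before i

prob += pulp.lpSum(t[f] - etds[f] for f in flights)  # total departure delay
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: t[f].value() for f in flights})
```

The computational concern the abstract raises is visible even here: the binary ordering variables grow quadratically with the number of flights, which is why the choice of solving method matters in practice.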
Multi-Objective Scheduling for the Cluster II Constellation
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Giuliano, Mark
2011-01-01
This paper describes the application of the MUSE multi-objective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.
Cloud CPFP: a shotgun proteomics data analysis pipeline using cloud and high performance computing.
Trudgian, David C; Mirzaei, Hamid
2012-12-07
We have extended the functionality of the Central Proteomics Facilities Pipeline (CPFP) to allow use of remote cloud and high performance computing (HPC) resources for shotgun proteomics data processing. CPFP has been modified to include modular local and remote scheduling for data processing jobs. The pipeline can now be run on a single PC or server, a local cluster, a remote HPC cluster, and/or the Amazon Web Services (AWS) cloud. We provide public images that allow easy deployment of CPFP in its entirety in the AWS cloud. This significantly reduces the effort necessary to use the software, and allows proteomics laboratories to pay for compute time ad hoc, rather than obtaining and maintaining expensive local server clusters. Alternatively the Amazon cloud can be used to increase the throughput of a local installation of CPFP as necessary. We demonstrate that cloud CPFP allows users to process data at higher speed than local installations but with similar cost and lower staff requirements. In addition to the computational improvements, the web interface to CPFP is simplified, and other functionalities are enhanced. The software is under active development at two leading institutions and continues to be released under an open-source license at http://cpfp.sourceforge.net.
Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.
Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue
2015-01-01
As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.
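The metaheuristic stage can be pictured with a generic simulated-annealing-plus-local-search skeleton like the one below; the neighborhood and cost are toy placeholders (one bus, idle-gap cost), not the paper's trip-compatibility model.

```python
import math
import random

def anneal(initial, cost, neighbor, t0=100.0, alpha=0.995, steps=20_000):
    cur, cur_cost = initial, cost(initial)
    best, best_cost = cur, cur_cost
    temp = t0
    for _ in range(steps):
        cand = neighbor(cur)
        delta = cost(cand) - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur, cur_cost = cand, cur_cost + delta   # accept move
        if cur_cost < best_cost:
            best, best_cost = cur, cur_cost
        temp *= alpha                                # cool down
    return best, best_cost

# Toy instance: order four trips (start, end) for one bus, minimizing idle gaps.
trips = [(0, 3), (2, 5), (7, 9), (4, 6)]
def cost(order):
    return sum(max(0, order[i + 1][0] - order[i][1]) for i in range(len(order) - 1))
def neighbor(order):
    a, b = random.sample(range(len(order)), 2)
    new = list(order)
    new[a], new[b] = new[b], new[a]
    return new

print(anneal(trips, cost, neighbor))
```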
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events in the transport pipeline to boost locality, dynamically adjusting the particle vector size or switching from vector to single-track mode when vectorization causes only overhead. Lastly, this work requires a comprehensive study for optimizing these parameters to make the behaviour of the scheduler self-adapting; its initial results are presented here.
An Investigation of the Cryogenic Freezing of Water in Non-Metallic Pipelines
NASA Astrophysics Data System (ADS)
Martin, C. I.; Richardson, R. N.; Bowen, R. J.
2004-06-01
Pipe freezing is increasingly used in a range of industries to solve otherwise intractable pipeline maintenance and servicing problems. This paper presents the interim results from an experimental study on the deliberate freezing of polymeric pipelines. Previous and contemporary works are reviewed. The object of the current research is to confirm the feasibility of ice plug formation within a polymeric pipe as a method of isolation. Tests have been conducted on a range of polymeric pipes of various sizes. The results reported here all relate to the freezing of horizontal pipelines. In each case the process of plug formation was photographed, the frozen plug was pressure tested, and the pipe was inspected for signs of damage resulting from the freeze procedure. The time to freeze was recorded and various temperatures were logged. These tests have demonstrated that, despite the poor thermal and mechanical properties of the polymers, freezing offers a viable alternative method of isolation for polymeric pipelines.
Detection of underground pipeline based on Golay waveform design
NASA Astrophysics Data System (ADS)
Dai, Jingjing; Xu, Dazhuan
2017-08-01
The detection of underground pipelines is an important problem in the development of cities, but research on it is not yet mature. In this paper, based on the principles of waveform design in wireless communication, we design an acoustic signal detection system to locate underground pipelines. Following the principle of acoustic localization, we chose the DSP-F28335 as the development board and use DA and AD modules around this master control chip. The DA module emits a complementary Golay sequence as the probing signal. The AD module acquires data synchronously, so that the echo signals containing the position information of the target are recovered through signal processing. The test results show that the method in this paper can not only estimate the sound velocity in the soil but also locate underground pipelines accurately.
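The attraction of complementary Golay sequences for this kind of probing is that the autocorrelations of the two sequences sum to a single spike, cancelling range sidelobes. A standard recursive construction demonstrates the property below; the paper's actual sequence length and hardware mapping are not reproduced.

```python
import numpy as np

def golay_pair(order):
    """Return a complementary pair of +/-1 sequences of length 2**order."""
    a = b = np.array([1])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)                                   # length-8 pair
rxx = np.correlate(a, a, "full") + np.correlate(b, b, "full")
print(rxx)   # all zeros except the central peak of 2*N = 16
```

In the echo channel, correlating each received return against its transmitted sequence and summing the two results gives a sharp time-of-flight estimate even in reverberant soil.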
A Model for Oil-Gas Pipelines Cost Prediction Based on a Data Mining Process
NASA Astrophysics Data System (ADS)
Batzias, Fragiskos A.; Spanidis, Phillip-Mark P.
2009-08-01
This paper addresses the problems associated with the cost estimation of oil/gas pipelines during the elaboration of feasibility assessments. Techno-economic parameters, i.e., cost, length and diameter, are critical for such studies at the preliminary design stage. A methodology for the development of a cost prediction model based on a Data Mining (DM) process is proposed. The design and implementation of a Knowledge Base (KB), maintaining data collected from various disciplines of the pipeline industry, are presented. The formulation of a cost prediction equation is demonstrated by applying multiple regression analysis to data sets extracted from the KB. Following the proposed methodology, a learning context is inductively developed as background pipeline data are acquired, grouped and stored in the KB; a linear regression model then provides statistically substantial results, useful for project managers and decision makers.
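A minimal sketch of the regression step: fit cost as a linear function of length and diameter over records extracted from the KB. The five training records below are fabricated placeholders purely to make the snippet runnable.

```python
import numpy as np

# Hypothetical KB extracts: (length km, diameter in) -> cost in M$
X = np.array([[100, 20], [250, 24], [400, 30], [120, 16], [300, 36]], float)
y = np.array([80, 210, 380, 75, 330], float)

A = np.column_stack([np.ones(len(X)), X])     # prepend an intercept column
(b0, b_len, b_dia), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"cost ~ {b0:.1f} + {b_len:.3f}*length + {b_dia:.2f}*diameter")
print("prediction for 200 km, 24 in:", b0 + 200 * b_len + 24 * b_dia)
```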
Applying the vantage PDMS to jack-up drilling ships
NASA Astrophysics Data System (ADS)
Yin, Peng; Chen, Yuan-Ming; Cui, Tong-Kai; Wang, Zi-Shen; Gong, Li-Jiang; Yu, Xiang-Fen
2009-09-01
The plant design management system (PDMS) is an integrated application, built around a database, that is useful when designing complex 3-D industrial projects. It can be used to simplify the most difficult part of a subsea oil extraction project: detailed pipeline design. It can also be used to integrate the design of equipment, structures, HVAC, E-ways, as well as the detailed designs of other specialists. This article mainly examines the applicability of the Vantage PDMS database to pipeline projects involving jack-up drilling ships. It discusses the catalogue (CATA) of the pipeline, the spec-world (SPWL) of the pipeline, the bolt tables (BLTA), and so on. The article explains the main methods for CATA construction as well as problems arising in the process of construction. The authors point out matters needing attention when using the Vantage PDMS database in the design process and discuss partial solutions to these problems.
The application of artificial intelligence to astronomical scheduling problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1992-01-01
Efficient utilization of expensive space- and ground-based observatories is an important goal for the astronomical community; the cost of modern observing facilities is enormous, and the available observing time is much less than the demand from astronomers around the world. The complexity and variety of scheduling constraints and goals have led several groups to investigate how artificial intelligence (AI) techniques might help solve these kinds of problems. The earliest and most successful of these projects was started at Space Telescope Science Institute in 1987 and has led to the development of the Spike scheduling system to support the scheduling of the Hubble Space Telescope (HST). The aim of Spike at STScI is to allocate observations on timescales of days to a week, satisfying all scheduling constraints and maximizing preferences that help ensure that observations are made at optimal times. Spike has been in operational use for HST since shortly after the observatory was launched in April 1990. Although developed specifically for HST scheduling, Spike was carefully designed to provide a general framework for similar (activity-based) scheduling problems. In particular, the tasks to be scheduled are defined in the system in general terms, and no assumptions about the scheduling timescale are built in. The mechanisms for describing, combining, and propagating temporal and other constraints and preferences are quite general. The success of this approach has been demonstrated by the application of Spike to the scheduling of other satellite observatories: changes to the system are required only in the specific constraints that apply, and not in the framework itself. In particular, the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. This talk will discuss recent progress in scheduling search techniques, the lessons learned from early HST operations, the application of Spike to other problem domains, and plans for the future evolution of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
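The randomizing core of RS, a topological sort that picks uniformly among ready tasks, is compact enough to sketch; the heuristic order-to-schedule mapping is not shown.

```python
import random

def random_topological_order(tasks, edges):
    """tasks: iterable of task ids; edges: list of (pred, succ) pairs."""
    succs = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
    ready = [t for t in indeg if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop(random.randrange(len(ready)))   # the randomizing step
        order.append(t)
        for v in succs[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

print(random_topological_order("ABCDE", [("A", "C"), ("B", "C"), ("C", "D")]))
```

Because every precedence-feasible order is reachable, repeated draws explore the schedule space without the bookkeeping of crossover and mutation.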
Telluric currents: A meeting of theory and observation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boteler, D.H.; Seager, W.H.
Pipe-to-soil (P/S) potential variations resulting from telluric currents have been observed on pipelines in many locations. However, it has never been clear which parts of a pipeline will experience the worst effects. Two studies were conducted to answer this question. Distributed-source transmission line (DSTL) theory was applied to the problem of modeling geomagnetic induction in pipelines. This theory predicted that the largest P/S potential variations would occur at the ends of the pipeline. The theory also predicted that large P/S potential variations, of opposite sign, should occur on either side of an insulating flange. Independently, an observation program was conducted to determine the change in telluric-current P/S potential variations and to design counteractive measures along a pipeline in northern Canada. Observations showed that the amplitude of P/S potential fluctuations had maxima at the northern and southern ends of the pipeline. A further set of recordings around an insulating flange showed large P/S potential variations, of opposite sign, on either side of the flange. Agreement between the observations and theoretical predictions was remarkable. While the observations confirmed the theory, the theory explains how P/S potential variations are produced by telluric currents and provides the basis for the design of cathodic protection systems for pipelines that can counteract any adverse telluric effects.
Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline
Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur
2010-01-01
Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed the technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline that permits high-dynamic-range (HDR) spectral imaging, extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. The data are provided to the community in an image database for further research.
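The HDR side of such a pipeline typically merges differently exposed captures into radiance with SNR-style weights; the hat weighting and the two-exposure setup below are generic textbook choices, not the paper's calibrated implementation.

```python
import numpy as np

def hdr_merge(frames, exposures):
    """frames: arrays scaled to [0, 1]; exposures: matching exposure times."""
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for z, t_exp in zip(frames, exposures):
        w = np.clip(1.0 - np.abs(2.0 * z - 1.0), 1e-3, None)  # favor mid-tones
        num += w * z / t_exp                                   # radiance estimate
        den += w
    return num / den

rng = np.random.default_rng(0)
short, long_ = rng.random((8, 8)), rng.random((8, 8))  # one filter-array channel
print(hdr_merge([short, long_], [1 / 500, 1 / 60]).shape)
```

Per-channel merging is also where energy balance matters: channels whose filters pass little energy are systematically darker and noisier, the kind of imbalance the abstract reports addressing.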
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kern, J.J.
1978-01-01
The recently completed 800-mile trans-Alaska pipeline is reviewed from the perspective of its first six months of successful operation. Because of the many environmental and political constraints, the $7.7 billion project is viewed as a triumph of both engineering and capitalism. Design problems were imposed by the harsh climate and terrain and by constant public and bureaucratic monitoring. Specifications are reviewed for the pipes, valves, river crossings, pump stations, control stations, and the terminal at Valdez, where special ballast treatment and a vapor-recovery system were required to protect the harbor's water and air quality. The article outlines operating procedures and contingency planning for the pipeline and terminal.
Slatter, P T
2001-01-01
The need for the design engineer to have a sound basis for designing sludge pumping and pipelining plant is becoming more critical. This paper examines both a traditional textbook approach and one of the latest approaches from the literature, and compares them with experimental data. The pipelining problem can be divided into the following main areas: rheological characterisation and laminar, transitional and turbulent flow; each is addressed in turn. Experimental data for a digested sludge tested in large pipes are analysed and compared with the two theoretical approaches. Discussion centres on the differences between the two methods and their degree of agreement with the data. It is concluded that the new approach has merit and can be used for practical design.
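As a flavor of the rheology-to-hydraulics step both approaches share, the laminar pressure gradient for a power-law sludge follows from the Rabinowitsch-Mooney relation; the parameter values below are invented for illustration, and real digested sludge may call for a yield-stress (e.g. Herschel-Bulkley) model instead.

```python
# Power-law (Ostwald-de Waele) fluid in laminar pipe flow:
#   tau_w = K * ((8*V/D) * (3*n + 1) / (4*n))**n,  dp/dx = 4 * tau_w / D
K, n = 3.0, 0.6    # consistency index (Pa.s^n), flow-behaviour index (invented)
D, V = 0.2, 1.5    # pipe diameter (m), mean velocity (m/s)

tau_w = K * ((8 * V / D) * (3 * n + 1) / (4 * n)) ** n   # wall shear stress (Pa)
dp_dx = 4 * tau_w / D                                     # pressure gradient (Pa/m)
print(f"tau_w = {tau_w:.1f} Pa, dp/dx = {dp_dx:.0f} Pa/m")
```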
Pacific Northwest Storms Situation Report # 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2006-12-17
Significant progress has been made in restoring power to customers in the Pacific Northwest region. Currently, 468,200 customers, including those in Canada, remain without power. This is down from the 1.8 million customers who lost power following severe wind and snow storms on December 14-15, 2006. The customers without power represent about 16 percent of customers in the affected utility service areas of Oregon and Washington. The Olympic pipeline reports that the pipeline is operational; however, pipeline throughput remains reduced since one substation along the line remains without power. Complete power restoration is expected later today. There are no reports of problems regarding fuel distribution and production.
Scheduling Future Water Supply Investments Under Uncertainty
NASA Astrophysics Data System (ADS)
Huskova, I.; Matrosov, E. S.; Harou, J. J.; Kasprzyk, J. R.; Reed, P. M.
2014-12-01
Uncertain hydrological impacts of climate change, population growth and institutional changes pose a major challenge to the planning of water supply systems. Planners seek not only optimal portfolios of supply and demand management schemes but also when to activate assets, whilst considering many system goals and plausible futures. Incorporating scheduling into the planning-under-uncertainty problem strongly increases its complexity. We investigate approaches to scheduling with many-objective heuristic search. We apply a multi-scenario many-objective scheduling approach to the Thames River basin water supply system planning problem in the UK. Decisions include which new supply and demand schemes to implement, at what capacity, and when. The impact of different system uncertainties on scheme implementation schedules is explored, i.e. how the choice of future scenarios affects the search process and its outcomes. The activation of schemes is influenced by the occurrence of extreme hydrological events in the ensemble of plausible scenarios, among other factors. The approach and results are compared with a previous study that addressed only the portfolio problem (without scheduling).
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
Blood Glucose Levels and Problem Behavior
ERIC Educational Resources Information Center
Valdovinos, Maria G.; Weyand, David
2006-01-01
The relationship between varying blood glucose levels and problem behavior during daily scheduled activities was examined. Prior research has shown that differing blood glucose levels can affect behavior and mood. Results of this…
Prediction of wax buildup in 24 inch cold, deep sea oil loading line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asperger, R.G.; Sattler, R.E.; Tolonen, W.J.
1981-10-01
When designing pipelines for cold environments, it is important to know how to predict potential problems due to wax deposition on the pipeline's inner surface. The goal of this work was to determine the rate of wax buildup and the maximum, equilibrium wax thickness for a North Sea field loading line. The experimental techniques and results used to evaluate the waxing potential of the crude oil (B) are described. Also illustrated is the theoretical model used to predict the maximum wax deposit thickness in the crude oil (B) loading pipeline at controlled temperatures of 40 F (4.4 C) and 100 F (38 C). Included is a recommended procedure for using hot oil at the end of a tanker loading period in order to dewax the crude oil (B) line. This technique would give maximum heating of the pipeline and should be followed by shutting the hot oil into the pipeline at the end of the loading cycle, providing a hot-oil soak to help soften existing wax.
Planning as a Precursor to Scheduling for Space Station Payload Operations
NASA Technical Reports Server (NTRS)
Howell, Eric; Maxwell, Theresa
1995-01-01
Contemporary schedulers attempt to solve the problem of best fitting a set of activities into an available timeframe while still satisfying the necessary constraints. This approach produces results which are optimized for the region of time the scheduler is able to process, satisfying the near-term goals of the operation. In general, the scheduler is not able to reason about the activities which precede or follow the window into which it is placing activities. This creates a problem for operations which are composed of many activities spanning long durations (which exceed the scheduler's reasoning horizon), such as the continuous operations environment for payload operations on the Space Station. Not only must the near-term scheduling objectives be met, but somehow the results of near-term scheduling must be made to support the attainment of long-term goals.
Spike: AI scheduling for Hubble Space Telescope after 18 months of orbital operations
NASA Technical Reports Server (NTRS)
Johnston, Mark D.
1992-01-01
This paper is a progress report on the Spike scheduling system, developed by the Space Telescope Science Institute for long-term scheduling of Hubble Space Telescope (HST) observations. Spike is an activity-based scheduler which exploits artificial intelligence (AI) techniques for constraint representation and for scheduling search. The system has been in operational use since shortly after HST launch in April 1990. Spike was adopted for several other satellite scheduling problems; of particular interest was the demonstration that the Spike framework is sufficiently flexible to handle both long-term and short-term scheduling, on timescales of years down to minutes or less. We describe the recent progress made in scheduling search techniques, the lessons learned from early HST operations, and the application of Spike to other problem domains. We also describe plans for the future evolution of the system.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems, the assignment problem and the scheduling problem. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model to solve the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
Urbanowicz, Ryan J.; Granizo-Mackenzie, Ambrose; Moore, Jason H.
2014-01-01
Michigan-style learning classifier systems (M-LCSs) represent an adaptive and powerful class of evolutionary algorithms which distribute the learned solution over a sizable population of rules. However, their application to complex real-world data mining problems, such as genetic association studies, has been limited. Traditional knowledge discovery strategies for M-LCS rule populations involve sorting and manual rule inspection. While this approach may be sufficient for simpler problems, the confounding influence of noise and the need to discriminate between predictive and non-predictive attributes call for additional strategies. Additionally, tests of significance must be adapted to M-LCS analyses in order to make them a viable option within fields that require such analyses to assess confidence. In this work we introduce an M-LCS analysis pipeline that combines uniquely applied visualizations with objective statistical evaluation for the identification of predictive attributes and reliable rule generalizations in noisy single-step data mining problems. This work considers an alternative paradigm for knowledge discovery in M-LCSs, shifting the focus from individual rules to a global, population-wide perspective. We demonstrate the efficacy of this pipeline applied to the identification of epistasis (i.e., attribute interaction) and heterogeneity in noisy simulated genetic association data. PMID:25431544
Detection of Two Buried Cross Pipelines by Observation of the Scattered Electromagnetic Field
NASA Astrophysics Data System (ADS)
Mangini, Fabio; Di Gregorio, Pietro Paolo; Frezza, Fabrizio; Muzi, Marco; Tedeschi, Nicola
2015-04-01
In this work we present a numerical study on the effects that can be observed in the electromagnetic scattering of a plane wave due to the presence of two crossed pipelines buried in a half-space occupied by cement. The pipeline, supposed to be used for water conveyance, is modeled as a cylindrical shell made of metallic or poly-vinyl chloride (PVC) material. In order to make the model simpler, the pipelines are assumed to run parallel to the air-cement interface on two different parallel planes; moreover, initially we suppose that the two tubes make an angle of 90 degrees. We consider a circularly-polarized plane wave impinging normally to the interface between air and the previously-mentioned medium, which excites the structure in order to determine the most useful configuration in terms of scattered-field sensitivity. To perform the study, a commercially available simulator which implements the Finite Element Method was adopted. A preliminary frequency sweep allows us to choose the most suitable operating frequency depending on the dimensions of the commercial pipeline cross-section. We monitor the three components of the scattered electric field along a line just above the interface between the two media. The electromagnetic properties of the materials employed in this study are taken from the literature and, since a frequency-domain technique is adopted, no further approximation is needed. Once the ideal problem has been studied, i.e., the orthogonal and tangential scenarios, we further complicate the model by considering different crossing angles and distances between the tubes, in the two cases of PVC and metallic material. The results obtained in these cases are compared with those of the initial problem with the goal of determining the scattered-field dependence on the geometrical characteristics of the cross between two pipelines. One practical application of this study in the field of Civil Engineering may be the use of ground penetrating radar (GPR) techniques to monitor the fouling conditions of water pipelines without the need to intervene destructively on the structure. Acknowledgements: This work is a contribution to COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar".
Periodic Heterogeneous Vehicle Routing Problem With Driver Scheduling
NASA Astrophysics Data System (ADS)
Mardiana Panggabean, Ellis; Mawengkang, Herman; Azis, Zainal; Filia Sari, Rina
2018-01-01
The paper develops a model for the optimal management of logistic delivery of a given commodity. The company has different types of vehicles with different capacities to deliver the commodity to customers. The problem is then called the Periodic Heterogeneous Vehicle Routing Problem (PHVRP). The goal is to schedule the deliveries according to feasible combinations of delivery days and to determine the scheduling of the fleet and drivers and the routing policies of the vehicles. The objective is to minimize the sum of the costs of all routes over the planning horizon. We propose a combined approach of a heuristic algorithm and an exact method to solve the problem.
Strategic Gang Scheduling for Railroad Maintenance
DOT National Transportation Integrated Search
2012-08-14
We address the railway track maintenance scheduling problem. The problem stems from the : significant percentage of the annual budget invested by the railway industry for maintaining its railway : tracks. The process requires consideration of human r...
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness and the makespan for one or many machines. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
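The scheme can be rendered abstractly as follows; project, solve, distance, and evaluate are hypothetical callables standing in for the paper's formal constructions:

    def approximate_via_nearest_instance(instance, solvable_families,
                                         distance, evaluate):
        # solvable_families: (project, solve) pairs, where project(instance)
        # maps the instance onto the nearest member of a polynomially or
        # pseudopolynomially solvable class and solve() schedules it optimally.
        best_schedule, best_dist = None, float("inf")
        for project, solve in solvable_families:
            nearby = project(instance)         # closest solvable instance
            d = distance(instance, nearby)     # metric between instances
            if d < best_dist:
                best_dist = d
                best_schedule = solve(nearby)  # optimal for the nearby instance
        # The distance yields a bound on the absolute error incurred by
        # applying the resulting schedule to the original instance.
        return best_schedule, evaluate(best_schedule), best_dist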
Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning
NASA Technical Reports Server (NTRS)
Drummond, Mark; Fox, Mark; Tate, Austin; Zweben, Monte
1992-01-01
The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
Uncertainty management by relaxation of conflicting constraints in production process scheduling
NASA Technical Reports Server (NTRS)
Dorn, Juergen; Slany, Wolfgang; Stary, Christian
1992-01-01
Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated annealing (SA) is a widely used metaheuristic that was inspired by the annealing process of recrystallization of metals, and its efficiency is highly affected by the annealing schedule. In this paper, we present an empirical work to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA; thus, we propose the best found annealing schedule based on the post hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions and simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides a good quality of solution.
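For concreteness, a minimal SA for the symmetric TSP with a geometric annealing schedule; the parameter values here are placeholders, not the schedule the paper recommends:

    import math, random

    def tour_length(tour, dist):
        n = len(tour)
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    def sa_tsp(dist, t0=100.0, alpha=0.95, moves_per_temp=200, t_min=1e-3):
        n = len(dist)
        tour = list(range(n))
        random.shuffle(tour)
        cost = tour_length(tour, dist)
        t = t0
        while t > t_min:
            for _ in range(moves_per_temp):
                i, j = sorted(random.sample(range(n), 2))
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
                delta = tour_length(cand, dist) - cost
                # Accept improvements always, uphill moves with Boltzmann probability.
                if delta < 0 or random.random() < math.exp(-delta / t):
                    tour, cost = cand, cost + delta
            t *= alpha   # geometric cooling: the annealing schedule under study
        return tour, cost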
Guidance and Control Software,
1980-05-01
commitments of function, cost, and schedule. The phrase "software engineering" was intended to contrast with the phrase "computer science"; the latter aims... the software problems of cost, delivery schedule, and quality were gradually being recognized at the highest management levels. Thus, in a project... schedule dates. Although the analysis of software problems indicated that the entire software development process (figure 1) needed new methods, only...
High performance techniques for space mission scheduling
NASA Technical Reports Server (NTRS)
Smith, Stephen F.
1994-01-01
In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh
This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches to solve the SBS problem. Based on the factor graph framework, the soft information, describing the probability that each satellite will broadcast information to a terminal at a specific time slot, is exchanged among the local processors in the proposed framework via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also enjoys a low complexity suitable for integrated-circuit implementation.
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p processor system and a series-parallel precedence graph with n constituent tasks, an O(np^2) algorithm is provided that finds the optimal assignment for the response time optimization problem; an algorithm is also given that finds the assignment optimizing the constrained throughput in O(np^2 log p) time. Special cases of linear, independent, and tree graphs are also considered.
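For the linear-chain special case, the response-time problem admits a simple dynamic program, assuming pipeline response is the sum of stage times and throughput is set by the slowest stage; time[i][p] tables are assumed given (index 0 unused), and the paper's series-parallel algorithm is more involved than this sketch:

    def assign_processors_chain(time, P, max_stage_time):
        # time[i][p]: response time of stage i on p processors (p = 1..P).
        # max_stage_time = 1/throughput requirement: every stage must meet it.
        n = len(time)
        INF = float("inf")
        # f[i][q]: best total response of stages i..n-1 using q processors.
        f = [[INF] * (P + 1) for _ in range(n + 1)]
        for q in range(P + 1):
            f[n][q] = 0.0
        for i in range(n - 1, -1, -1):
            for q in range(1, P + 1):
                for p in range(1, q + 1):
                    t = time[i][p]
                    if t <= max_stage_time:
                        f[i][q] = min(f[i][q], t + f[i + 1][q - p])
        return f[0][P]   # INF if the throughput requirement cannot be met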
A modified ant colony optimization for the grid jobs scheduling problem with QoS requirements
NASA Astrophysics Data System (ADS)
Pu, Xun; Lu, XianLiang
2011-10-01
Job scheduling with customers' quality of service (QoS) requirements is challenging in a grid environment. In this paper, we present a modified ant colony optimization (MACO) for the job scheduling problem in grids. Instead of using the conventional construction approach to construct feasible schedules, the proposed algorithm employs a decomposition method to satisfy the customer's deadline and cost requirements. Besides, a new mechanism of service instance state updating is embedded to improve the convergence of MACO. Experiments demonstrate the effectiveness of the proposed algorithm.
The role of artificial intelligence techniques in scheduling systems
NASA Technical Reports Server (NTRS)
Geoffroy, Amy L.; Britt, Daniel L.; Gohring, John R.
1990-01-01
Artificial Intelligence (AI) techniques provide good solutions for many of the problems which are characteristic of scheduling applications. However, scheduling is a large, complex heterogeneous problem. Different applications will require different solutions. Any individual application will require the use of a variety of techniques, including both AI and conventional software methods. The operational context of the scheduling system will also play a large role in design considerations. The key is to identify those places where a specific AI technique is in fact the preferable solution, and to integrate that technique into the overall architecture.
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known as an NP-complete problem, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. With regard to load balancing, the Artificial Bee Colony (ABC) algorithm has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Jundi, I.M.
The Qatar NGL/2 plant, commissioned in December 1979, was designed to process the associated gas from the offshore crude oil fields of Qatar. The dehydrated sour lean gas and wet sour liquids are transported via two separate lines to the Umm Said NGL Complex, about 120 km from the central offshore station. The liquids line, 300 mm diameter (12 inch), has suffered general and severe pitting corrosion. The lean gas line, 600 mm diameter (24 inch), has suffered corrosion and extensive hydrogen-induced cracking (HIC), also known as HIPC. Both lines never performed to their design parameters, and many problems in the downstream facilities have been experienced. All efforts to clean the liquids line of the solids (debris) have failed. This in turn interfered with the planned corrosion control programme, thus allowing corrosion to continue. Investigation work has been done by various specialists in an attempt to find the origin of the solids and to recommend necessary remedial actions. Should the lines fail from pitting corrosion, the effect of a liquids leak at a pressure of about 11000 kPa would be very dangerous, especially if it occurs onshore. In order to protect the NGL-2 operations against possible risks, both in terms of safety and losses in revenue, critical sections of the pipelines have been replaced, while the whole gas and liquids pipelines will be replaced shortly. Supplementary documents to the API standards were prepared by QPC for the replaced pipelines.
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (constraint satisfaction problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs which are composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, which has been developed on the basis of the dynamic backtracking algorithm. This method allows previous solutions and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
A graph-based approach for designing extensible pipelines
2012-01-01
Background In bioinformatics, it is important to build extensible and low-maintenance systems that are able to deal with the new tools and data formats that are constantly being developed. The traditional and simplest implementation of pipelines involves hardcoding the execution steps into programs or scripts. This approach can lead to problems when a pipeline is expanding because the incorporation of new tools is often error prone and time consuming. Current approaches to pipeline development such as workflow management systems focus on analysis tasks that are systematically repeated without significant changes in their course of execution, such as genome annotation. However, more dynamism in the pipeline composition is necessary when each execution requires a different combination of steps. Results We propose a graph-based approach to implement extensible and low-maintenance pipelines that is suitable for pipeline applications with multiple functionalities that require different combinations of steps in each execution. Here pipelines are composed automatically by compiling a specialised set of tools on demand, depending on the functionality required, instead of specifying every sequence of tools in advance. We represent the connectivity of pipeline components with a directed graph in which components are the graph edges, their inputs and outputs are the graph nodes, and the paths through the graph are pipelines. To that end, we developed special data structures and a pipeline system algorithm. We demonstrate the applicability of our approach by implementing a format conversion pipeline for the fields of population genetics and genetic epidemiology, but our approach is also helpful in other fields where the use of multiple software tools is necessary to perform comprehensive analyses, such as gene expression and proteomics analyses. The project code, documentation and the Java executables are available under an open source license at http://code.google.com/p/dynamic-pipeline. The system has been tested on Linux and Windows platforms. Conclusions Our graph-based approach enables the automatic creation of pipelines by compiling a specialised set of tools on demand, depending on the functionality required. It also allows the implementation of extensible and low-maintenance pipelines and contributes towards consolidating openness and collaboration in bioinformatics systems. It is targeted at pipeline developers and is suited for implementing applications with sequential execution steps and combined functionalities. In the format conversion application, the automatic combination of conversion tools increased both the number of possible conversions available to the user and the extensibility of the system to allow for future updates with new file formats. PMID:22788675
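A toy rendering of the path-search idea (tool names and formats are invented; the actual system is written in Java and uses more elaborate data structures): formats are nodes, tools are edges, and a breadth-first search composes a shortest conversion pipeline on demand.

    from collections import deque

    def compose_pipeline(tools, source_fmt, target_fmt):
        # tools: iterable of (name, input_fmt, output_fmt) triples.
        adj = {}
        for name, src, dst in tools:
            adj.setdefault(src, []).append((name, dst))
        queue = deque([(source_fmt, [])])
        seen = {source_fmt}
        while queue:
            fmt, chain = queue.popleft()
            if fmt == target_fmt:
                return chain                 # ordered tool names to execute
            for name, nxt in adj.get(fmt, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [name]))
        return None                          # no tool combination converts it

    tools = [("ped2bed", "ped", "bed"), ("bed2vcf", "bed", "vcf")]
    print(compose_pipeline(tools, "ped", "vcf"))   # ['ped2bed', 'bed2vcf']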
Efficient algorithms for dilated mappings of binary trees
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf
1990-01-01
The problem addressed is to find a 1-1 mapping of the vertices of a binary tree onto those of a target binary tree such that the son of a node in the first binary tree is mapped onto a descendent of the image of that node in the second binary tree. There are two natural measures of the cost of this mapping. The first is the dilation cost, i.e., the maximum distance in the target binary tree between the images of vertices that are adjacent in the original tree. The other measure, the expansion cost, is defined as the number of extra nodes/edges to be added to the target binary tree in order to ensure a 1-1 mapping. An efficient algorithm to find a mapping of one binary tree onto another is described. It is shown that it is possible to minimize one cost of the mapping at the expense of the other. This problem arises when designing pipelined arithmetic logic units (ALUs) for special-purpose computers. The pipeline is composed of ALU chips connected in the form of a binary tree. The operands to the pipeline can be supplied to the leaf nodes of the binary tree, which then process and pass the results up to their parents. The final result is available at the root. As each new application may require a distinct nesting of operations, it is useful to be able to find a good mapping of a new binary tree over the existing ALU tree. Another problem arises if every distinct required binary tree is known beforehand. Here it is useful to hardwire the pipeline in the form of a minimal supertree that contains all required binary trees.
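Evaluating the dilation cost of a candidate mapping is straightforward from the definition (a sketch with illustrative names; the paper's contribution is the mapping algorithm itself, not this check):

    from collections import deque

    def all_pairs_distances(adj):
        # BFS from every node of an undirected tree {node: [neighbours]}.
        dist = {}
        for s in adj:
            d = {s: 0}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        q.append(v)
            dist[s] = d
        return dist

    def dilation_cost(source_edges, target_adj, mapping):
        # Maximum target-tree distance between the images of vertices
        # that are adjacent in the source tree.
        dist = all_pairs_distances(target_adj)
        return max(dist[mapping[u]][mapping[v]] for u, v in source_edges)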
Toward interactive scheduling systems for managing medical resources.
Oddi, A; Cesta, A
2000-10-01
Managers of medico-hospital facilities face two general problems when allocating resources to activities: (1) finding an agreement between several contrasting requirements; (2) managing dynamic and uncertain situations when constraints suddenly change over time due to medical needs. This paper describes the results of research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem solving approach is adopted in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype is described, called Interactive Scheduler, which offers a set of functionalities for a mixed-initiative interaction to cope with medical resource management. Interactive Scheduler is endowed with a representation schema used for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that offers functionalities for the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations, and the definition of scheduling algorithms that aim at minimizing the amount of constraint violations in a solution.
Applications of spaceborne laser ranger on EOS
NASA Technical Reports Server (NTRS)
Degnan, John J.; Cohen, Steven C.
1988-01-01
An account is given of the design concept and potential applications in science and engineering of the spaceborne laser ranging and altimeter apparatus employed by the Geodynamics Laser Ranging System; this is scheduled for 1997 launch as part of the multiple-satellite Earth Observing System. In the retrograding mode for geodynamics, the system will use a Nd:YAG laser's green and UV output for distance determination to ground retroreflectors. Engineering applications encompass land management and long-term ground stability studies relevant to nuclear power plant, pipeline, and aqueduct locations.
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng
2018-03-01
In this paper, semi-active RFID watt-hour meters are applied to automatic test lines and intelligent warehouse management. Through the transmission, test, auxiliary, and monitoring systems, the approach realizes the scheduling, binding, control, and data exchange of watt-hour meters, among other functions, making positioning more accurate, management more efficient, and data updates faster, with all information available at a glance. It effectively improves the quality, efficiency, and automation of verification, and realizes more efficient data management and warehouse management.
A Scheduling Algorithm for Cloud Computing System Based on the Driver of Dynamic Essential Path.
Xie, Zhiqiang; Shao, Xia; Xin, Yu
2016-01-01
To solve the problem of task scheduling in the cloud computing system, this paper proposes a scheduling algorithm for cloud computing based on the driver of dynamic essential path (DDEP). This algorithm applies a predecessor-task layer priority strategy to solve the problem of constraint relations among task nodes. The strategy assigns different priority values to every task node based on the scheduling order of the task node as affected by the constraint relations among task nodes, and the task node list is generated according to these priority values. To address the scheduling order problem in which task nodes have the same priority value, the dynamic essential long path strategy is proposed. This strategy computes the dynamic essential path of the pre-scheduling task nodes based on the actual computation cost and communication cost of each task node in the scheduling process. The task node that has the longest dynamic essential path is scheduled first, as the completion time of the task graph is indirectly influenced by the finishing time of task nodes in the longest dynamic essential path. Finally, we demonstrate the proposed algorithm via simulation experiments using Matlab tools. The experimental results indicate that the proposed algorithm can effectively reduce the task makespan in most cases and meet a high quality performance objective.
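A static approximation of the two ordering rules may clarify them (layer priority first, essential-path tie-break second); note that the paper's essential path is dynamic, recomputed from actual computation and communication costs during scheduling, which this sketch omits:

    from functools import lru_cache

    def schedule_order(tasks, succ, comp, comm):
        # succ[u]: successors of u; comp[u]: computation cost;
        # comm[(u, v)]: communication cost on edge (u, v).
        pred = {u: [] for u in tasks}
        for u in tasks:
            for v in succ[u]:
                pred[v].append(u)

        @lru_cache(maxsize=None)
        def layer(u):
            # Predecessor-task layer: 0 for entry nodes,
            # else one past the deepest predecessor.
            return 0 if not pred[u] else 1 + max(layer(p) for p in pred[u])

        @lru_cache(maxsize=None)
        def ep(u):
            # Essential (longest comp+comm) path from u to an exit node.
            if not succ[u]:
                return comp[u]
            return comp[u] + max(comm[(u, v)] + ep(v) for v in succ[u])

        # Smaller layer first; within a layer, longer essential path first.
        return sorted(tasks, key=lambda u: (layer(u), -ep(u)))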
Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.
Wu, Zuobao; Weng, Michael X
2005-04-01
Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity. This complexity is mainly due to the existence of considerable overlapping capacities among modern machines. Classical scheduling methods are generally incapable of addressing such capacity overlapping. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined to distinguish jobs with one operation left from jobs with more than one operation left. Different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. The computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast. In fact, the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
NASA Technical Reports Server (NTRS)
Rash, James
2014-01-01
NASA's space data-communications infrastructure-the Space Network and the Ground Network-provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure-the relay satellites and the ground stations-can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally. The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flowshop scheduling problem with multiple objectives. Flow shop scheduling represents the condition when several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. The flow shop scheduling model always grows to cope with the real production system accurately. Since flow shop scheduling is an NP-hard problem, the most suitable methods to solve it are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm which is based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems. Since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done by using a probability transition matrix mechanism, while to handle the multi-objective problem we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of finding the optimal solution; the MPSO solution set is also closer to the optimal solution.
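The Pareto archive at the heart of such a multi-objective PSO can be sketched as follows (objective tuples are (makespan, total tardiness, total idle time), all minimized; the probability-transition-matrix move operator is not shown):

    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def pareto_front(solutions, objectives):
        # Keep the non-dominated schedules; MPSO maintains such an archive
        # instead of a single global best when handling the three objectives.
        front = []
        for s in solutions:
            fs = objectives(s)
            if not any(dominates(objectives(t), fs)
                       for t in solutions if t is not s):
                front.append(s)
        return front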
Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem
Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue
2015-01-01
As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods. PMID:26176764
NASA Technical Reports Server (NTRS)
Wang, Lui; Valenzuela-Rendon, Manuel
1993-01-01
The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model and the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near-optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
Learning Search Control Knowledge for Deep Space Network Scheduling
NASA Technical Reports Server (NTRS)
Gratch, Jonathan; Chien, Steve; DeJong, Gerald
1993-01-01
While the general class of most scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.
Vehicle and driver scheduling for public transit.
DOT National Transportation Integrated Search
2009-08-01
The problem of driver scheduling involves the construction of a legal set of shifts, including allowance : of overtime, which cover the blocks in a particular vehicle schedule. A shift is the work scheduled to be performed by : a driver in one day, w...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Cheng-Hsien; Department of Water Resources and Environmental Engineering, Tamkang University, New Taipei City 25137, Taiwan; Low, Ying Min, E-mail: ceelowym@nus.edu.sg
2016-05-15
Sediment transport is fundamentally a two-phase phenomenon involving fluid and sediments; however, many existing numerical models are one-phase approaches, which are unable to capture the complex fluid-particle and inter-particle interactions. In the last decade, two-phase models have gained traction; however, there are still many limitations in these models. For example, several existing two-phase models are confined to one-dimensional problems; in addition, the existing two-dimensional models simulate only the region outside the sand bed. This paper develops a new three-dimensional two-phase model for simulating sediment transport in the sheet flow condition, incorporating recently published rheological characteristics of sediments. The enduring-contact, inertial, and fluid viscosity effects are considered in determining sediment pressure and stresses, enabling the model to be applicable to a wide range of particle Reynolds numbers. A k − ε turbulence model is adopted to compute the Reynolds stresses. In addition, a novel numerical scheme is proposed, thus avoiding numerical instability caused by high sediment concentration and allowing the sediment dynamics to be computed both within and outside the sand bed. The present model is applied to two classical problems, namely, sheet flow and scour under a pipeline, with favorable results. For sheet flow, the computed velocity is consistent with measured data reported in the literature. For pipeline scour, the computed scour rate beneath the pipeline agrees with previous experimental observations. However, the present model is unable to capture vortex shedding; consequently, the sediment deposition behind the pipeline is overestimated. Sensitivity analyses reveal that model parameters associated with turbulence have a strong influence on the computed results.
An Efficient Downlink Scheduling Strategy Using Normal Graphs for Multiuser MIMO Wireless Systems
NASA Astrophysics Data System (ADS)
Chen, Jung-Chieh; Wu, Cheng-Hsuan; Lee, Yao-Nan; Wen, Chao-Kai
Inspired by the success of the low-density parity-check (LDPC) codes in the field of error-control coding, in this paper we propose transforming the downlink multiuser multiple-input multiple-output scheduling problem into an LDPC-like problem using the normal graph. Based on the normal graph framework, soft information, which indicates the probability that each user will be scheduled to transmit packets at the access point through a specified angle-frequency sub-channel, is exchanged among the local processors to iteratively optimize the multiuser transmission schedule. Computer simulations show that the proposed algorithm can efficiently schedule simultaneous multiuser transmission which then increases the overall channel utilization and reduces the average packet delay.
Predit: A temporal predictive framework for scheduling systems
NASA Technical Reports Server (NTRS)
Paolucci, E.; Patriarca, E.; Sem, M.; Gini, G.
1992-01-01
Scheduling can be formalized as a constraint satisfaction problem (CSP). Within this framework, activities belonging to a plan are interconnected via temporal constraints that account for slack among them. Temporal representation must include methods for constraint propagation and provide a logic for symbolic and numerical deductions. In this paper we describe a support framework for opportunistic reasoning in constraint-directed scheduling. In order to focus the attention of an incremental scheduler on critical problem aspects, some discrete temporal indexes are presented. They are also useful for the prediction of the degree of resource contention. The predictive method expressed through our indexes can be seen as a knowledge source for an opportunistic scheduler with a blackboard architecture.
Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning
NASA Astrophysics Data System (ADS)
Schumacher, André; Haanpää, Harri
We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with
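For reference, the centralized greedy underlying that analysis can be sketched as follows; the paper's contribution is a distributed, asynchronous, primal-dual variant, so this is only the set-cover view of MWDS:

    def greedy_mwds(adj, weight):
        # Viewing node u as the set N[u] (u plus its neighbours), MWDS is a
        # weighted set-cover instance; repeatedly pick the node that
        # minimizes weight per newly dominated node (Chvatal's rule).
        uncovered = set(adj)
        chosen = []
        while uncovered:
            def ratio(u):
                gain = len(({u} | set(adj[u])) & uncovered)
                return weight[u] / gain if gain else float("inf")
            u = min(adj, key=ratio)
            chosen.append(u)
            uncovered -= {u} | set(adj[u])
        return chosen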
A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm
NASA Astrophysics Data System (ADS)
Osawa, Akira; Ida, Kenichi
In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., the worker's skill level on each machine is the same for all workers. However, in the real world each worker has a different skill level on each machine. For that reason, we propose a new model of SPWA in which a worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of the following three new procedures: shortening of idle time, modifying infeasible solutions into feasible solutions, and a new selection method for the GA. The effectiveness of the proposed algorithm is clarified by numerical experiments using benchmark problems for job-shop scheduling.
ERIC Educational Resources Information Center
Tsakanikos, Elias; Underwood, Lisa; Sturmey, Peter; Bouras, Nick; McCarthy, Jane
2011-01-01
The present study employed the Disability Assessment Schedule (DAS) to assess problem behaviors in a large sample of adults with ID (N = 568) and evaluate the psychometric properties of this instrument. Although the DAS problem behaviors were found to be internally consistent (Cronbach's [alpha] = 0.87), item analysis revealed one weak item…
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
Scheduling: A guide for program managers
NASA Technical Reports Server (NTRS)
1994-01-01
The following topics are discussed concerning scheduling: (1) milestone scheduling; (2) network scheduling; (3) program evaluation and review technique; (4) critical path method; (5) developing a network; (6) converting an ugly duckling to a swan; (7) network scheduling problem; (8) network scheduling when resources are limited; (9) multi-program considerations; (10) influence on program performance; (11) line-of-balance technique; (12) time management; (13) recapitulation; and (14) analysis.
Automated Scheduling Via Artificial Intelligence
NASA Technical Reports Server (NTRS)
Biefeld, Eric W.; Cooper, Lynne P.
1991-01-01
Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.
Scheduling for energy and reliability management on multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Qi, Xuan
Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As the commonly deployed energy-saving technique (e.g., dynamic voltage and frequency scaling (DVFS)) can significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability to satisfy the quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms on reduction of scheduling overhead, energy saving, and reliability improvement. The simulation results show that the proposed reliability-aware power management schemes can preserve the system reliability while still achieving substantial energy saving.
Dataflow Design Tool: User's Manual
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1996-01-01
The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
Solving cyclical nurse scheduling problem using preemptive goal programming
NASA Astrophysics Data System (ADS)
Sundari, V. E.; Mardiyati, S.
2017-07-01
A nurse scheduling system in a hospital is modeled as a preemptive goal programming problem that is solved by using LINGO software, with the objective of minimizing the deviation variables at each goal. The scheduling is done cyclically, so every nurse is treated fairly, since each has the same work shift portion as the other nurses. Taking into account the hospital's rules regarding cyclical nursing work shifts, it is found that the number of nurses needed in each ward is 18 and the number of scheduling periods is 18, where every period consists of 21 days.
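The preemptive (lexicographic) structure can be illustrated with two toy goals, solved here with scipy rather than LINGO; the goals and numbers below are invented for illustration and are not the paper's actual nurse model:

    import numpy as np
    from scipy.optimize import linprog

    # Variables: x1, x2, d1m, d1p, d2m, d2p (all >= 0; d* are deviations).
    # Goal 1 (priority 1): x1 + x2 + d1m - d1p = 10  (e.g., staffing target)
    # Goal 2 (priority 2): x1 - x2 + d2m - d2p = 0   (e.g., fairness target)
    A_eq = np.array([[1, 1, 1, -1, 0, 0],
                     [1, -1, 0, 0, 1, -1]], dtype=float)
    b_eq = np.array([10.0, 0.0])
    bounds = [(0, None)] * 6

    # Priority level 1: minimize the goal-1 deviation variables only.
    c1 = np.array([0, 0, 1, 1, 0, 0], dtype=float)
    r1 = linprog(c1, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")

    # Priority level 2: minimize goal-2 deviations while locking goal 1
    # at its optimum (the defining feature of preemptive goal programming).
    c2 = np.array([0, 0, 0, 0, 1, 1], dtype=float)
    A_ub = np.array([[0, 0, 1, 1, 0, 0]], dtype=float)
    r2 = linprog(c2, A_ub=A_ub, b_ub=[r1.fun], A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    print(r2.x[:2])   # decision variables satisfying goals in priority order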
Neighbourhood generation mechanism applied in simulated annealing to job shop scheduling problems
NASA Astrophysics Data System (ADS)
Cruz-Chávez, Marco Antonio
2015-11-01
This paper presents a neighbourhood generation mechanism for job shop scheduling problems (JSSPs). In order to obtain a feasible neighbour with the generation mechanism, it is only necessary to permute an adjacent pair of operations in a schedule of the JSSP. If there is no slack time between the adjacent pair of operations that is permuted, then it is proven, through theory and experimentation, that the new neighbour (schedule) generated is feasible. It is demonstrated that the neighbourhood generation mechanism is very efficient and effective in simulated annealing.
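The generation mechanism itself is tiny; a sketch under the assumption that the schedule is kept as per-machine operation sequences (the paper's no-slack feasibility condition is noted but not verified here):

    import random

    def adjacent_swap_neighbour(machine_seqs):
        # machine_seqs: dict machine -> list of (job, operation) in order.
        nbr = {m: seq[:] for m, seq in machine_seqs.items()}  # copy schedule
        m = random.choice([m for m, seq in nbr.items() if len(seq) > 1])
        i = random.randrange(len(nbr[m]) - 1)
        # Permute one adjacent pair; per the paper, if there is no slack
        # time between the pair, the resulting schedule is feasible.
        nbr[m][i], nbr[m][i + 1] = nbr[m][i + 1], nbr[m][i]
        return nbr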
Infrared thermography for inspecting of pipeline specimen
NASA Astrophysics Data System (ADS)
Chen, Dapeng; Li, Xiaoli; Sun, Zuoming; Zhang, Xiaolong
2018-02-01
Infrared thermography is a fast and effective non-destructive testing (NDT) method, which has increasing application in aeronautics, astronautics, architecture, medicine, and other fields. Most reports on the application of this technology focus on planar specimens: pulsed light is often used as the heat stimulus, and a planar heat source is generated on the surface of the specimen by the use of a lampshade. However, this method is not suitable for non-planar specimens such as pipelines. Therefore, in this paper, to address the NDT problem of steel and composite pipeline specimens, ultrasound and hot water are applied as the heat sources respectively, and an IR camera is used to record the temperature variations of the specimen surface; defects are revealed by processing the thermal image sequence. Furthermore, the results of light-pulse thermography are shown for comparison, indicating that choosing the right stimulation method yields more effective NDT results for pipeline specimens.
Block Scheduling in High Schools.
ERIC Educational Resources Information Center
Irmsher, Karen
1996-01-01
Block Scheduling has been considered a cure for a lengthy list of educational problems. This report reviews the literature on block schedules and describes some Oregon high schools that have integrated block scheduling. Major disadvantages included resistance to change and requirements that teachers change their teaching strategies. There is…
Song, Jia; Zheng, Sisi; Nguyen, Nhung; Wang, Youjun; Zhou, Yubin; Lin, Kui
2017-10-03
Because phylogenetic inference is an important basis for answering many evolutionary problems, a large number of algorithms have been developed. Some of these algorithms have been improved by integrating gene evolution models with the expectation of accommodating the hierarchy of evolutionary processes. To the best of our knowledge, however, there is still no single unifying model or algorithm that can take all evolutionary processes into account through a stepwise or simultaneous method. On the basis of three existing phylogenetic inference algorithms, we built an integrated pipeline for inferring the evolutionary history of a given gene family; this pipeline can model gene sequence evolution, gene duplication-loss, gene transfer and multispecies coalescent processes. As a case study, we applied this pipeline to the STIMATE (TMEM110) gene family, which has recently been reported to play an important role in store-operated Ca2+ entry (SOCE) mediated by ORAI and STIM proteins. We inferred their phylogenetic trees in 69 sequenced chordate genomes. By integrating three tree reconstruction algorithms with diverse evolutionary models, a pipeline for inferring the evolutionary history of a gene family was developed, and its application was demonstrated.
Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng
2012-06-01
Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.
ERIC Educational Resources Information Center
Sedwal, Mona; Kamat, Sangeeta
2008-01-01
The Scheduled Castes (SCs, also known as Dalits) and Scheduled Tribes (STs, also known as Adivasis) are among the most socially and educationally disadvantaged groups in India. This paper examines issues concerning school access and equity for Scheduled Caste and Scheduled Tribe communities and also highlights their unique problems, which may…
Recent Advances and Achievements at The Catalina Sky Survey
NASA Astrophysics Data System (ADS)
Leonard, Gregory J.; Christensen, Eric J.; Fuls, Carson; Gibbs, Alex; Grauer, Al; Johnson, Jess A.; Kowalski, Richard; Larson, Stephen M.; Matheny, Rose; Seaman, Rob; Shelly, Frank
2017-10-01
The Catalina Sky Survey (CSS) is a NASA-funded project fully dedicated to discovering and tracking near-Earth objects (NEOs). Since its founding nearly 20 years ago, CSS has remained at the forefront of NEO surveys, and recent improvements in both instrumentation and software have increased both survey productivity and data quality. In 2016 new large-format (10K x 10K) cameras were installed on both CSS survey telescopes, the 1.5-m reflector and the 0.7-m Schmidt, increasing the field of view, and hence nightly sky coverage, by 4x and 2.4x respectively. The new cameras, coupled with improvements in the reduction and detection pipelines and revised sky-coverage strategies, have yielded a dramatic upward trend in NEO discovery rates. CSS has also developed a custom adaptive queue manager for scheduling NEO follow-up astrometry using a remotely operated and recently renovated 1-m Cassegrain reflector telescope; these improvements have increased the production of follow-up astrometry for newly discovered NEOs and arc extensions for objects previously discovered by CSS and other surveys. Additionally, reprocessing of archival CSS data (which include some 46 million individual astrometric measurements) through the new reduction and detection pipeline will allow improved orbit determinations and increased arc extensions for hundreds of thousands of asteroids. Reprocessed data will soon feed into a new public archive of CSS images and catalog data products made available through NASA's Planetary Data System (PDS). For the future, CSS is working towards improved NEO follow-up capabilities through a combination of access to larger telescopes, instrument upgrades and follow-up scheduling tools.
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with a tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
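For reference, a sketch of standard roulette-wheel selection, on which the fuzzy variant builds; the fuzzification of the selection probabilities described in the paper is omitted, so this is an illustrative baseline only:

```python
import random

def roulette_select(population, fitness, rng=random):
    """Pick one individual with probability proportional to fitness.

    Baseline (non-fuzzy) roulette wheel; the paper's fuzzy variant
    reshapes these probabilities with fuzzy membership functions.
    """
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

pop = ["A", "B", "C"]
fit = [5.0, 3.0, 2.0]          # higher fitness -> larger wheel share
print([roulette_select(pop, fit) for _ in range(10)])
```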
Cost-efficient scheduling of FAST observations
NASA Astrophysics Data System (ADS)
Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi
2018-03-01
A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) requires maximizing the number of observable proposals and the overall scientific priority, and minimizing the overall slew cost generated by telescope shifting, while taking into account constraints including astronomical object visibility, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem. The optimal schedule can be found by any MCMF solution algorithm. Then, for minimizing the slew cost of the generated schedule, we devise a maximally-matchable-edges detection-based method to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and reduce the slew cost significantly in a very short time.
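The MCMF formulation can be illustrated with a toy model, assuming each proposal needs a single time slot and ignoring slew costs; the proposal names, slots, and priorities below are invented, and networkx's max_flow_min_cost is used as the solver:

```python
import networkx as nx

# Toy MCMF scheduling model: source -> proposal -> feasible slot -> sink.
# Edge costs encode negated scientific priority, so the min-cost
# max-flow first schedules as many proposals as possible, then prefers
# high-priority assignments.
G = nx.DiGraph()
proposals = {"P1": {"slots": ["T1", "T2"], "priority": 5},
             "P2": {"slots": ["T2"], "priority": 3},
             "P3": {"slots": ["T1", "T3"], "priority": 4}}

for p, info in proposals.items():
    G.add_edge("source", p, capacity=1, weight=0)
    for t in info["slots"]:            # visibility / observability windows
        G.add_edge(p, t, capacity=1, weight=-info["priority"])
for t in ["T1", "T2", "T3"]:
    G.add_edge(t, "sink", capacity=1, weight=0)

flow = nx.max_flow_min_cost(G, "source", "sink")
schedule = {p: t for p in proposals for t, f in flow[p].items() if f}
print(schedule)
```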
Using the principles of circadian physiology enhances shift schedule design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connolly, J.J.; Moore-Ede, M.C.
1987-01-01
Nuclear power plants must operate 24 h a day, 7 days a week. For the most part, shift schedules currently in use at nuclear power plants have been designed to meet operational needs without considering the biological clocks of the human operators. The development of schedules that also take circadian principles into account is a positive step that can be taken to improve plant safety by optimizing operator alertness. These schedules reduce the probability of human errors, especially during backshifts. In addition, training programs that teach round-the-clock workers how to deal with the problems of shiftwork can help to optimize performance and alertness. These programs teach shiftworkers the underlying causes of the sleep problems associated with shiftwork and also provide coping strategies for improving sleep and dealing with the transition between shifts. When these training programs are coupled with an improved schedule, the problems associated with working round-the-clock can be significantly reduced.
Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem
NASA Technical Reports Server (NTRS)
Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith
2011-01-01
The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high-quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with tardiness objectives, is used to construct initial solutions, which are improved by an exchange operation. Results are presented for generated instances.
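A sketch of the ATC index in its commonly cited standard form, used greedily as a dispatching rule; the Meta-RaPS randomization, the exchange improvement step, and release times are omitted, and all job data are hypothetical:

```python
import math

def atc_index(job, t, p_bar, K=2.0):
    """Apparent Tardiness Cost (ATC) priority of a job at time t.

    Standard form: (w_j/p_j) * exp(-max(d_j - p_j - t, 0) / (K * p_bar)),
    where K is a look-ahead parameter and p_bar the mean processing time.
    """
    w, p, d = job["weight"], job["ptime"], job["due"]
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (K * p_bar))

jobs = [{"ptime": 4, "due": 10, "weight": 2},
        {"ptime": 6, "due": 8,  "weight": 1},
        {"ptime": 3, "due": 15, "weight": 3}]
p_bar = sum(j["ptime"] for j in jobs) / len(jobs)

# Dispatch greedily: at each step pick the highest-ATC unscheduled job.
t, sequence, pending = 0.0, [], jobs[:]
while pending:
    nxt = max(pending, key=lambda j: atc_index(j, t, p_bar))
    pending.remove(nxt)
    sequence.append(nxt)
    t += nxt["ptime"]
print([j["due"] for j in sequence])
```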
EUROPA2: Plan Database Services for Planning and Scheduling Applications
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy; Jonsson, Ari; McGann, Conor
2004-01-01
NASA missions require solving a wide variety of planning and scheduling problems with temporal constraints; simple resources such as robotic arms, communications antennae and cameras; complex replenishable resources such as memory, power and fuel; and complex constraints on geometry, heat and lighting angles. Planners and schedulers that solve these problems are used in ground tools as well as onboard systems. The diversity of planning problems and applications of planners and schedulers precludes a one-size-fits-all solution. However, many of the underlying technologies are common across planning domains and applications. We describe CAPR, a formalism for planning that is general enough to cover a wide variety of planning and scheduling domains of interest to NASA. We then describe EUROPA2, a software framework implementing CAPR. EUROPA2 provides efficient, customizable Plan Database Services that enable the integration of CAPR into a wide variety of applications. We describe the design of EUROPA2 from the perspectives of modeling, customization and application integration for different classes of NASA missions.
Aerial image databases for pipeline rights-of-way management
NASA Astrophysics Data System (ADS)
Jadkowski, Mark A.
1996-03-01
Pipeline companies that own and manage extensive rights-of-way corridors are faced with ever-increasing regulatory pressures, operating issues, and the need to remain competitive in today's marketplace. Automation has long been an answer to the problem of having to do more work with fewer people, and Automated Mapping/Facilities Management/Geographic Information Systems (AM/FM/GIS) solutions have been implemented at several pipeline companies. Until recently, the ability to cost-effectively acquire and incorporate up-to-date aerial imagery into these computerized systems has been out of the reach of most users. NASA's Earth Observations Commercial Applications Program (EOCAP) is providing a means by which pipeline companies can bridge this gap. The EOCAP project described in this paper includes a unique partnership with NASA and James W. Sewall Company to develop an aircraft-mounted digital camera system and a ground-based computer system to geometrically correct and efficiently store and handle the digital aerial images in an AM/FM/GIS environment. This paper provides a synopsis of the project, including details on (1) the need for aerial imagery, (2) NASA's interest and role in the project, (3) the design of a Digital Aerial Rights-of-Way Monitoring System, (4) image georeferencing strategies for pipeline applications, and (5) commercialization of the EOCAP technology through a prototype project at Algonquin Gas Transmission Company, which operates major gas pipelines in New England, New York, and New Jersey.
Hypertext-based design of a user interface for scheduling
NASA Technical Reports Server (NTRS)
Woerner, Irene W.; Biefeld, Eric
1993-01-01
Operations Mission Planner (OMP) is an ongoing research project at JPL that utilizes AI techniques to create an intelligent, automated planning and scheduling system. The information space reflects the complexity and diversity of tasks necessary in most real-world scheduling problems. Thus the problem of the user interface is to present as much information as possible at a given moment and allow the user to quickly navigate through the various types of displays. This paper describes a design which applies the hypertext model to solve these user interface problems. The general paradigm is to provide maps and search queries to allow the user to quickly find an interesting conflict or problem, and then allow the user to navigate through the displays in a hypertext fashion.
Scheduling the resident 80-hour work week: an operations research algorithm.
Day, T Eugene; Napoli, Joseph T; Kuo, Paul C
2006-01-01
The resident 80-hour work week requires that programs now schedule duty hours. Typically, scheduling is performed in an empirical "trial-and-error" fashion. However, this is a classic "scheduling" problem from the field of operations research (OR). It is similar to scheduling issues that airlines must face with pilots and planes routing through various airports at various times. The authors hypothesized that an OR approach using iterative computer algorithms could provide a rational scheduling solution. Institution-specific constraints of the residency problem were formulated. A total of 56 residents are rotating through 4 hospitals. Additional constraints were dictated by the Residency Review Committee (RRC) rules or the specific surgical service. For example, at Hospital 1, during the weekday hours between 6 am and 6 pm, there will be a PGY4 or PGY5 and a PGY2 or PGY3 on-duty to cover Service "A." A series of equations and logic statements was generated to satisfy all constraints and requirements. These were restated in the Optimization Programming Language used by the ILOG software suite for solving mixed integer programming problems. An integer programming solution was generated to this resource-constrained assignment problem. A total of 30,900 variables and 12,443 constraints were required. A total of man-hours of programming were used; computer run-time was 25.9 hours. A weekly schedule was generated for each resident that satisfied the RRC regulations while fulfilling all stated surgical service requirements. Each required between 64 and 80 weekly resident duty hours. The authors conclude that OR is a viable approach to schedule resident work hours. This technique is sufficiently robust to accommodate changes in resident numbers, service requirements, and service and hospital rotations.
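The flavor of such an integer-programming formulation can be sketched as follows; the study used ILOG's Optimization Programming Language, whereas this toy uses the PuLP library, and the residents, shifts, hour values, and the 24-hour cap are all invented stand-ins for the real constraints:

```python
import pulp

# Toy sketch of a resource-constrained assignment ILP for duty
# scheduling (illustrative only; all data are hypothetical).
residents = ["R1", "R2", "R3"]
shifts = {"Mon-day": 12, "Mon-night": 12, "Tue-day": 12}  # hours

prob = pulp.LpProblem("resident_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (residents, shifts), cat="Binary")

# Coverage: every shift must be staffed by at least one resident.
for s in shifts:
    prob += pulp.lpSum(x[r][s] for r in residents) >= 1

# Duty-hour cap per resident (80 h/week in the real problem; 24 h here).
for r in residents:
    prob += pulp.lpSum(x[r][s] * h for s, h in shifts.items()) <= 24

# Objective: minimise total assigned hours.
prob += pulp.lpSum(x[r][s] * h for r in residents for s, h in shifts.items())

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r in residents:
    print(r, [s for s in shifts if x[r][s].value() == 1])
```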
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes, while environmentally friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It can adjust the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution because the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal solution of the makespan in small-size instances, and the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal solution of the makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization for traditional production planning and scheduling problems.
Li, Shanlin; Li, Maoqin
2015-01-01
We consider an integrated production and distribution scheduling problem faced by a typical make-to-order manufacturer which relies on a third-party logistics (3PL) provider for finished product delivery to customers. In the beginning of a planning horizon, the manufacturer has received a set of orders to be processed on a single production line. Completed orders are delivered to customers by a finite number of vehicles provided by the 3PL company which follows a fixed daily or weekly shipping schedule such that the vehicles have fixed departure dates which are not part of the decisions. The problem is to find a feasible schedule that minimizes one of the following objective functions when processing times and weights are oppositely ordered: (1) the total weight of late orders and (2) the number of vehicles used subject to the condition that the total weight of late orders is minimum. We show that both problems are solvable in polynomial time.
Scheduling Non-Preemptible Jobs to Minimize Peak Demand
Yaw, Sean; Mumey, Brendan
2017-10-28
Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem and has previously been shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
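A small illustrative heuristic in the spirit of demand-side scheduling of non-preemptible jobs (not the paper's algorithm): each job is placed at the feasible start time that keeps the running peak lowest, and the appliance data are invented:

```python
def schedule_jobs(jobs, horizon):
    """Greedy placement of non-preemptible jobs to limit peak demand.

    Each job is (name, power, duration, earliest_start, latest_start).
    Heavier jobs are placed first, each at the start time that yields
    the smallest resulting peak. Illustrative sketch only.
    """
    load = [0.0] * horizon
    starts = {}
    for name, power, dur, earliest, latest in sorted(
            jobs, key=lambda j: -j[1] * j[2]):
        best_s, best_peak = None, float("inf")
        for s in range(earliest, latest + 1):
            peak = max(max(load[t] + power for t in range(s, s + dur)),
                       max(load))
            if peak < best_peak:
                best_s, best_peak = s, peak
        for t in range(best_s, best_s + dur):
            load[t] += power
        starts[name] = best_s
    return starts, max(load)

jobs = [("dishwasher", 1.2, 2, 0, 6), ("dryer", 3.0, 3, 0, 5),
        ("ev_charge", 7.0, 4, 0, 4)]
print(schedule_jobs(jobs, horizon=8))
```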
NASA Astrophysics Data System (ADS)
Chen, Miawjane; Yan, Shangyao; Wang, Sin-Siang; Liu, Chiu-Lan
2015-02-01
An effective project schedule is essential for enterprises to increase their efficiency of project execution, to maximize profit, and to minimize wastage of resources. Heuristic algorithms have been developed to efficiently solve the complicated multi-mode resource-constrained project scheduling problem with discounted cash flows (MRCPSPDCF) that characterize real problems. However, the solutions obtained in past studies have been approximate and are difficult to evaluate in terms of optimality. In this study, a generalized network flow model, embedded in a time-precedence network, is proposed to formulate the MRCPSPDCF with the payment at activity completion times. Mathematically, the model is formulated as an integer network flow problem with side constraints, which can be efficiently solved for optimality, using existing mathematical programming software. To evaluate the model performance, numerical tests are performed. The test results indicate that the model could be a useful planning tool for project scheduling in the real world.
NASA Astrophysics Data System (ADS)
Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.
2014-04-01
The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm for reaching the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate the pool of initial solutions, and uses an improved heuristic, called the iterated swap procedure, to improve them. We consider a make-to-order production approach in which some sequences between jobs are treated as tabu, based on a maximum allowable setup cost. In addition, the results are compared with some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to the accuracy and efficiency of solutions.
1993-02-01
This report demonstrates the (re)planning framework, incorporating the demonstrators CALIGULA and ALLOCATOR for resource allocation and scheduling, respectively. In the command and control domain, CALIGULA addresses the problem of allocating frequencies to a radio link network, while ALLOCATOR deals with problems in the domain of scheduling.
Intercell scheduling: A negotiation approach using multi-agent coalitions
NASA Astrophysics Data System (ADS)
Tian, Yunna; Li, Dongni; Zheng, Dan; Jia, Yunde
2016-10-01
Intercell scheduling problems arise as a result of intercell transfers in cellular manufacturing systems. Flexible intercell routes are considered in this article, and a coalition-based scheduling (CBS) approach using distributed multi-agent negotiation is developed. By taking advantage of the extended vision of the coalition agents, the approach improves global optimization and reduces communication cost. The objective of the addressed problem is to minimize mean tardiness. Computational results show that, compared with the widely used combinatorial rules, CBS provides better performance not only in minimizing the objective, i.e. mean tardiness, but also in minimizing auxiliary measures such as maximum completion time, mean flow time and the ratio of tardy parts. Moreover, CBS is better than the existing intercell scheduling approach for the same problem with respect to solution quality and computational cost.
A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm
NASA Astrophysics Data System (ADS)
Ida, Kenichi; Osawa, Akira
In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and insert it into a genetic algorithm (GA). The purpose of the JSP is to find a schedule with the minimum makespan, and we suppose that reducing the idle time of machines is effective for improving the makespan. The left shift is a well-known existing algorithm for shortening idle time, but it cannot always move an operation into an idle interval; for that reason, some idle times are not shortened by the left shift. We propose two algorithms that shorten such idle times, and we then combine these algorithms with the reversal of a schedule. We apply the GA with these algorithms to benchmark problems and show its effectiveness.
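To illustrate the role of idle-time removal, here is a simplified, order-preserving variant for a single machine (a semi-active rebuild of start times); the full JSP left shift, which can also move an operation into an earlier idle hole ahead of other operations, needs the complete schedule and is omitted:

```python
def compress_idle_time(ops):
    """Slide each operation left as far as its machine predecessor and
    its job-ready time allow, preserving the machine order.

    ops: list of dicts with 'start', 'dur', 'ready' (job-ready time),
    ordered by machine sequence. Returns the new start times.
    """
    machine_free = 0.0
    for op in ops:
        op["start"] = max(machine_free, op["ready"])
        machine_free = op["start"] + op["dur"]
    return [op["start"] for op in ops]

ops = [{"start": 0, "dur": 3, "ready": 0},
       {"start": 5, "dur": 2, "ready": 4},   # one unit of idle time removed
       {"start": 9, "dur": 1, "ready": 6}]
print(compress_idle_time(ops))   # [0, 4, 6]
```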
Active Solution Space and Search on Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo
In this paper we propose a new search method in a genetic algorithm for the job-shop scheduling problem (JSP). With the coding method that represents job numbers to decide the priority for placing a job on the Gantt chart (called the ordinal representation with a priority), an active schedule is created by using the left shift. We first define an active solution: a solution that creates an active schedule without using the left shift; the set of such solutions is defined as the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while a solution is evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems to compare with other methods, and the experimental results show good performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Jundi, I.M.
The Qatar NGL-2 plant, commissioned in December 1979, was designed to process the associated gas from the offshore crude oil fields of Qatar. The dehydrated, sour, lean gas and wet, sour liquids are transported by two separate lines to the Umm Said NGL complex about 120 km (75 miles) from the central offshore station. The 300-mm (12-in.) -diameter liquids line has suffered general pitting corrosion, and the 600-mm (24-in.) -diameter lean gas line has suffered corrosion and extensive hydrogen-induced cracking (HIC or HIPC). Neither line performed to its design parameters, and many problems in the downstream facilities have been experienced. All efforts to clean the solids (debris) from the liquids lines have failed. This in turn interfered with the planned corrosion control program, thus allowing corrosion to continue. Various specialists have investigated the lines in an attempt to find the origin of the solids and to recommend necessary remedial actions. Should the lines fail from pitting corrosion, the effect of a leak at a pressure of about 11 000 kPa (1,595 psi) would be very dangerous, especially if it occurs onshore. To protect the NGL-2 operations against possible risks, both in terms of safety and of losses in revenue, critical sections of the pipelines have been replaced, and all gas liquids pipelines will be replaced soon. Supplementary documents to the API standards were prepared for the replaced pipelines.
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6 and 4 times faster than the highly optimized scalar versions are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
a Fully Automated Pipeline for Classification Tasks with AN Application to Remote Sensing
NASA Astrophysics Data System (ADS)
Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.
2016-06-01
Deep learning has recently been intensively in the spotlight owing to its victories at major competitions, which has pushed 'shallow' machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background despite their advantages, such as the small amount of time and data they require for training. Taking a practical point of view, we utilize shallow learning algorithms to construct a learning pipeline that operators can use without special knowledge, an expensive computation environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting, and the selection of the most suitable classifier with optimized hyperparameters. The configuration employs particle swarm optimization, a well-known metaheuristic algorithm chosen for its generally fast and fine optimization, which enables us not only to optimize (hyper)parameters but also to determine appropriate features and classifiers for the problem; such choices have conventionally been made a priori from domain knowledge, left untouched, or handled with naive methods such as grid search. Through experiments with the MNIST and CIFAR-10 datasets, common computer-vision datasets for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, non-specialized setting, the small amount of training data, and its practical learning time. Moreover, compared to deep learning, the performance stays robust almost without modification even on a remote sensing object recognition problem, which indicates a high possibility that our approach contributes to general classification problems.
Dry-running gas seals save $200,000/yr in retrofit hydrogen recycle compressor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennacchi, R.P.; Germain, A.
1987-10-01
Texaco Chemical Company was using three drums of oil per day in the seal oil system of a hydrogen recycle compressor, resulting in maintenance and operational expenses of more than $160,000 per year. Running 24 hours/day, 365 days/yr, the 26-yr-old compressor is the heart of the benzene manufacturing process unit at the Port Arthur, Texas plant. In the event of an unscheduled shutdown, the important aromatics unit process would halt and cause production losses of thousands of dollars per day. In addition, close monitoring and minimization of leakage are essential since the gas consists of over 75% hydrogen, with methane, ethane, propane, isobutane, N-butane and pentanes. Texaco Chemical Company decided that a retrofit of the hydrogen recycle compressor should be undertaken if a system could be developed to sharply reduce operations and maintenance costs and increase efficiencies. Texaco engineers selected a dry-running gas sealing system developed for pipeline compressors in the United States, Canada, and overseas. A tandem-type sealing system was designed to meet the specific needs of a hydrogen recycle compressor. The retrofit was scheduled for August 1986 to coincide with the plant's preventative maintenance program. The seal system installation required five days. The retrofit progressed according to schedule, with no problems experienced at the first and several startups since the initial installation. Oil consumption has been eliminated, along with seal support and parasitic energy requirements. With the savings in seal oil, energy, operations and maintenance, the payback period for the retrofit sealing system was just over six months. Savings are expected to continue at an annual rate of over $200,000.
Improved, Low-Stress Economical Submerged Pipeline
NASA Technical Reports Server (NTRS)
Jones, Jack A.; Chao, Yi
2011-01-01
A preliminary study has shown that the use of a high-strength composite fiber cloth material may greatly reduce fabrication and deployment costs of a subsea offshore pipeline. The problem is to develop an inexpensive submerged pipeline that can safely and economically transport large quantities of fresh water, oil, and natural gas underwater for long distances. Above-water pipelines are often not feasible due to safety, cost, and environmental problems, and present fixed-wall submerged pipelines are often very expensive. The solution is to have a submerged, compliant-walled tube that, when filled, is lighter than the surrounding medium. Some examples include compliant tubes for transporting fresh water under the ocean, for transporting crude oil underneath salt or fresh water, and for transporting high-pressure natural gas from offshore to onshore. In each case, the fluid transported is lighter than its surrounding fluid, and thus the flexible tube will tend to float. The tube should be ballasted to the ocean floor so as to limit the motion of the tube in the horizontal and vertical directions. The tube should be placed below 100-m depth to minimize biofouling and turbulence from surface storms. The tube may also have periodic pumps to maintain flow without over-pressurizing, or it can have a single pump at the beginning. The tube may have periodic valves that allow sections of the tube to be repaired or maintained. Some examples of tube materials that may be particularly suited for these applications are non-porous composite tubes made of high-performance fibers such as Kevlar, Spectra, PBO, Aramid, carbon fibers, or high-strength glass. Above-ground pipes for transporting water, oil, and natural gas have typically been fabricated from fiber-reinforced plastic or from more costly high-strength steel. Also, previously suggested subsea pipeline designs have only included heavy fixed-wall pipes that can be very expensive initially, and can be difficult and expensive to deploy over long distances. A much less expensive Kevlar pipeline can be coiled up on a ship's deck and deployed in the water as the ship moves. Support ships can be used to drop sand into conduits below the uninflated tube, so that the tube remains in place when more buoyant fresh water later fills the tubes.
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
NASA Astrophysics Data System (ADS)
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received much attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time-lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time-lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time-lag considerations, which provides an interesting viewpoint for industrial implementation.
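A bare-bones iterated greedy sketch for the permutation case, with random destruction and best-position reinsertion; the time-lag constraints that define IGTLP/IGTLNP are omitted, so this shows only the IG skeleton, with invented processing times:

```python
import random

def makespan(seq, p):
    """Makespan of a permutation on an m-machine flow shop;
    p[j][k] is the processing time of job j on machine k."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=2, iters=200, seed=0):
    """Iterated greedy: remove d random jobs, greedily reinsert each at
    its best position, accept non-worsening sequences."""
    random.seed(seed)
    seq = list(range(len(p)))
    best = seq[:]
    for _ in range(iters):
        partial = seq[:]
        removed = [partial.pop(random.randrange(len(partial)))
                   for _ in range(d)]
        for j in removed:
            cands = [partial[:i] + [j] + partial[i:]
                     for i in range(len(partial) + 1)]
            partial = min(cands, key=lambda s: makespan(s, p))
        if makespan(partial, p) <= makespan(seq, p):
            seq = partial
        if makespan(seq, p) < makespan(best, p):
            best = seq[:]
    return best, makespan(best, p)

p = [[3, 2, 4], [1, 4, 2], [2, 3, 1], [4, 1, 3]]  # 4 jobs x 3 machines
print(iterated_greedy(p))
```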
Application of a hybrid generation/utility assessment heuristic to a class of scheduling problems
NASA Technical Reports Server (NTRS)
Heyward, Ann O.
1989-01-01
A two-stage heuristic solution approach for a class of multiobjective, n-job, 1-machine scheduling problems is described. Minimization of job-to-job interference for n jobs is sought. The first stage generates alternative schedule sequences by interchanging pairs of schedule elements. The set of alternative sequences can represent nodes of a decision tree; each node is reached via decision to interchange job elements. The second stage selects the parent node for the next generation of alternative sequences through automated paired comparison of objective performance for all current nodes. An application of the heuristic approach to communications satellite systems planning is presented.
Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments
Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand
2013-01-01
We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361
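The DVFS trade-off can be illustrated with a back-of-the-envelope model: dynamic power scales roughly as C·V²·f and a task's cycle count is fixed, so dynamic energy per task scales with V² while run time scales with 1/f; the voltage/frequency operating points below are hypothetical:

```python
def task_energy(cycles, voltage, freq_hz, capacitance=1e-9):
    """Dynamic energy and run time of a task under a (V, f) setting.

    power ~ C * V^2 * f and time = cycles / f, so energy = C * V^2 *
    cycles: lowering f alone stretches time, but the lower voltage it
    enables is what cuts energy. Illustrative numbers only.
    """
    time_s = cycles / freq_hz
    power_w = capacitance * voltage**2 * freq_hz
    return power_w * time_s, time_s

for v, f in [(1.2, 2.0e9), (1.0, 1.5e9), (0.8, 1.0e9)]:
    e, t = task_energy(cycles=3e9, voltage=v, freq_hz=f)
    print(f"V={v} V, f={f/1e9} GHz: {t:.2f} s, {e:.2f} J")
```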
Learning dominance relations in combinatorial search problems
NASA Technical Reports Server (NTRS)
Yu, Chee-Fen; Wah, Benjamin W.
1988-01-01
Dominance relations commonly are used to prune unnecessary nodes in search graphs, but they are problem-dependent and cannot be derived by a general procedure. The authors identify machine learning of dominance relations and the applicable learning mechanisms. A study of learning dominance relations using learning by experimentation is described. This system has been able to learn dominance relations for the 0/1-knapsack problem, an inventory problem, the reliability-by-replication problem, the two-machine flow shop problem, a number of single-machine scheduling problems, and a two-machine scheduling problem. It is considered that the same methodology can be extended to learn dominance relations in general.
Scheduling IT Staff at a Bank: A Mathematical Programming Approach
Labidi, M.; Mrad, M.; Gharbi, A.; Louly, M. A.
2014-01-01
We address a real-world optimization problem: the scheduling of a Bank Information Technologies (IT) staff. This problem can be defined as the process of constructing optimized work schedules for staff. In a general sense, it requires the allocation of suitably qualified staff to specific shifts to meet the demands for services of an organization while observing workplace regulations and attempting to satisfy individual work preferences. A monthly shift schedule is prepared to determine the shift duties of each staff considering shift coverage requirements, seniority-based workload rules, and staff work preferences. Due to the large number of conflicting constraints, a multiobjective programming model has been proposed to automate the schedule generation process. The suggested mathematical model has been implemented using Lingo software. The results indicate that high quality solutions can be obtained within a few seconds compared to the manually prepared schedules. PMID:24772032
Research on schedulers for astronomical observatories
NASA Astrophysics Data System (ADS)
Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian
2012-09-01
The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources while minimizing the total execution time. A problem of higher complexity, called the flexible JSP (FJSP), arises when the tasks can be executed by different resources, i.e. by different telescopes; it focuses on determining a routing policy (i.e., which machine to assign to each operation) in addition to the traditional scheduling decisions (i.e., determining the starting time of each operation). In most cases there is no single best approach to the planning problem, and therefore various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient in determining the best approach. A short-term scheduler, for instance, has to find a good solution with minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environmental conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO), where the conclusions of this analysis are applied to develop a suitable scheduling routine.
Systemic Sustainability in RtI Using Intervention-Based Scheduling Methodologies
ERIC Educational Resources Information Center
Dallas, William P.
2017-01-01
This study evaluated a scheduling methodology referred to as intervention-based scheduling to address the problem of practice regarding the fidelity of implementing Response to Intervention (RtI) in an existing school schedule design. Employing panel data, this study used fixed-effects regressions and first differences ordinary least squares (OLS)…
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013
Combined Delivery of Consolidating Pulps to the Remote Sites of Deposits
NASA Astrophysics Data System (ADS)
Golik, V. I.; Efremenkov, A. B.
2017-07-01
The problems of modern mining production include the limited applicability of environmentally friendly, resource-saving technologies that use consolidating pulps when developing sites of an ore field remote from the stowing complexes, which significantly reduces the performance indicators of underground mining of metallic ores. The experimental approach to this problem demonstrates the technological feasibility and efficiency of a combined vibration-pneumatic-gravity-flow method of pulp delivery over distances exceeding the capacity of current delivery methods, based on a study of the vibration phenomenon in an industrial special-structure pipeline. The results of a full-scale experiment confirm the theoretical calculations: consolidating stowing of common composition can be delivered over distances exceeding the reach of the usual pneumatic-gravity-flow method, owing to the reduced frictional resistance of the consolidating stowing to movement along the pipeline. The interaction of the stowing components improves during pipeline delivery, increasing stowing strength, improving the completeness of subsurface use, saving land for agricultural application and relieving environmental stress.
Temporal planning for transportation planning and scheduling
NASA Technical Reports Server (NTRS)
Frederking, Robert E.; Muscettola, Nicola
1992-01-01
In this paper we describe preliminary work done in the CORTES project, applying the Heuristic Scheduling Testbed System (HSTS) to a transportation planning and scheduling domain. First, we describe in more detail the transportation problems that we are addressing. We then describe the fundamental characteristics of HSTS and we concentrate on the representation of multiple capacity resources. We continue with a more detailed description of the transportation planning problem that we have initially addressed in HSTS and of its solution. Finally we describe future directions for our research.
Graph Coloring Used to Model Traffic Lights.
ERIC Educational Resources Information Center
Williams, John
1992-01-01
Two scheduling problems, one involving setting up an examination schedule and the other describing traffic light problems, are modeled as colorings of graphs consisting of a set of vertices and edges. The chromatic number, the least number of colors necessary for coloring a graph, is employed in the solutions. (MDH)
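A sketch of the examination-scheduling model using a greedy coloring (which gives an upper bound on the chromatic number, not necessarily the optimum); the exams and conflicts are invented, and networkx's greedy_color is used:

```python
import networkx as nx

# Vertices are exams, an edge joins two exams sharing a student, and
# each colour is a time slot; conflicting exams get different slots.
G = nx.Graph()
G.add_edges_from([("Math", "Physics"), ("Math", "Chemistry"),
                  ("Physics", "Biology"), ("Chemistry", "Biology"),
                  ("History", "Math")])

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
slots = max(coloring.values()) + 1   # upper bound on the chromatic number
print(coloring, "using", slots, "time slots")
```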
ERIC Educational Resources Information Center
Borrero, Carrie S. W.; Vollmer, Timothy R.; Borrero, John C.; Bourret, Jason C.; Sloman, Kimberly N.; Samaha, Andrew L.; Dallery, Jesse
2010-01-01
This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation.…
Shiftwork Scheduling for the 1990s.
ERIC Educational Resources Information Center
Coleman, Richard M.
1989-01-01
The author discusses the problems of scheduling shift work, touching on such topics as employee desires, health requirements, and business needs. He presents a method for developing shift schedules that addresses these three areas. Implementation hints are also provided. (CH)
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911
Outside the pipeline: reimagining science education for nonscientists.
Feinstein, Noah Weeth; Allen, Sue; Jenkins, Edgar
2013-04-19
Educational policy increasingly emphasizes knowledge and skills for the preprofessional "science pipeline" rather than helping students use science in daily life. We synthesize research on public engagement with science to develop a research-based plan for cultivating competent outsiders: nonscientists who can access and make sense of science relevant to their lives. Schools should help students access and interpret the science they need in response to specific practical problems, judge the credibility of scientific claims based on both evidence and institutional cues, and cultivate deep amateur involvement in science.
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), in which unexpected events occur randomly. The algorithm is designed based on the dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the rescheduling times under the defined objective and to keep the scheduling process stable and efficient.
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
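As an illustration, a sketch of the Min-min rule, one of the six heuristics compared above: repeatedly assign the task whose earliest possible completion time is smallest; the expected-execution-time matrix below is invented:

```python
def min_min(etc, n_machines):
    """Min-min task scheduling.

    etc[t][m] = execution time of task t on machine m. At each step,
    the unscheduled task with the smallest earliest completion time is
    assigned to the machine achieving it.
    """
    ready = [0.0] * n_machines
    unassigned = set(range(len(etc)))
    assignment = {}
    while unassigned:
        t, m, finish = min(((t, m, ready[m] + etc[t][m])
                            for t in unassigned
                            for m in range(n_machines)),
                           key=lambda x: x[2])
        assignment[t] = m
        ready[m] = finish
        unassigned.remove(t)
    return assignment, max(ready)  # task->machine mapping and makespan

etc = [[4, 6], [3, 5], [8, 2], [5, 4]]  # 4 tasks x 2 machines
print(min_min(etc, n_machines=2))
```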
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource-Constrained Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation modeling is a useful tool for estimating the makespan of the overall process. However, simulation requires a model to be developed, which is itself a labor- and time-consuming effort. With such a dynamic process, the model would often be out of synchronization with the actual process, limiting the applicability of the simulation answers to the actual estimation problem. Integration of TEAMS model-enabling software with our existing schedule program software is the basis of our solution. This paper explains the approach used to develop an auto-generated simulation model from planning and schedule efforts and available data.
Optimisation of assembly scheduling in VCIM systems using genetic algorithm
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-09-01
Assembly plays an important role in any production system, as it constitutes a significant portion of the lead time and cost of a product. The virtual computer-integrated manufacturing (VCIM) system is a modern production system being conceptually developed to extend the application of the traditional computer-integrated manufacturing (CIM) system to the global level. Assembly scheduling in VCIM systems is quite different from that in traditional production systems because of the difference in the working principles of the two systems. In this article, the assembly scheduling problem in VCIM systems is modeled, and an integrated approach based on a genetic algorithm (GA) is proposed to search for a globally optimised solution to the problem. Because of the dynamic nature of the scheduling problem, a novel GA with a unique chromosome representation and modified genetic operations is developed herein. The robustness of the proposed approach is verified by a numerical example.
Some single-machine scheduling problems with learning effects and two competing agents.
Li, Hongjie; Li, Zeyuan; Yin, Yunqiang
2014-01-01
This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and whose actual processing time is a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The goal is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the two agents' objectives. The computational complexity of the problems is discussed, and solution algorithms are presented where possible.
NASA Astrophysics Data System (ADS)
Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping
2012-05-01
In this article, we consider a single-machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the processing time of a job is a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. The objective is to determine an optimal schedule that minimizes the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to the jobs' normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
Electricity Usage Scheduling in Smart Building Environments Using Smart Devices
Lee, Eunji; Bahn, Hyokyung
2013-01-01
With the recent advances in smart grid technologies and the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different price of electricity at each time slot, considering the peak time. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects changes in electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem with exponential time complexity. An efficient heuristic based on genetic algorithms is executed on a smart device to cut down the huge search space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and up to 33.4%.
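A minimal sketch of this kind of GA-based usage scheduler follows; the price vector, device model, and fitness weights are illustrative assumptions, not the paper's actual formulation.

```python
import random

PRICES = [0.10, 0.08, 0.25, 0.30, 0.12]   # assumed price per slot ($/kWh)
LOADS  = [1.5, 0.8, 2.0]                  # assumed energy per device-slot (kWh)
RUNS   = [1, 2, 1]                        # required number of on-slots per device

def cost(schedule):
    """Electricity charge plus a penalty if a device misses its quota.
    schedule[d][t] == 1 means device d is on in slot t."""
    charge = sum(LOADS[d] * PRICES[t]
                 for d, row in enumerate(schedule)
                 for t, on in enumerate(row) if on)
    penalty = sum(abs(sum(row) - RUNS[d]) * 10.0
                  for d, row in enumerate(schedule))
    return charge + penalty

def random_schedule():
    return [[random.randint(0, 1) for _ in PRICES] for _ in LOADS]

def crossover(a, b):
    # Device-wise uniform crossover between two parent schedules.
    return [ra if random.random() < 0.5 else rb for ra, rb in zip(a, b)]

def mutate(s, rate=0.05):
    return [[1 - g if random.random() < rate else g for g in row] for row in s]

def ga(pop_size=40, generations=200):
    pop = [random_schedule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]            # keep the cheapest quarter
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=cost)

best = ga()
print("best schedule:", best, "cost:", round(cost(best), 3))
```

The point of the heuristic is exactly what the abstract describes: the exact search space has 2^(devices x slots) states, while the GA samples only a tiny fraction of it within a fixed time budget.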
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Finding an optimal DAG scheduling solution is known to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a recently proposed metaheuristic, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and graphs for real-world problems.
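TMSCRO itself is involved; as a baseline for the same problem setting, here is a minimal list-scheduling sketch for a DAG on heterogeneous nodes (assign each ready task to the node that finishes it earliest). The toy graph, execution-time table, and helper names are illustrative assumptions, not part of the paper.

```python
from collections import deque

# Assumed toy instance: dag[u] = successors of task u,
# exec_time[u][m] = runtime of task u on machine m.
dag = {0: [1, 2], 1: [3], 2: [3], 3: []}
exec_time = {0: [4, 6], 1: [3, 2], 2: [5, 3], 3: [2, 2]}

def list_schedule(dag, exec_time, n_machines=2):
    indeg = {u: 0 for u in dag}
    for u in dag:
        for v in dag[u]:
            indeg[v] += 1
    ready = deque(u for u in dag if indeg[u] == 0)
    machine_free = [0.0] * n_machines
    finish = {}                          # task -> finish time
    while ready:
        u = ready.popleft()
        # Earliest start: all predecessors are already finished.
        est = max((finish[p] for p in dag if u in dag[p]), default=0.0)
        # Pick the machine that completes u earliest.
        m = min(range(n_machines),
                key=lambda j: max(machine_free[j], est) + exec_time[u][j])
        start = max(machine_free[m], est)
        finish[u] = start + exec_time[u][m]
        machine_free[m] = finish[u]
        for v in dag[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return finish

finish = list_schedule(dag, exec_time)
print("finish times:", finish, "makespan:", max(finish.values()))
```

Metaheuristics such as CRO explore many alternative priority orders and mappings, whereas this greedy baseline commits to one; that gap is what the paper's convergence-acceleration machinery (CCPs, super molecule) targets.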
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.
2012-01-01
A class of problems in air traffic management asks for a scheduling algorithm that supplies the air traffic services authority not only with a schedule of arrivals and departures, but also with speed advisories. Since advisories must be finite, a scheduling algorithm must ultimately produce a finite data set, hence must either start with a purely discrete model or involve a discretization of a continuous one. The former choice, often preferred for intuitive clarity, naturally leads to mixed-integer programs, hindering proofs of correctness and computational cost bounds (crucial for real-time operations). In this paper, a hybrid control system is used to model air traffic scheduling, capturing both the discrete and continuous aspects. This framework is applied to a class of problems called the Fully Routed Nominal Problem. We prove a number of geometric results on feasible schedules and use these results to formulate an algorithm that attempts to compute a collective speed advisory that is effectively finite and has computational cost polynomial in the number of aircraft. This work is a first step toward optimization and toward models refined with more realistic detail.
Task and Participant Scheduling of Trading Platforms in Vehicular Participatory Sensing Networks
Shi, Heyuan; Song, Xiaoyu; Gu, Ming; Sun, Jiaguang
2016-01-01
The vehicular participatory sensing network (VPSN) is becoming more and more prevalent and has shown great potential in various applications. A general VPSN consists of many tasks from task publishers, trading platforms and a crowd of participants. Some literature treats publishers and the trading platform as a whole, which is impractical since they are two independent economic entities with respective purposes. For a trading platform in markets, the purpose is to maximize profit by selecting tasks and recruiting participants who satisfy the requirements of the accepted tasks, rather than to improve the quality of each task. This scheduling problem for a trading platform consists of two parts: which tasks should be selected and which participants should be recruited? In this paper, we investigate the scheduling problem in vehicular participatory sensing with the predictable mobility of each vehicle. A genetic-based trading scheduling algorithm (GTSA) is proposed to solve the scheduling problem. Experiments with a realistic dataset of taxi trajectories demonstrate that the GTSA algorithm is efficient for trading platforms to gain considerable profit in VPSN.
Scheduling real-time, periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Liu, Jane W. S.; Lin, Kwei-Jay; Natarajan, Swaminathan
1987-01-01
A process is called a monotone process if the accuracy of its intermediate results is non-decreasing as more time is spent to obtain the result. The result produced by a monotone process upon its normal termination is the desired result; the error in this result is zero. External events such as timeouts or crashes may cause the process to terminate prematurely. If the intermediate result produced by the process upon its premature termination is saved and made available, the application may still find the result usable and, hence, acceptable; such a result is said to be an imprecise one. The error in an imprecise result is nonzero. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. This problem differs from traditional scheduling problems, since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result. Consequently, the amounts of processor time assigned to tasks in a valid schedule can be less than the amounts of time required to complete the tasks. A meaningful formulation of this problem taking into account the quality of the overall result is discussed. Three algorithms for scheduling jobs for which the effects of errors in results produced in different periods are not cumulative are described, and their relative merits are evaluated.
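A minimal sketch of the underlying idea, under the common mandatory/optional decomposition of imprecise computation (an assumption here, since the abstract does not give the algorithms): each job's mandatory part must finish, and leftover processor time is spent on optional parts to reduce error.

```python
def schedule_imprecise(jobs, capacity):
    """jobs: list of (mandatory, optional) processing demands.
    capacity: total processor time available in the period.
    Returns per-job allocations and total error (unexecuted optional time).
    Assumes errors are not cumulative across periods, as in the paper's
    three-algorithm setting, and fills optional time greedily."""
    mandatory_total = sum(m for m, _ in jobs)
    if mandatory_total > capacity:
        raise ValueError("infeasible: mandatory work exceeds capacity")
    slack = capacity - mandatory_total
    alloc = []
    for m, o in jobs:
        extra = min(o, slack)        # grant optional time while slack remains
        alloc.append(m + extra)
        slack -= extra
    optional_done = sum(a - m for a, (m, _) in zip(alloc, jobs))
    error = sum(o for _, o in jobs) - optional_done
    return alloc, error

alloc, err = schedule_imprecise([(2, 3), (1, 4), (3, 2)], capacity=9)
print("allocations:", alloc, "total error:", err)
```

A valid schedule here gives every job at least its mandatory time, which is exactly the sense in which allocations may fall short of full completion times.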
Low Cost SoC Design of H.264/AVC Decoder for Handheld Video Player
NASA Astrophysics Data System (ADS)
Wisayataksin, Sumek; Li, Dongju; Isshiki, Tsuyoshi; Kunieda, Hiroaki
We propose a low-cost, stand-alone, platform-based SoC for an H.264/AVC decoder, targeting practical mobile applications such as a handheld video player. Both the low-cost and stand-alone aspects are particularly emphasized. The SoC, consisting of a RISC core and a decoder core, has advantages in terms of flexibility, testability and various I/O interfaces. For the decoder core design, the proposed H.264/AVC coprocessor in the SoC employs a new block pipelining scheme instead of a conventional macroblock or hybrid scheme, which contributes greatly to reducing the size of the core and its pipelining buffer. In addition, the decoder schedule is optimized at the block level, which is easy to program. The core size is reduced to 138 KGates with 3.5 kbytes of memory. In our practical development, a single external SDRAM is sufficient for both the reference frame buffer and the display buffer. Various peripheral interfaces, such as a compact flash, a digital broadcast receiver and an LCD driver, are also provided on the chip.
NASA Astrophysics Data System (ADS)
Brouwer, Albert; Brown, David; Tomuta, Elena
2017-04-01
To detect nuclear explosions, waveform data from over 240 SHI stations world-wide flows into the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO), located in Vienna, Austria. A complex pipeline of software applications processes this data in numerous ways to form event hypotheses. The software codebase comprises over 2 million lines of code, reflects decades of development, and is subject to frequent enhancement and revision. Since processing must run continuously and reliably, software changes are subjected to thorough testing before being put into production. To overcome the limitations and cost of manual testing, the Continuous Automated Testing System (CATS) has been created. CATS provides an isolated replica of the IDC processing environment, and is able to build and test different versions of the pipeline software directly from code repositories that are placed under strict configuration control. Test jobs are scheduled automatically when code repository commits are made. Regressions are reported. We present the CATS design choices and test methods. Particular attention is paid to how the system accommodates the individual testing of strongly interacting software components that lack test instrumentation.
Improved Weather Forecasting for the Dynamic Scheduling System of the Green Bank Telescope
NASA Astrophysics Data System (ADS)
Henry, Kari; Maddalena, Ronald
2018-01-01
The Robert C. Byrd Green Bank Telescope (GBT) uses a software system that dynamically schedules observations based on models of vertical weather forecasts produced by the National Weather Service (NWS). The NWS provides hourly forecasted values for ~60 layers that extend into the stratosphere over the observatory. We use models, recommended by the Radiocommunication Sector of the International Telecommunications Union, to derive the absorption coefficient in each layer for each hour in the NWS forecasts and for all frequencies over which the GBT has receivers, 0.1 to 115 GHz. We apply radiative transfer models to derive the opacity and the atmospheric contributions to the system temperature, thereby deriving forecasts applicable to scheduling radio observations for up to 10 days into the future. Additionally, the algorithms embedded in the data processing pipeline use historical values of the forecasted opacity to calibrate observations. Until recently, we have concentrated on predictions for high-frequency (> 15 GHz) observing, as these need to be scheduled carefully around bad weather. We have been using simple models for the contribution of rain and clouds, since we only schedule low-frequency observations under these conditions. In this project, we wanted to improve the scheduling of the GBT and data calibration at low frequencies by deriving better algorithms for clouds and rain. To address the limitation at low frequency, the observatory acquired a Radiometrics Corporation MP-1500A radiometer, which operates in 27 channels between 22 and 30 GHz. By comparing 16 months of measurements from the radiometer against forecasted system temperatures, we have confirmed that forecasted system temperatures are indistinguishable from those measured under good weather conditions; small miscalibrations of the radiometer data dominate the comparison. Using recalibrated radiometer measurements, we examined bad-weather days to derive better models for forecasting the contribution of clouds to the opacity and system temperatures. We will show how these revised algorithms should help us improve both data calibration and the accuracy of scheduling low-frequency observations.
Satellite Radar Interferometry For Risk Management Of Gas Pipeline Networks
NASA Astrophysics Data System (ADS)
Ianoschi, Raluca; Schouten, Mathijs; Bas Leezenberg, Pieter; Dheenathayalan, Prabu; Hanssen, Ramon
2013-12-01
InSAR time series analyses can be fine-tuned for specific applications, yielding a potential increase in benchmark density, precision and reliability. Here we demonstrate the algorithms developed for gas pipeline monitoring, enabling operators to precisely pinpoint unstable locations. This helps asset management in planning, prioritizing and focusing in-situ inspections, thus reducing maintenance costs. In unconsolidated Quaternary soils, ground settlement contributes to possible failure of brittle cast iron gas pipes and their connections to houses. Other risk factors include the age and material of the pipe. The soil dynamics have led to a catastrophic explosion in the city of Amsterdam, which triggered an increased awareness of the significance of this problem. As the extent of the networks can be very wide, InSAR is shown to be a valuable source of information for identifying the hazard regions. We monitor subsidence affecting an urban gas transportation network in the Netherlands using both medium- and high-resolution SAR data. Results for the 2003-2010 period provide clear insights into the differential subsidence rates in the area. This enables characterization of underground motion that affects the integrity of the pipeline. High-resolution SAR data add extra detail on door-to-door pipeline connections, which are vulnerable due to differential settlement between house connections and main pipelines. The rates we measure are important input for planning maintenance work: managers can decide the priority and timing for inspecting the pipelines. The service helps manage the risk and reduce operational cost in gas transportation networks.
NASA Astrophysics Data System (ADS)
Mohamed, Adel M. E.; Mohamed, Abuo El-Ela A.
2013-06-01
Ground vibrations induced by blasting in cement quarries are one of the fundamental problems in the quarrying industry and may cause severe damage to nearby utilities and pipelines. Therefore, a vibration control study plays an important role in minimizing the environmental effects of blasting in quarries. The current paper presents the influence of the quarry blasts at the National Cement Company (NCC) on the two oil pipelines of SUMED Company southeast of Helwan City, by measuring the ground vibrations in terms of Peak Particle Velocity (PPV). The compressional-wave velocities deduced from the shallow seismic refraction survey and the shear wave velocity obtained from the Multichannel Analysis of Surface Waves (MASW) technique are used to evaluate the site of the two pipelines closest to the quarry blasts. The results demonstrate that the closest site of the two pipelines is of class B, according to the National Earthquake Hazard Reduction Program (NEHRP) classification, and the safe distance to avoid any environmental effects is 650 m, following the deduced relationship between PPV and scaled distance (SD), PPV = 700.08 × SD^(-1.225) in mm/s, and the air overpressure (air blast) formula, air blast = 170.23 × SD^(-0.071) in dB. In light of the prediction analysis, the maximum allowable charge weight per delay was found to be 591 kg, with a damage criterion of 12.5 mm/s at the closest site of the SUMED pipelines.
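These fitted attenuation laws can be applied directly; a small sketch follows, assuming the square-root charge scaling SD = D/√W commonly used for ground vibration (with that assumption, the numbers reproduce the paper's 591 kg at 650 m and 12.5 mm/s).

```python
import math

def ppv(distance_m, charge_kg):
    """Peak particle velocity (mm/s) from the paper's fitted law,
    assuming square-root scaled distance SD = D / sqrt(W)."""
    sd = distance_m / math.sqrt(charge_kg)
    return 700.08 * sd ** -1.225

def max_charge(distance_m, ppv_limit_mm_s):
    """Largest charge per delay (kg) keeping PPV below the limit."""
    sd_min = (700.08 / ppv_limit_mm_s) ** (1 / 1.225)
    return (distance_m / sd_min) ** 2

print(round(max_charge(650, 12.5)))   # ~591 kg, matching the study
print(round(ppv(650, 591), 1))        # ~12.5 mm/s at the closest site
```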
Mittra, James; Tait, Joyce; Wield, David
2011-03-01
The pharmaceutical and agro-biotechnology industries have been confronted by dwindling product pipelines and rapid developments in the life sciences, demanding a strategic rethink of conventional research and development. Despite offering both industries a solution to the pipeline problem, the life sciences have also brought complex regulatory challenges for firms. In this paper, we comment on the response of these industries to the life-science trajectory, in the context of maturing conventional small-molecule product pipelines and routes to market. The challenges of managing the transition from maturity to new high-value-added innovation models are addressed. Furthermore, we argue that regulation plays a crucial role in shaping the innovation systems of both industries, and as such, we suggest potentially useful changes to the current regulatory system.
NASA Astrophysics Data System (ADS)
Lu, Yuan-Yuan; Wang, Ji-Bo; Ji, Ping; He, Hongyu
2017-09-01
In this article, single-machine group scheduling with learning effects and convex resource allocation is studied. The goal is to find the optimal job schedule, the optimal group schedule, and the resource allocations of jobs and groups. For the problem of minimizing the makespan subject to limited resource availability, it is proved that the problem can be solved in polynomial time when the setup times of groups are independent. For general group setup times, a heuristic algorithm and a branch-and-bound algorithm are proposed. Computational experiments show that the heuristic algorithm is fairly accurate in obtaining near-optimal solutions.
Guidelines for glycol dehydrator design; Part 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manning, W.P.; Wood, H.S.
1993-01-01
Better designs and instrumentation improve glycol dehydrator performance. This paper reports on guidelines that emphasize efficient water removal from natural gas. Water, a common contaminant in natural gas, causes operational problems when it forms hydrates and deposits on solid surfaces. The result: plugged valves, meters, instruments and even pipelines. Simple rules resolve these problems and reduce downtime and maintenance costs.
Algorithms for Scheduling and Network Problems
1991-09-01
Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio
2011-11-01
We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption, subject to electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem, which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.
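A minimal sketch of the greedy flavor of such a scheduler (the interference model, threshold, and user ordering below are illustrative assumptions, not the paper's EMI formulation):

```python
def greedy_schedule(users, gain, emi_limit, tx_power):
    """Admit users to the current time slot greedily.

    users: list of (user_id, utility) pairs; higher utility tried first.
    gain[u]: assumed channel gain from user u to the protected medical device.
    A user is admitted at power tx_power if the accumulated EMI at the
    protected device stays below emi_limit.
    """
    admitted, emi = [], 0.0
    for uid, _utility in sorted(users, key=lambda x: -x[1]):
        contribution = tx_power * gain[uid]
        if emi + contribution <= emi_limit:
            admitted.append(uid)
            emi += contribution
    return admitted

users = [(0, 5.0), (1, 3.0), (2, 4.5)]
gain = {0: 0.02, 1: 0.01, 2: 0.05}
print(greedy_schedule(users, gain, emi_limit=0.08, tx_power=1.0))
```

The enhanced variant in the paper would, in effect, search over the admission order rather than fixing it by utility, which is where the extra performance comes from.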
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the low recognition efficiency, high false alarm rates and poor localization accuracy of traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, the original monitoring signals are processed with wavelet transform technology to extract single-mode signals and characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed, and the characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted-average localization algorithm based on time difference of arrival is introduced to determine the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak point localization and reduce the undetected rate as well as the false alarm rate.
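For the localization step, a minimal sketch of time-difference-of-arrival positioning along a pipe, with a weighted average over several sensor-pair estimates (the geometry and weights are illustrative assumptions):

```python
def tdoa_position(L, v, dt):
    """Leak distance from sensor A, for sensors at both ends of a pipe
    segment of length L, wave speed v, and dt = tA - tB (arrival times).
    Derivation: tA = x/v, tB = (L - x)/v  =>  x = (L + v*dt) / 2."""
    return (L + v * dt) / 2.0

def weighted_leak_estimate(L, v, observations):
    """observations: list of (dt, weight); weights might reflect SNR.
    Returns the weighted-average leak position."""
    num = sum(w * tdoa_position(L, v, dt) for dt, w in observations)
    den = sum(w for _, w in observations)
    return num / den

# Example: 1000 m segment, assumed 340 m/s acoustic speed.
obs = [(-0.50, 2.0), (-0.48, 1.0), (-0.53, 1.5)]
print(round(weighted_leak_estimate(1000.0, 340.0, obs), 1), "m from sensor A")
```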
Latest Development and Application of Nb-Bearing High Strength Pipeline Steels
NASA Astrophysics Data System (ADS)
Zhang, Yongqing; Shang, Chengjia; Guo, Aimin; Zheng, Lei; Niu, Tao; Han, Xiulin
In order to solve the pollution problem that has emerged in China recently, China's central government is making great efforts to raise the percentage of natural gas consumption in China's primary energy mix, which requires constructing large pipelines to transport natural gas from the nation's resource-rich western regions to the energy-starved east, as well as imports from Central Asia and Russia. With this mainstream trend, high-strength, high-toughness, heavy-gauge, large-diameter pipeline steels are needed to improve transportation efficiency. This paper describes the latest progress in Nb-bearing high-strength pipeline steels with regard to metallurgical design, development and application, including X80 coil with a thickness up to 22.0 mm, X80 plate for pipe diameters as large as 1422 mm, X80 plate with low-temperature requirements, and low-Mn X65 for harsh sour service environments. Moreover, based on the widely accepted TMCP and HTP practices with low-carbon and Nb micro-alloying design, this paper also investigates some new metallurgical phenomena enabled by powerful rolling mills and heavy ACC equipment.
Space power system scheduling using an expert system
NASA Technical Reports Server (NTRS)
Bahrami, K. A.; Biefeld, E.; Costello, L.; Klein, J. W.
1986-01-01
A most pressing problem in space exploration is timely spacecraft power system sequence generation, which requires scheduling a set of loads given a set of resource constraints. This is particularly important after an anomaly or failure. This paper discusses the power scheduling problem and how the software program Plan-It can be used as a consultant for scheduling power system activities. The modeling of power activities, the human interface, and two of the many strategies used by Plan-It are discussed. Preliminary results showing the development of a conflict-free sequence from an initial sequence with conflicts are presented. A 4-day schedule can be generated in a few minutes, which in many cases provides sufficient time to aid the crew in replanning loads and generation use following a failure or anomaly.
Daniel, Stephanie S.; Grzywacz, Joseph G.; Leerkes, Esther; Tucker, Jenna; Han, Wen-Jui
2009-01-01
This paper examines the associations between maternal nonstandard work schedules during infancy and children's early behavior problems, and the extent to which infant temperament may moderate these associations. Hypothesized associations were tested using data from the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care (Phase I). Analyses focused on mothers who returned to work by the time the child was 6 months of age and who worked an average of at least 35 h per week from 6 through 36 months. At 24 and 36 months, children whose mothers worked a nonstandard schedule had higher internalizing and externalizing behaviors. Modest, albeit inconsistent, evidence suggests that temperamentally reactive children may be more vulnerable to maternal work schedules. Maternal depressive symptoms partially mediated the associations between nonstandard maternal work schedules and child behavior outcomes.
Aiding USAF/UPT (Undergraduate Pilot Training) Aircrew Scheduling Using Network Flow Models.
1986-06-01
51 3.4 Heuristic Modifications ............ 55 CHAPTER 4 STUDENT SCHEDULING PROBLEM (LEVEL 2) 4.0 Introduction 4.01 Constraints ............. 60 4.02...Covering" Complete Enumeration . . .. . 71 4.14 Heuristics . ............. 72 4.2 Heuristic Method for the Level 2 Problem 4.21 Step I ............... 73...4.22 Step 2 ............... 74 4.23 Advantages to the Heuristic Method. .... .. 78 4.24 Problems with the Heuristic Method. . ... 79 :,., . * CHAPTER5
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Gross, Michael; Kuerklu, Elif
2003-01-01
We reduced the number of IVPs and BVPs needed to schedule SOFIA by restricting the problem. The restriction costs little in terms of the value of the flight plans we can build, and it allowed us to reformulate part of the search problem as a zero-finding problem. The result is a simplified planning model and significant savings in computation time.
A derived heuristics based multi-objective optimization procedure for micro-grid scheduling
NASA Astrophysics Data System (ADS)
Li, Xin; Deb, Kalyanmoy; Fang, Yanjun
2017-06-01
With the availability of different types of power generators in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying the load balance constraints and each generator's rated power, several other practicalities, such as limited availability of grid power and restricted ramping of generator power output, must be considered during operation scheduling, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.
A methodological proposal for the development of an HPC-based antenna array scheduler
NASA Astrophysics Data System (ADS)
Bonvallet, Roberto; Hoffstadt, Arturo; Herrera, Diego; López, Daniela; Gregorio, Rodrigo; Almuna, Manuel; Hiriart, Rafael; Solar, Mauricio
2010-07-01
As new astronomy projects choose interferometry to improve angular resolution and minimize costs, preparing and optimizing schedules for an antenna array becomes an increasingly critical task. This problem shares similarities with the job-shop problem, which is known to be NP-hard, making a complete approach infeasible. In the case of ALMA, 18000 projects per season are expected, and the best schedule must be found on the order of minutes. The problem imposes severe difficulties: the large domain of observation projects to be taken into account; a complex objective function composed of several abstract, environmental, and hardware constraints; the number of restrictions imposed; and the dynamic nature of the problem, as weather is an ever-changing variable. A solution can benefit from the use of High-Performance Computing, both for the final implementation to be deployed and for the development process. Our research group proposes the use of both metaheuristic search and statistical learning algorithms in order to create schedules in a reasonable time. How these techniques will be applied is yet to be determined as part of the ongoing research. Several algorithms need to be implemented, tested and evaluated by the team. This work presents the methodology proposed to lead the development of the scheduler. The basic functionality is encapsulated in software components implemented on parallel architectures. These components expose a domain-level interface to the researchers, enabling them to develop early prototypes for evaluating and comparing their proposed techniques.
Design of an Aircrew Scheduling Decision Aid for the 6916th Electronic Security Squadron.
Kopf, Thomas J.
1987-06-01
Because of the great number of possible scheduling alternatives, it is difficult to find an optimal solution to the scheduling problem. Additionally, changes to the original schedule make it even more difficult to find an optimal solution. The emergence of capable microcomputers, decision support ...
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs that minimizes the makespan. The problem is NP-hard, since the two-parallel-machines problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
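A minimal sketch of a simulated annealing move structure for this hybrid setting (the job data, cooling schedule, and the two-machine flowshop recurrence below are illustrative assumptions, not the paper's exact design):

```python
import math, random

FLOW = [(3, 4), (2, 6), (5, 2), (4, 4)]   # assumed flowshop times (M1, M2)
SINGLE = [6, 7, 6, 7]                     # assumed single-machine times

def makespan(assign, order):
    """assign[j] True -> flowshop; order = processing order of flowshop jobs."""
    c1 = c2 = 0.0
    for j in order:
        if assign[j]:
            c1 += FLOW[j][0]
            c2 = max(c2, c1) + FLOW[j][1]   # two-machine flowshop recurrence
    single_total = sum(SINGLE[j] for j in range(len(assign)) if not assign[j])
    return max(c2, single_total)

def anneal(iters=5000, t0=10.0, alpha=0.999):
    n = len(FLOW)
    assign = [random.random() < 0.5 for _ in range(n)]
    order = list(range(n))
    cur = best = makespan(assign, order)
    t = t0
    for _ in range(iters):
        new_assign, new_order = assign[:], order[:]
        if random.random() < 0.5:
            j = random.randrange(n)
            new_assign[j] = not new_assign[j]       # move a job across types
        else:
            j, k = random.randrange(n), random.randrange(n)
            new_order[j], new_order[k] = new_order[k], new_order[j]  # reorder
        cand = makespan(new_assign, new_order)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand <= cur or random.random() < math.exp((cur - cand) / t):
            assign, order, cur = new_assign, new_order, cand
            best = min(best, cur)
        t *= alpha
    return best

random.seed(1)
print("best makespan found:", anneal())
```

The two move types mirror the two decisions in the problem: which machine type each job uses, and in what order the flowshop processes its jobs.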
Single-machine group scheduling problems with deteriorating and learning effect
NASA Astrophysics Data System (ADS)
Xingong, Zhang; Yong, Wang; Shikun, Bai
2016-07-01
The concepts of deteriorating jobs and learning effects have been studied individually in many scheduling problems. However, most studies considering deteriorating and learning effects ignore the fact that production efficiency can be increased by grouping parts and products with similar designs and/or production processes, a phenomenon known as 'group technology' in the literature. In this paper, a new group scheduling model with deteriorating and learning effects is proposed, where the learning effect depends not only on job position but also on the position of the corresponding job group, and the deteriorating effect depends on the job's starting time. This paper shows that the makespan and total completion time problems remain polynomially solvable under the proposed model. In addition, a polynomial-time optimal solution is presented for minimising the maximum lateness problem under a certain agreeable restriction.
Scheduling of an aircraft fleet
NASA Technical Reports Server (NTRS)
Paltrinieri, Massimo; Momigliano, Alberto; Torquati, Franco
1992-01-01
Scheduling is the task of assigning resources to operations. When the resources are mobile vehicles, they describe routes through the served stations. To emphasize this aspect, the problem is usually referred to as the routing problem. In particular, if the vehicles are aircraft and the stations are airports, the problem is known as aircraft routing. This paper describes the solution to such a problem developed in OMAR (Operative Management of Aircraft Routing), a system implemented by Bull HN for Alitalia. In our approach, aircraft routing is viewed as a Constraint Satisfaction Problem. The solving strategy combines network consistency and tree search techniques.
An Optimization of Manufacturing Systems using a Feedback Control Scheduling Model
NASA Astrophysics Data System (ADS)
Ikome, John M.; Kanakana, Grace M.
2018-03-01
In complex production systems that involve multiple processes, unplanned disruptions often make the entire production system vulnerable to a number of problems, which leads to customer dissatisfaction. This has been an ongoing problem that requires research and methods to streamline the entire process, or a model that addresses it. In response, we have developed a feedback control scheduling model that can minimize some of these problems; a number of experiments show that several of them can be eliminated if the correct remedial actions are implemented on time.
NASA Astrophysics Data System (ADS)
Birgin, Ernesto G.; Ronconi, Débora P.
2012-10-01
The single-machine scheduling problem with a common due date and non-identical job ready times is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a comparative computational study on a set of 280 benchmark test problems with up to 1000 jobs.
Optimizing integrated airport surface and terminal airspace operations under uncertainty
NASA Astrophysics Data System (ADS)
Bosson, Christabelle S.
In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing has the potential to improve operational efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improving the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems, and most of the developed models are deterministic. This dissertation presents an alternative method that models the integrated operations using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty, and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer linear programming based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof of concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable significant savings in both surface and air times. The solution computed by the optimization provides more efficient routing and scheduling than the First-Come-First-Served solution. Additionally, a data-driven analysis is performed for the Los Angeles environment, and probabilistic distributions of pertinent uncertainty sources are obtained. A sensitivity analysis is then carried out to assess the methodology's performance and find optimal sampling parameters. Finally, simulations of increasing traffic density in the presence of uncertainty are conducted, first for integrated arrivals and departures, then for integrated surface and air operations. To compare the optimization results and show the benefits of integrated operations, two aircraft separation methods that offer different routing options are implemented. The simulations of integrated air operations and of integrated air and surface operations demonstrate that significant travel time savings, in both total and individual surface and air times, can be obtained when more direct routes are allowed, even in the presence of uncertainty. The resulting routings, however, induce extra takeoff delay for departing flights. As a consequence, some flights cannot meet their initially assigned runway slot, which causes runway position shifting when comparing the runway sequences computed under deterministic and stochastic conditions. The optimization is able to compute an optimal runway schedule that represents an optimal balance between total schedule delays and total travel times.
Scheduling projects with multiskill learning effect.
Zha, Hong; Zhang, Lianying
2014-01-01
We investigate the project scheduling problem with a multiskill learning effect. A new model is proposed to deal with the problem, where both autonomous and induced learning are considered. In order to obtain the optimal solution, a genetic algorithm with specific encoding and decoding schemes is introduced. A numerical example is used to illustrate the proposed model. The computational results show that the learning effect cannot be neglected in project scheduling. By determining the level of induced learning, the project manager can balance the project makespan against total cost.
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
2016-09-09
... Apprenticeship Scheduling (COVAS), which performs machine learning using human expert demonstration, in conjunction with optimization, to automatically and efficiently produce optimal solutions to challenging real-world scheduling problems. COVAS first learns a policy from human scheduling demonstration via apprenticeship learning, then uses this initial solution to provide a tight bound on the value of the optimal solution, thereby substantially ...
Cui, Laizhong; Lu, Nan; Chen, Fu
2014-01-01
Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves its efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it into a min-cost flow problem, allowing it to be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments.
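To illustrate the min-cost flow reduction in spirit (the graph construction below, mapping packet segments to receiving peers, is an illustrative assumption rather than the paper's exact construction), a small sketch using networkx:

```python
import networkx as nx

# Segments to be pushed and peers with limited upload slots (assumed data).
segments = ["s1", "s2", "s3"]
peers = {"pA": 2, "pB": 1}            # peer -> upload capacity per slot
delay = {("s1", "pA"): 1, ("s1", "pB"): 3,
         ("s2", "pA"): 2, ("s2", "pB"): 1,
         ("s3", "pA"): 2, ("s3", "pB"): 2}

G = nx.DiGraph()
G.add_node("src", demand=-len(segments))   # push every segment exactly once
G.add_node("sink", demand=len(segments))
for s in segments:
    G.add_edge("src", s, capacity=1, weight=0)
for p, cap in peers.items():
    G.add_edge(p, "sink", capacity=cap, weight=0)
for (s, p), d in delay.items():
    G.add_edge(s, p, capacity=1, weight=d)  # edge cost models expected delay

flow = nx.min_cost_flow(G)                  # polynomial-time solvable
for s in segments:
    for p, f in flow[s].items():
        if f:
            print(f"push {s} via {p}")
```

The appeal of this reduction is exactly the point made in the abstract: once the scheduling decision is a flow, standard polynomial-time min-cost flow solvers replace an exponential search.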
Solving Open Job-Shop Scheduling Problems by SAT Encoding
NASA Astrophysics Data System (ADS)
Koshimura, Miyuki; Nabeshima, Hidetomo; Fujita, Hiroshi; Hasegawa, Ryuzo
This paper tries to solve open Job-Shop Scheduling Problems (JSSPs) by translating them into Boolean satisfiability testing problems (SAT). The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We proved that the best known upper bounds of 678 for ABZ9 and 884 for YN1 are indeed optimal. We also improved the upper bound of YN2 and the lower bounds of ABZ8, YN2, YN3 and YN4.
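To give the flavor of such an encoding (a minimal time-indexed sketch in the style of Crawford and Baker; the paper's exact clause set may differ), the following emits DIMACS CNF clauses over start-time variables s[o][t]:

```python
from itertools import product

# Assumed tiny instance: each operation = (job, machine, duration);
# consecutive list entries of the same job are in precedence order.
ops = [(0, 0, 2), (0, 1, 2),    # job 0: machine 0 then machine 1
       (1, 1, 2), (1, 0, 2)]    # job 1: machine 1 then machine 0
HORIZON = 8
var = {}                         # (op, t) -> CNF variable number
for o, t in product(range(len(ops)), range(HORIZON)):
    var[(o, t)] = len(var) + 1

clauses = []
for o in range(len(ops)):
    # Each operation starts at least once within the horizon...
    clauses.append([var[(o, t)] for t in range(HORIZON)])
    # ...and at most once.
    for t in range(HORIZON):
        for u in range(t + 1, HORIZON):
            clauses.append([-var[(o, t)], -var[(o, u)]])

for a in range(len(ops)):
    for b in range(a + 1, len(ops)):
        ja, ma, da = ops[a]
        jb, mb, db = ops[b]
        for t in range(HORIZON):
            for u in range(HORIZON):
                overlap = t < u + db and u < t + da
                if ma == mb and overlap:            # machine disjunction
                    clauses.append([-var[(a, t)], -var[(b, u)]])
                if ja == jb and b == a + 1 and u < t + da:
                    clauses.append([-var[(a, t)], -var[(b, u)]])  # precedence

print(f"p cnf {len(var)} {len(clauses)}")
for c in clauses:
    print(" ".join(map(str, c)), 0)
```

Optimality proofs like those in the paper then follow by showing the CNF with horizon equal to the bound is satisfiable while the CNF with horizon one less is unsatisfiable.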
McGinnis, Molly A; Houchins-Juárez, Nealetta; McDaniel, Jill L; Kennedy, Craig H
2010-01-01
Three participants whose problem behavior was maintained by contingent attention were exposed to 45-min presessions in which attention was withheld, provided on a fixed-time (FT) 15-s schedule, or provided on an FT 120-s schedule. Following each presession, participants were tested in a 15-min session similar to the social attention condition of an analogue functional analysis. The results showed that establishing operation conditions increased problem behavior during tests and that abolishing operation conditions decreased problem behavior during tests.
Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode
NASA Astrophysics Data System (ADS)
Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.
2012-12-01
Nowadays, using satellites in space to observe the ground is a major method of obtaining ground information. With the development of space science and technology, fields such as the military and the economy have ever-greater requirements for space technology because of satellites' wide coverage, timeliness, and freedom from area and national limits. At the same time, because of the wide use of all kinds of satellites, sensors, repeater satellites and ground receiving stations, ground control systems now face great challenges. Therefore, how to make the best use of satellite resources becomes an important problem for ground control systems. Satellite scheduling distributes resources to all tasks without conflict, so as to complete as many tasks as possible and meet user requirements, subject to the constraints of satellites, sensors and ground receiving stations. Considering task size, tasks can be divided into point tasks and area tasks; this paper considers only point targets. In this paper, a description of the satellite scheduling problem and a brief introduction to the theory of satellite scheduling are first given. We also analyze the resource and task constraints in satellite scheduling, and the input and output flows of the scheduling process are briefly described. On the basis of these analyses, we put forward a scheduling model, a multi-variable optimization model for multi-satellite and point-target tasks in swinging mode, in which the scheduling problem is transformed into a parametric optimization problem; the parameter we wish to optimize is the swinging angle of every time window. In view of efficiency and accuracy, some important problems related to satellite scheduling, such as the angular relation between satellites and ground targets, positive and negative swinging angles, and the computation of time windows, are analyzed and discussed, and several strategies to improve the efficiency of this model are put forward. To solve the model, we introduce the concept of an activity sequence map, by which the choice of activity and the start time of the activity can be separated. We also put forward three neighborhood operators to search the result space; the forward-movement remaining time and the backward-movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Finally, an algorithm based on a genetic algorithm is put forward to solve the problem and model. Population initialization, the crossover operator, the mutation operator, individual evaluation, a collision decrease operator, a selection operator and a collision elimination operator are designed in the paper. The scheduling result and a simulation of a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25 degrees. The results show that the model and the algorithm are more effective than those without swinging mode.
Frutos, M; Méndez, M; Tohmé, F; Broz, D
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGAII, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions in the approximate Pareto frontier.
Innately Split Model for Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Ikeda, Kokolo; Kobayashi, Sigenobu
The Job-shop Scheduling Problem (JSP) is one of the most difficult benchmark problems. GA approaches often fail to find the global optimum because of the deceptive UV-structure of JSPs. In this paper, we introduce a novel GA framework, the Innately Split Model (ISM), which prevents the UV-phenomenon, and discuss its power in particular. We then analyze the structure of JSPs with the help of the UV-structure hypothesis, and finally we show ISM's excellent performance on JSP.
Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun
2016-02-01
As the next-generation power grid, the smart grid will be integrated with a variety of novel communication technologies to support explosive data traffic and diverse quality-of-service (QoS) requirements. Cognitive radio (CR), which has the favorable ability to improve spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee differential QoS, the SGUs are assigned different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channel allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. Through online network training, the HDP can learn from the activities of primary users and SGUs and adjust the scheduling decisions to minimize transmission delay. Simulation results illustrate that the proposed priority policy ensures low transmission delay for high-priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing differential QoS in the smart grid.
Algorithms for parallel flow solvers on message passing architectures
NASA Technical Reports Server (NTRS)
Vanderwijngaart, Rob F.
1995-01-01
The purpose of this project has been to identify and test suitable technologies for implementing fluid flow solvers, possibly coupled with structures and heat-equation solvers, on MIMD parallel computers. In the course of this investigation much attention has been paid to efficient domain decomposition strategies for ADI-type algorithms. Multi-partitioning derives its efficiency from the assignment of several blocks of grid points to each processor in the parallel computer: a coarse-grain parallelism is obtained, and a near-perfect load balance results. In uni-partitioning, every processor receives responsibility for exactly one block of grid points instead of several, which necessitates fine-grain pipelined program execution to obtain a reasonable load balance. Although fine-grain parallelism is less desirable on many systems, especially high-latency networks of workstations, uni-partition methods are still in wide use in production codes for flow problems. Consequently, it remains important to achieve good efficiency with this technique, even though it has essentially been superseded by multi-partitioning for parallel ADI-type algorithms. Another reason for concentrating on improving the performance of pipeline methods is their applicability in other types of flow-solver kernels with stronger implied data dependence. Analytical expressions can be derived for the size of the dynamic load imbalance incurred in traditional pipelines, and from these the optimal first-processor retardation, which leads to the shortest total completion time for the pipeline process, can be determined. Theoretical predictions of pipeline performance with and without this optimization match experimental observations on the iPSC/860 very well. Analysis of pipeline performance also highlights the effect of careless grid partitioning in flow solvers that employ pipeline algorithms: if grid blocks at boundaries are not at least as large in the wall-normal direction as those immediately adjacent to them, the first processor in the pipeline receives a computational load smaller than that of subsequent processors, magnifying the pipeline slowdown effect. Extra compensation is needed for grid boundary effects, even if all grid blocks are equally sized.
Li, Guo; Lv, Fei; Guan, Xu
2014-01-01
This paper investigates a collaborative scheduling model for an assembly system in which multiple suppliers deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two scenarios to examine the impact of the Supply-Hub: in one, suppliers and manufacturers make their decisions separately; in the other, the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; we therefore propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that collaborative scheduling by the Supply-Hub outperforms the separate decisions made by each manufacturer and supplier. Furthermore, we show that the proposed algorithm has good convergence and reliability and can be applied to more complicated supply chain environments. PMID:24892104
Nurse Scheduling by Cooperative GA with Effective Mutation Operator
NASA Astrophysics Data System (ADS)
Ohki, Makoto
In this paper, we propose effective mutation operators for a cooperative genetic algorithm (CGA) applied to a practical nurse scheduling problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem in which many requirements must be considered. In real hospitals, the schedule changes frequently, and these changes cause various problems, for example a drop in the nursing level. We describe a technique for reoptimizing the nurse schedule in response to such changes. The conventional CGA has good local-search ability through its crossover operator, but it often stagnates in unfavorable situations because its global-search ability is poor. When the optimization stagnates for many generation cycles, the search point, the population in this case, is likely caught in a wide local-minimum area; escaping such an area requires a small change in the population. Based on this consideration, we propose a mutation operator activated according to the optimization speed: when the optimization stagnates, in other words when the optimization speed decreases, the mutation introduces small changes into the population, enabling it to escape from the local-minimum area. However, this mutation operator requires two well-defined parameters, which means the user has to consider their values carefully. To solve this problem, we propose a periodic mutation operator defined by only one parameter; this simplified operator is effective over a wide range of the parameter's values.
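A minimal sketch of the two operators described above, assuming a roster encoded as one integer shift code per day; the stagnation test, the parameter names and the shift encoding are illustrative, not the authors' exact formulation.

```python
import random

def stagnation_mutation(population, best_history, window=50, rate=0.02):
    """Speed-dependent mutation: perturb the population only when the best
    fitness has not improved over the last `window` generations."""
    stalled = (len(best_history) > window
               and max(best_history[-window:]) <= best_history[-window - 1])
    if stalled:
        for individual in population:
            for i in range(len(individual)):
                if random.random() < rate:
                    individual[i] = random.choice((0, 1, 2))  # e.g. day/evening/night
    return population

def periodic_mutation(population, generation, period=100, rate=0.02):
    """Simplified variant: mutate every `period` generations; with `rate`
    fixed, `period` is the single parameter the user must choose."""
    if generation > 0 and generation % period == 0:
        for individual in population:
            for i in range(len(individual)):
                if random.random() < rate:
                    individual[i] = random.choice((0, 1, 2))
    return population
```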
NASA Astrophysics Data System (ADS)
Kyrychok, Vladyslav; Torop, Vasyl
2018-03-01
The present paper is devoted to the assessment of probable crack growth in the nozzle zones of pressure vessels under cyclic seismic loads. Approaches to modeling distributed pipeline systems connected to equipment are proposed. The possibility of jointly using different finite-element program packages for accurate estimation of the strength of connected pipeline and pressure vessel systems is shown and justified. Based on the developed approach, the authors propose checking how dangerous defects in the nozzle region are and evaluating the residual life of the system.
Distributed fiber optic system for oil pipeline leakage detection
NASA Astrophysics Data System (ADS)
Paranjape, R.; Liu, N.; Rumple, C.; Hara, Elmer H.
2003-02-01
We present a novel approach to the detection of leakage in oil pipelines using fiber-optic distributed sensing, a presence-of-oil actuator, and optical time-domain reflectometry (OTDR). While the basic concepts of our approach are well understood, integrating the components into a complete system is a real-world engineering design problem. Our focus has been on the development and testing of the actuator design using installed dark fiber. Initial results are promising; however, studies of the long-term effects of exposure to the environment are still pending.
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double-precision floating-point results every clock cycle. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capability, it was expected that memory bandwidth would limit performance on many kernels. Measured performance on three kernels was, however, lower than memory bandwidth limitations alone would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
The TJO-OAdM robotic observatory: OpenROCS and dome control
NASA Astrophysics Data System (ADS)
Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan
2010-07-01
The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO-OAdM) is a small-class observatory operating under completely unattended control. Key problems, both in hardware and in software, must be solved when robotic control is envisaged. We present OpenROCS (Robotic Observatory Control System), an open-source platform developed for the robotic control of the TJO-OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed specifically to protect the system at this identified single point of failure. It integrates redundant control and a rain-detector signal for alarm triggering, and it responds autonomously if communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and the dome control system have proven highly reliable.
Engineering considerations for corrosion monitoring of gas gathering pipeline systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braga, T.G.; Asperger, R.G.
1987-01-01
Proper corrosion monitoring of gas gathering pipelines requires a system review to determine the appropriate monitoring locations and techniques. This paper develops and discusses a classification of conditions such as flow regime and gas composition. Also discussed are junction categories which, for corrosion monitoring, need to be considered from two points of view: the first relates to fluid flow in the line, and the second to corrosion inhibitor movement along the pipeline. The appropriate application of the various monitoring techniques, such as coupons, hydrogen detectors, electrical resistance probes and linear polarization probes, is discussed in relation to flow regime and gas composition. Problems caused by semi-conduction from iron sulfide are considered. Advantages and disadvantages of fluid gathering methods such as pots and flow-through drips are discussed in relation to their reliability as on-line monitoring locations.
Research on Buckling State of Prestressed Fiber-Strengthened Steel Pipes
NASA Astrophysics Data System (ADS)
Wang, Ruheng; Lan, Kunchang
2018-01-01
The main methods for restoring damaged oil and gas pipelines include welding reinforcement, fixture reinforcement and fiber-material reinforcement. Owing to the severe corrosion problems of pipes in practical use, research on the renovation and consolidation of damaged pipes has gained extensive attention from experts and scholars both at home and abroad. This paper analyzes the mechanical behavior of reinforced pressure pipelines, focusing on critical buckling and the intensity of pressure pipeline failure, and provides a theoretical basis for prestressed fiber-strengthened steel pipes. Deformation coordination equations and buckling control equations for steel pipes under prestress are derived using the Rayleigh-Ritz method, an approximation method based on the stationary-potential-energy theorem and the minimum-potential-energy principle. From the deformation of prestressed steel pipes, the deflection differential equation is established and the critical buckling value under prestress is obtained.
Chen, I-Hsuan; Aguilar, Hillary Andaluz; Paez Paez, J Sebastian; Wu, Xiaofeng; Pan, Li; Wendt, Michael K; Iliuk, Anton B; Zhang, Ying; Tao, W Andy
2018-05-15
Glycoproteins comprise more than half of current FDA-approved protein cancer markers, but the development of new glycoproteins as disease biomarkers has been stagnant. Here we present a pipeline for developing glycoprotein biomarkers from extracellular vesicles (EVs) by integrating quantitative glycoproteomics with a novel reverse-phase glycoprotein array, and we apply it to identify novel biomarkers for breast cancer. EV glycoproteomics shows promise in circumventing the problems plaguing current serum/plasma glycoproteomics and allowed us to identify hundreds of glycoproteins that have not been identified in blood. We identified 1,453 unique glycopeptides representing 556 glycoproteins in EVs, among which 20 were verified to be significantly higher in individual breast cancer patients. We further applied a novel glyco-specific reverse-phase protein array to quantify a subset of the candidates. Together, this study demonstrates the great potential of this integrated pipeline for biomarker discovery.
NASA Astrophysics Data System (ADS)
Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju
2014-04-01
Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging-task scheduling problem for a high-altitude airship under emergency conditions, programming models are constructed by analyzing the main constraints, taking maximum task benefit and minimum energy consumption as the two optimization objectives. First, a hierarchical architecture is adopted to decompose the scheduling problem into three subproblems: task ranking, valuable-task detection, and energy-conservation optimization. Algorithms are then designed for the subproblems, whose solutions correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. The paper introduces in detail the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines so as to decrease the total energy consumed by cruising. Finally, application results and a comparative analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems
NASA Technical Reports Server (NTRS)
Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil
1992-01-01
This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
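The min-conflicts repair loop on the n-queens problem mentioned above fits in a dozen lines. A minimal sketch; the step budget and tie-breaking are our own choices.

```python
import random

def min_conflicts_queens(n, max_steps=100_000):
    """Start from a random (inconsistent) assignment, then repeatedly move a
    conflicted queen to the row in its column that minimizes conflicts."""
    cols = [random.randrange(n) for _ in range(n)]  # cols[c] = row of queen in column c

    def conflicts(col, row):
        return sum(1 for c in range(n) if c != col and
                   (cols[c] == row or abs(cols[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, cols[c]) > 0]
        if not conflicted:
            return cols                      # a consistent solution
        c = random.choice(conflicted)
        cols[c] = min(range(n), key=lambda r: conflicts(c, r))
    return None                              # step budget exhausted

print(min_conflicts_queens(8))
```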
NASA Astrophysics Data System (ADS)
Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.
2015-11-01
When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random keys encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and a first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
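A minimal sketch of harmony search over a random-keys encoding, the combination named above: a real vector is decoded into a job permutation by sorting. The toy cost function and the parameter values (hms, hmcr, par, bw) are illustrative defaults, not the paper's tuned settings.

```python
import random

def decode(keys):
    """Random-keys decoding: order job indices by their key values."""
    return sorted(range(len(keys)), key=keys.__getitem__)

def harmony_search(cost, n_jobs, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Minimize cost(permutation) over random-key vectors."""
    memory = [[random.random() for _ in range(n_jobs)] for _ in range(hms)]
    scores = [cost(decode(h)) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_jobs):
            if random.random() < hmcr:                    # memory consideration
                v = random.choice(memory)[j]
                if random.random() < par:                 # pitch adjustment
                    v = min(1.0, max(0.0, v + random.uniform(-bw, bw)))
            else:                                         # random selection
                v = random.random()
            new.append(v)
        s = cost(decode(new))
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                             # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return decode(memory[best]), scores[best]

# Toy cost: total weighted completion time on a single machine.
p = [3, 1, 4, 1, 5]; w = [2, 1, 3, 1, 2]
def cost(perm):
    t = total = 0
    for j in perm:
        t += p[j]; total += w[j] * t
    return total
print(harmony_search(cost, 5))
```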
Applications of colored petri net and genetic algorithms to cluster tool scheduling
NASA Astrophysics Data System (ADS)
Liu, Tung-Kuan; Kuo, Chih-Jen; Hsiao, Yung-Chin; Tsai, Jinn-Tsong; Chou, Jyh-Horng
2005-12-01
In this paper, we propose a method that uses a Coloured Petri Net (CPN) and a genetic algorithm (GA) to obtain an optimal deadlock-free schedule and to solve the re-entrant problem for the flexible process of a cluster tool. The processes by which a cluster tool produces a wafer are usually classified into three types: 1) sequential, 2) parallel, and 3) sequential-parallel. These processes, however, are not economical for producing a variety of wafers in small volumes. This paper therefore proposes a flexible process in which the operations for fabricating wafers are arranged freely to achieve the best utilization of the cluster tool. The flexible process may suffer deadlock and re-entrant problems, which can be detected by the CPN. GAs, on the other hand, have been applied to find optimal schedules for many types of manufacturing processes. We therefore integrate CPN and GAs to obtain an optimal schedule that handles the deadlock and re-entrant problems of the flexible process of the cluster tool.
Naval Postgraduate School Scheduling Support System (NPS4)
1992-03-01
[Scanned table-of-contents fragments: NPSS, Final Exam Scheduler, Presentation System, User Interface, Final Exam Model, The Class Schedulers, Assessment of Problem Model, Information Distribution, NPSS Optimization Process, NPSS Performance.]
A scheduling algorithm for Spacelab telescope observations
NASA Technical Reports Server (NTRS)
Grone, B.
1982-01-01
An algorithm is developed for sequencing and scheduling of observations of stellar targets by equipment on Spacelab. The method is a general one. The scheduling problem is defined and examined. The method developed for its solution is documented. Suggestions for further development and implementation of this method are made.
Integrating the ODI-PPA scientific gateway with the QuickReduce pipeline for on-demand processing
NASA Astrophysics Data System (ADS)
Young, Michael D.; Kotulla, Ralf; Gopu, Arvind; Liu, Wilson
2014-07-01
As imaging systems improve, the size of astronomical data has continued to grow, making the transfer and processing of data a significant burden. To solve this problem for the WIYN Observatory One Degree Imager (ODI), we developed the ODI-Portal, Pipeline, and Archive (ODI-PPA) science gateway, integrating the data archive, data reduction pipelines, and a user portal. In this paper, we discuss the integration of the QuickReduce (QR) pipeline into PPA's Tier 2 processing framework. QR is a set of parallelized, stand-alone Python routines accessible to all users, and operators who can create master calibration products and produce standardized calibrated data, with a short turn-around time. Upon completion, the data are ingested into the archive and portal, and made available to authorized users. Quality metrics and diagnostic plots are generated and presented via the portal for operator approval and user perusal. Additionally, users can tailor the calibration process to their specific science objective(s) by selecting custom datasets, applying preferred master calibrations or generating their own, and selecting pipeline options. Submission of a QuickReduce job initiates data staging, pipeline execution, and ingestion of output data products all while allowing the user to monitor the process status, and to download or further process/analyze the output within the portal. User-generated data products are placed into a private user-space within the portal. ODI-PPA leverages cyberinfrastructure at Indiana University including the Big Red II supercomputer, the Scholarly Data Archive tape system and the Data Capacitor shared file system.
NASA Astrophysics Data System (ADS)
Vetrov, A.
2009-05-01
The condition of underground constructions, communication and supply systems in cities has to be monitored and controlled periodically in order to prevent breakage, which can result in serious accidents, especially in urban areas. Underground structures made of steel, such as the pipelines widely used for water, gas and heat supply, are at the greatest risk of damage. To ensure pipeline survivability it is necessary to carry out prompt and inexpensive monitoring of pipeline condition, and induced electromagnetic methods of geophysics can provide such diagnostics. The highly developed surface in urban areas is one of the factors hampering the application of electromagnetic diagnostics: the main problem is finding an appropriate place for the source line and electrodes on a limited surface area, and positioning them optimally relative to the observation path so as to minimize their influence on the observed data. The author performed a number of diagnostic experiments on an underground heating-system pipeline using different positions of the source line and electrodes. The experiments were made on a 200-meter section over a pipeline buried 2 meters deep. The admissible length of the source line and the angle between the source line and the observation path were determined: for the experimental conditions and accuracy, the minimal length of the source line was 30 meters, and the maximum admissible angular departure from the perpendicular position was 30 degrees. The work was undertaken in cooperation with the diagnostics company DIsSO, Saint-Petersburg, Russia.
[Pressure control in medical gas distribution systems].
Bourgain, J L; Benayoun, L; Baguenard, P; Haré, G; Puizillout, J M; Billard, V
1997-01-01
To assess whether the pressure gauges downstream of pressure regulators are accurate enough to ensure that the pressure in the O2 pipeline is always higher than in the Air pipeline, and that the pressure in the latter is higher than in the N2O pipeline. A pressure difference of at least 0.4 bar between two medical gas supply systems is recommended to avoid reflow of either N2O or Air into the O2 pipeline through a faulty mixer or proportioning device. Prospective technical comparative study. Readings of 32 Bourdon gauges were compared with data obtained with a calibrated reference transducer; two sets of measurements were performed at a one-month interval. Pressure differences between the Bourdon gauges and the reference transducer were 8% (0.28 bar) on average, against a theoretical maximal error of less than 2.5%. During the first set of measurements, Air pressure was higher than O2 pressure in one location, and N2O pressure higher than Air pressure in another. After an increase in the O2 pipeline pressure and careful setting of the pressure regulators, this problem was not observed in the second set of measurements. The actual accuracy of the Bourdon gauges was not sufficient to ensure that O2 pressure was always above Air pressure. Regular checks of these pressure gauges are therefore essential, and replacement of the faulty Bourdon gauges with more accurate transducers should be considered. As an alternative, increasing the pressure difference between the O2 and Air pipelines to at least 0.6 bar is recommended.
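The recommended pressure ordering and margin translate directly into a monitoring check; a minimal sketch, with the function name and return convention ours. The study's closing recommendation would correspond to passing margin=0.6 for the O2-Air pair.

```python
def check_gas_pressures(p_o2, p_air, p_n2o, margin=0.4):
    """Verify the ordering O2 > Air > N2O with at least `margin` bar
    between adjacent pipelines (all pressures in bar)."""
    problems = []
    if p_o2 - p_air < margin:
        problems.append(f"O2-Air difference {p_o2 - p_air:.2f} bar < {margin} bar")
    if p_air - p_n2o < margin:
        problems.append(f"Air-N2O difference {p_air - p_n2o:.2f} bar < {margin} bar")
    return problems  # an empty list means the ordering is safe

print(check_gas_pressures(5.0, 4.5, 4.2))  # flags the Air-N2O margin
```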
Binary Trees and Parallel Scheduling Algorithms.
1980-09-01
[Scanned fragments: a job that has not completed by its due time is tardy; the preemptive schedule obtained by the algorithm of Section 2.1.2 also minimizes the total tardiness, so this problem is easily solved in parallel. The remaining text is reference fragments, e.g., Horn, W. A., "Some simple scheduling algorithms," Naval Res. Logist. Quart., Vol. 21, pp. 177-185, 1974.]
WeFold: A Coopetition for Protein Structure Prediction
Khoury, George A.; Liwo, Adam; Khatib, Firas; Zhou, Hongyi; Chopra, Gaurav; Bacardit, Jaume; Bortot, Leandro O.; Faccioli, Rodrigo A.; Deng, Xin; He, Yi; Krupa, Pawel; Li, Jilong; Mozolewska, Magdalena A.; Sieradzan, Adam K.; Smadbeck, James; Wirecki, Tomasz; Cooper, Seth; Flatten, Jeff; Xu, Kefan; Baker, David; Cheng, Jianlin; Delbem, Alexandre C. B.; Floudas, Christodoulos A.; Keasar, Chen; Levitt, Michael; Popović, Zoran; Scheraga, Harold A.; Skolnick, Jeffrey; Crivelli, Silvia N.; Players, Foldit
2014-01-01
The protein structure prediction problem continues to elude scientists. Despite the introduction of many methods, only modest gains were made over the last decade for certain classes of prediction targets. To address this challenge, a social-media-based worldwide collaborative effort named WeFold was undertaken by thirteen labs. During the collaboration, the labs were simultaneously competing with each other. Here, we present the first attempt at "coopetition" in scientific research, applied to the protein structure prediction and refinement problems. The coopetition was made possible by allowing the participating labs to contribute different components of their protein structure prediction pipelines and to create new hybrid pipelines, which they tested during CASP10. This manuscript describes both the successes and the areas needing improvement identified throughout the first WeFold experiment, and discusses the efforts underway to advance this initiative. A footprint of all contributions and structures is publicly accessible at http://www.wefold.org. PMID:24677212
NASA Astrophysics Data System (ADS)
Kosterina, E. A.
2018-01-01
The leakage of a polluting liquid from a longitudinal crack in a pipeline lying on the ground surface is considered. The two-dimensional nonstationary mathematical model is based on the mass-balance equation written in terms of pressure, which is satisfied in a domain with an unknown moving boundary; this domain corresponds to the contaminated zone. A function characterizing the region where the equation acts is introduced, which makes it possible to restate the problem in a fixed domain. Two types of finite-difference approximation of the problem are proposed, differing in the approximation of the convective term: an upwind (counter-current) approximation and an approximation along characteristics. The results of computational experiments, which favor the method of characteristics, are presented, and the method's application is illustrated by an example of the spread of oil pollution.
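The upwind (counter-current) treatment of the convective term can be illustrated on the model 1-D advection equation u_t + a u_x = 0; a minimal sketch of that standard scheme, not the paper's two-dimensional pressure formulation.

```python
import numpy as np

def upwind_advection(u0, velocity, dx, dt, steps):
    """First-order upwind scheme for u_t + a u_x = 0; the difference is taken
    against the flow, which requires the CFL condition |a| dt / dx <= 1."""
    u = u0.copy()
    c = velocity * dt / dx
    assert abs(c) <= 1.0, "CFL condition violated"
    for _ in range(steps):
        if velocity >= 0:
            u[1:] = u[1:] - c * (u[1:] - u[:-1])    # backward difference
        else:
            u[:-1] = u[:-1] - c * (u[1:] - u[:-1])  # forward difference
    return u

x = np.linspace(0, 1, 101)
u0 = np.exp(-200 * (x - 0.2) ** 2)  # initial pollutant concentration
print(upwind_advection(u0, velocity=1.0, dx=0.01, dt=0.005, steps=100).max())
```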
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. Simple rules, however, are not efficient enough, owing to their simplicity and lack of general insight, whereas composite dispatching rules perform very well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied, with the objectives of minimizing mean flow time and mean tardiness. A 0-1 mixed-integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed, generated within a genetic programming framework with properly chosen operators. Furthermore, a discrete-event simulation model is built to compare the performance of the four new heuristic rules with six heuristic rules adapted from the literature. The experimental results make clear that the composite dispatching rules formed by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
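A composite dispatching rule of the kind genetic programming evolves is just an arithmetic combination of job attributes. A minimal sketch with hypothetical weights and attribute names, not one of the four rules found in the paper.

```python
def composite_priority(job, now):
    """Combine processing time, positive slack and setup time into one score
    (illustrative weights; lower score = higher dispatch priority)."""
    slack = job["due"] - now - job["remaining"]
    return job["proc"] + 0.5 * max(slack, 0.0) + 2.0 * job["setup"]

def next_job(queue, now):
    """Dispatch the queued job with the smallest composite score."""
    return min(queue, key=lambda j: composite_priority(j, now))

queue = [
    {"proc": 4.0, "due": 20.0, "remaining": 9.0, "setup": 1.0},
    {"proc": 2.0, "due": 12.0, "remaining": 5.0, "setup": 0.5},
]
print(next_job(queue, now=6.0))  # picks the second job (score 3.5 vs 8.5)
```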
An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem
NASA Astrophysics Data System (ADS)
Afshar Nadjafi, Behrouz; Shadrokh, Shahram
This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch-and-bound algorithm for an extended form of the problem in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch-and-bound tree. Finally, some test problems are solved and computational results are reported.
An Observer's View of the ORAC System at UKIRT
NASA Astrophysics Data System (ADS)
Wright, G. S.; Bridger, A. B.; Pickup, D. A.; Tan, M.; Folger, M.; Economou, F.; Adamson, A. J.; Currie, M. J.; Rees, N. P.; Purves, M.; Kackley, R. D.
The Observatory Reduction and Acquisition Control system (ORAC) was commissioned with its first instrument at the UK Infrared Telescope (UKIRT) in October 1999, and with all of the other UKIRT instrumentation this year. ORAC's advance preparation Observing Tool makes it simpler to prepare and carry out observations. Its Observing Manager gives observers excellent feedback on their observing as it goes along, reducing wasted time. The ORAC pipelined Data Reduction system produces near-publication quality reduced data at the telescope. ORAC is now in use for all observing at UKIRT, including flexibly scheduled nights and service observing. This paper provides an observer's perspective of the system and its performance.
Identification of failure type in corroded pipelines: a bayesian probabilistic approach.
Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J
2010-07-15
Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies and substitution procedure design that should be oriented to reduce increased hazards. This work proposes a Bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry.
A novel pipeline based FPGA implementation of a genetic algorithm
NASA Astrophysics Data System (ADS)
Thirer, Nonel
2014-05-01
To solve problems for which an analytical solution is not available, more and more bio-inspired computation techniques have been applied in recent years. One efficient algorithm is the genetic algorithm (GA), which imitates the process of biological evolution, finding the solution by the mechanism of "natural selection", in which the strong have higher chances to survive. A genetic algorithm is an iterative procedure operating on a population of individuals called "chromosomes" or "possible solutions" (usually represented by a binary code). Like biological evolution, the GA applies several processes to the population's individuals to produce a new population. To provide a high-speed solution, pipelined FPGA hardware implementations are used, with an n-stage pipeline for an n-phase genetic algorithm. FPGA pipeline implementations are constrained by the different execution times of the stages and by the FPGA chip resources. To reduce these difficulties, we propose a bio-inspired technique that modifies the crossover step to use non-identical twins: two chosen chromosomes (parents) produce two new chromosomes (children), not only one as in the classical GA. We analyze the contribution of this method to reducing execution time in asynchronous and synchronous pipelines, and also the possibility of a cheaper FPGA implementation using smaller populations. The full hardware architecture of an FPGA implementation for our target Altera development card is presented and analyzed.
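The non-identical-twins idea reduces, at its core, to a one-point crossover that keeps both children instead of one; a minimal sketch assuming binary-coded chromosomes of equal length.

```python
import random

def twin_crossover(parent_a, parent_b):
    """One-point crossover producing two complementary, non-identical
    children ('twins'), rather than a single child as in the classical
    GA variant discussed above."""
    point = random.randrange(1, len(parent_a))
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

a = [0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 0, 1, 1]
print(twin_crossover(a, b))
```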
Dynamic water behaviour due to one trapped air pocket in a laboratory pipeline apparatus
NASA Astrophysics Data System (ADS)
Bergant, A.; Karadžić, U.; Tijsseling, A.
2016-11-01
Trapped air pockets may cause severe operational problems in hydropower and water supply systems. A locally isolated air pocket creates distinct amplitude, shape and timing of pressure pulses. This paper investigates the dynamic behaviour of a single trapped air pocket. The air pocket is incorporated as a boundary condition into the discrete gas cavity model (DGCM). The DGCM allows small gas cavities to form at computational sections in the method of characteristics (MOC). The growth of the pocket and gas cavities is described by the water hammer compatibility equation(s), the continuity equation for the cavity volume, and the equation of state of an ideal gas. Isentropic behaviour is assumed for the trapped gas pocket and an isothermal bath for the small gas cavities. Experimental investigations have been performed in a laboratory pipeline apparatus. The apparatus consists of an upstream-end high-pressure tank, a horizontal steel pipeline (total length 55.37 m, inner diameter 18 mm), four valve units positioned along the pipeline including the end points, and a downstream-end tank. A trapped air pocket is captured between two ball valves at the downstream end of the pipeline. The transient event is initiated by rapid opening of the upstream-end valve; the downstream-end valve stays closed during the event. Predicted and measured results for a few typical cases are compared and discussed.
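The isentropic law assumed for the trapped pocket, p V^gamma = const, gives the pocket pressure directly from its volume; a minimal sketch with illustrative values.

```python
def pocket_pressure(p0, v0, v, gamma=1.4):
    """Isentropic trapped-air-pocket law p * V**gamma = const:
    absolute pressure after the pocket volume changes from v0 to v."""
    return p0 * (v0 / v) ** gamma

# Halving the pocket volume raises its absolute pressure by 2**1.4 ~ 2.64x.
print(pocket_pressure(p0=1.0e5, v0=2.0e-4, v=1.0e-4))
```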
Better approximation guarantees for job-shop scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; Paterson, M.; Srinivasan, A.
1997-06-01
Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.
Continual planning and scheduling for managing patient tests in hospital laboratories.
Marinagi, C C; Spyropoulos, C D; Papatheodorou, C; Kokkotos, S
2000-10-01
Hospital laboratories perform examination tests on patients in order to assist medical diagnosis or monitor therapy progress. Planning and scheduling patient requests for examination tests is a complicated problem because it concerns both minimizing patient stay in hospital and maximizing laboratory resource utilization. In the present paper, we propose an integrated patient-wise planning and scheduling system that supports the dynamic and continual nature of the problem. The proposed combination of multiagent and blackboard architectures allows the dynamic creation of agents that share a set of knowledge sources and a knowledge base to service patient test requests.
An Improved Memetic Algorithm for Break Scheduling
NASA Astrophysics Data System (ADS)
Widl, Magdalena; Musliu, Nysret
In this paper we consider a complex real-life break scheduling problem. This problem of high practical relevance arises in many working areas, e.g., air traffic control and other fields where supervisory personnel are employed. The objective is to assign breaks to employees such that various constraints reflecting legal demands or ergonomic criteria are satisfied and staffing-requirement violations are minimised.
ERIC Educational Resources Information Center
McGinnis, Molly A.; Houchins-Juarez, Nealetta; McDaniel, Jill L.; Kennedy, Craig H.
2010-01-01
Three participants whose problem behavior was maintained by contingent attention were exposed to 45-min presessions in which attention was withheld, provided on a fixed-time (FT) 15-s schedule, or provided on an FT 120-s schedule. Following each presession, participants were then tested in a 15-min session similar to the social attention condition…
Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)
NASA Astrophysics Data System (ADS)
Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman
2012-01-01
In this paper, a genetic algorithm is used to solve job-shop scheduling problems (JSSPs). One JSSP example is discussed, and we describe how such problems can be solved by a genetic algorithm. The goal in a JSSP is to obtain the shortest processing time. We propose a method for performing all jobs in the shortest time, based on a genetic algorithm (GA) in which crossover between parents always follows the rule that the longest process comes first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest: "longest job first" means first finding the machine that accumulates the most processing time over all its jobs, the bottleneck, and then sorting the jobs belonging to that machine in descending order. Based on the results achieved, "longest jobs first" is the best-performing policy for job-shop scheduling problems: in our results the accuracy reached 94.7% for total processing time, and the method improved the accuracy of performing all jobs in the presented example by 4%.
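A minimal sketch of the "longest jobs first" bottleneck rule described above; the data layout (a mapping from machine to its operation times) is ours.

```python
def longest_jobs_first(ops_by_machine):
    """Find the bottleneck machine (largest total processing time) and
    order its operations from longest to shortest, per the proposed rule."""
    bottleneck = max(ops_by_machine, key=lambda m: sum(ops_by_machine[m]))
    ordered = sorted(ops_by_machine[bottleneck], reverse=True)
    return bottleneck, ordered

ops = {"M1": [3, 5, 2], "M2": [6, 4, 7], "M3": [1, 2, 2]}
print(longest_jobs_first(ops))  # ('M2', [7, 6, 4])
```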
Evaluation of fixed momentary dro schedules under signaled and unsignaled arrangements.
Hammond, Jennifer L; Iwata, Brian A; Fritz, Jennifer N; Dempsey, Carrie M
2011-01-01
Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response requirement of the schedule per se or (b) discrimination of the contingency made more salient by the signal. To separate these two potential influences, we compared the effects of signaled versus unsignaled FM DRO with 4 individuals with developmental disabilities whose problem behavior was maintained by social-positive reinforcement. During signaled FM DRO, the experimenter presented a visual stimulus 3 s prior to the end of the DRO interval and delivered reinforcement contingent on the absence of problem behavior at the second the interval elapsed. Unsignaled DRO was identical except that interval termination was not signaled. Results indicated that signaled FM DRO was effective in decreasing 2 subjects' problem behavior, whereas an unsignaled schedule was required for the remaining 2 subjects. These results suggest that the response requirement per se of FM DRO may not be problematic if it is not easily discriminated.
Xiang, Wei; Li, Chong
2015-01-01
The Operating Room (OR) is the core sector of hospital expenditure, and its operation management involves a complete three-stage surgery flow, multiple resources, prioritization of the various surgeries, and several real-life OR constraints. Reasonable surgery scheduling is therefore crucial to OR management. To optimize OR management and reduce operating cost, a short-term surgery scheduling problem is proposed and defined based on a survey of OR operations in a typical hospital in China. The comprehensive operating cost is clearly defined, considering both under-utilization and over-utilization. A nested ant colony optimization (nested-ACO) incorporating several real-life OR constraints is proposed to solve this combinatorial optimization problem. Ten days of manual surgery schedules from a hospital in China are compared with the optimized schedules produced by the nested-ACO. The comparison shows the advantage of using the nested-ACO on several measures: OR-related time, nurse-related time, variation in resources' working time, and end time. The nested-ACO, which accounts for real-life operational constraints such as the difference between the first and following cases, surgery priorities, and fixed nurses in the pre/post-operative stages, clearly benefits OR management efficiency and minimizes the comprehensive overall operating cost.
Decision support system for the operating room rescheduling problem.
van Essen, J Theresia; Hurink, Johann L; Hartholt, Woutske; van den Akker, Bernd J
2012-12-01
Due to surgery duration variability and arrivals of emergency surgeries, the planned Operating Room (OR) schedule is disrupted throughout the day, which may change the start times of the elective surgeries. These changes may result in undesirable situations for patients, wards or other involved departments, and therefore the OR schedule has to be adjusted. In this paper, we develop a decision support system (DSS) that assists the OR manager in this decision by providing the three best adjusted OR schedules. The system considers the preferences of all involved stakeholders and evaluates only the OR schedules that satisfy the imposed resource constraints. The decision rules used by the system are based on a thorough analysis of the OR rescheduling problem. We model this problem as an integer linear program (ILP) whose objective is to minimize the deviation from the preferences of the considered stakeholders. By applying this ILP to instances from practice, we determined that the given preferences mainly lead to (i) shifting a surgery and (ii) scheduling a break between two surgeries. Using these changes in the DSS, a simulation study shows that fewer surgeries are canceled and patients and wards are more satisfied, although the perceived workload of several departments increases to compensate. The system can also be used to judge the acceptability of a proposed initial OR schedule.
Generation of Look-Up Tables for Dynamic Job Shop Scheduling Decision Support Tool
NASA Astrophysics Data System (ADS)
Oktaviandri, Muchamad; Hassan, Adnan; Mohd Shaharoun, Awaluddin
2016-02-01
The majority of existing scheduling techniques are based on static demand and deterministic processing times, while most job-shop scheduling problems involve dynamic demand and stochastic processing times. As a consequence, solutions obtained by traditional scheduling techniques become ineffective whenever the system changes. This research therefore develops a decision support tool (DST), based on promising artificial intelligence techniques, that can accommodate the dynamics regularly occurring in job-shop scheduling problems. The DST was designed in three phases: (i) look-up table generation, (ii) inverse model development and (iii) integration of the DST components. This paper reports the generation of look-up tables for various scenarios as part of the development of the DST. A discrete-event simulation model was used to compare the performance of the SPT, EDD, FCFS, S/OPN and Slack rules; the best performance measures (mean flow time, mean tardiness and mean lateness) and the job-order requirements (inter-arrival time, due-date tightness and setup-time ratio) were compiled into look-up tables. The well-known 6/6/J/Cmax problem from Muth and Thompson (1963) was used as a case study. In the future, the performance measures of various scheduling scenarios and the job-order requirements will be mapped using an ANN inverse model.
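Once generated, the look-up table reduces rule selection to a dictionary access. A minimal sketch in which every entry is an illustrative placeholder, not a result from the study.

```python
# Hypothetical slice of a generated look-up table: each job-order scenario
# (inter-arrival time, due-date tightness, setup ratio) maps to the rule
# that minimized the chosen performance measure in simulation.
LOOKUP = {
    ("short", "tight", "low"):  "EDD",    # due dates dominate
    ("short", "loose", "low"):  "SPT",    # flow time dominates
    ("long",  "tight", "high"): "S/OPN",
    ("long",  "loose", "high"): "FCFS",
}

def recommend_rule(inter_arrival, tightness, setup_ratio, default="SPT"):
    """DST front end: return the pre-computed best rule for a scenario."""
    return LOOKUP.get((inter_arrival, tightness, setup_ratio), default)

print(recommend_rule("short", "tight", "low"))  # EDD
```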
Development of Watch Schedule Using Rules Approach
NASA Astrophysics Data System (ADS)
Jurkevicius, Darius; Vasilecas, Olegas
The software for schedule creation and optimization solves a difficult, important and practical problem. The proposed solution is an online employee portal where administrator users can create and manage watch schedules and employee requests. Each employee can log in with his or her own account to see assignments, manage requests, and so on; employees designated as administrators can perform the employee scheduling online. This scheduling software allows users not only to see the initial and optimized watch schedules in a simple and understandable form, but also to create special rules and criteria and input their business rules. Using these rules, the system automatically generates the watch schedule.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop, optimizing total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate-scale problems. A new lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
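An SPTA-style (shortest-processing-time-available) list schedule can be sketched briefly: whenever the shop falls idle, start the released job with the shortest processing time. Treating each no-wait job as one aggregated block is our simplification of the paper's setting.

```python
import heapq

def spta_sequence(jobs):
    """jobs: list of (release_date, processing_time) pairs.
    Returns the job order produced by the SPTA-style rule."""
    by_release = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    heap, order, t, i = [], [], 0.0, 0
    while i < len(jobs) or heap:
        while i < len(jobs) and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(heap, (jobs[j][1], j))  # keyed by processing time
            i += 1
        if not heap:
            t = jobs[by_release[i]][0]             # idle until the next release
            continue
        p, j = heapq.heappop(heap)
        t += p
        order.append(j)
    return order

print(spta_sequence([(0, 5), (1, 2), (2, 8), (3, 1)]))  # [0, 3, 1, 2]
```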
A parallel-machine scheduling problem with two competing agents
NASA Astrophysics Data System (ADS)
Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya
2017-06-01
Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.
Morrison, Heather; Roscoe, Eileen M; Atwell, Amy
2011-01-01
We evaluated antecedent exercise for treating the automatically reinforced problem behavior of 4 individuals with autism. We conducted preference assessments to identify leisure and exercise items that were associated with high levels of engagement and low levels of problem behavior. Next, we conducted three 3-component multiple-schedule sequences: an antecedent-exercise test sequence, a noncontingent leisure-item control sequence, and a social-interaction control sequence. Within each sequence, we used a 3-component multiple schedule to evaluate preintervention, intervention, and postintervention effects. Problem behavior decreased during the postintervention component relative to the preintervention component for 3 of the 4 participants during the exercise-item assessment; however, the effects could not be attributed solely to exercise for 1 of these participants. PMID:21941383
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
Space Shuttle processing - A case study in artificial intelligence
NASA Technical Reports Server (NTRS)
Mollikarimi, Cindy; Gargan, Robert; Zweben, Monte
1991-01-01
A scheduling system incorporating AI is described and applied to the automated processing of the Space Shuttle. The unique problem of addressing the temporal, resource, and orbiter-configuration requirements of shuttle processing is described with comparisons to traditional project management for manufacturing processes. The present scheduling system is developed to handle the late inputs and complex programs that characterize shuttle processing by incorporating fixed preemptive scheduling, constraint-based simulated annealing, and the characteristics of an 'anytime' algorithm. The Space-Shuttle processing environment is modeled with 500 activities broken down into 4000 subtasks and with 1600 temporal constraints, 8000 resource constraints, and 3900 state requirements. The algorithm is shown to scale to very large problems and maintain anytime characteristics suggesting that an automated scheduling process is achievable and potentially cost-effective.
Advanced computer architecture for large-scale real-time applications.
DOT National Transportation Integrated Search
1973-04-01
Air traffic control automation is identified as a crucial problem which provides a complex, real-time computer application environment. A novel computer architecture in the form of a pipeline associative processor is conceived to achieve greater perf...
Dynamics of a Pipeline under the Action of Internal Shock Pressure
NASA Astrophysics Data System (ADS)
Il'gamov, M. A.
2017-11-01
The static and dynamic bending of a pipeline in the vertical plane under its own weight is considered, with regard to the interaction of the internal pressure with the curvature of the axial line and the axisymmetric deformation. The pressure consists of a constant part and a time-varying part and is assumed to be uniformly distributed over the entire span between the supports. The pipeline's reaction to a stepwise increase in pressure is analyzed in a case where the exact solution of the problem can be determined. An initial stage of bending, defined by the smallness of the elastic forces compared with the inertial forces, is introduced. At this stage the solution is sought in the form of power series, and the law of pressure variation can be arbitrary; this solution provides the initial conditions for determining the subsequent behavior. The duration of the inertial stage is compared with the times of sharp pressure changes and of shock waves in fluids. The structural parameters are determined for the case where the shock pressure is resisted only by the inertial forces in the pipeline.
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters of a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter-fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that cannot jump out of a local performance maximum, like the hill-climbing algorithm, often end in a local maximum. PMID:23766941
Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.
ERIC Educational Resources Information Center
Hart, Roland J.; Goehring, Dwight J.
This report on an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach to solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…
Mothers' Night Work and Children's Behavior Problems
ERIC Educational Resources Information Center
Dunifon, Rachel; Kalil, Ariel; Crosby, Danielle A.; Su, Jessica Houston
2013-01-01
Many mothers work in jobs with nonstandard schedules (i.e., schedules that involve work outside of the traditional 9-5, Monday through Friday schedule); this is particularly true for economically disadvantaged mothers. In the present article, we used longitudinal data from the Fragile Families and Child Wellbeing Survey (n = 2,367 mothers of…
ERIC Educational Resources Information Center
Brackney, Ryan J.; Cheung, Timothy H. C.; Neisewander, Janet L.; Sanabria, Federico
2011-01-01
Dissociating motoric and motivational effects of pharmacological manipulations on operant behavior is a substantial challenge. To address this problem, we applied a response-bout analysis to data from rats trained to lever press for sucrose on variable-interval (VI) schedules of reinforcement. Motoric, motivational, and schedule factors (effort…
Temporal and Resource Reasoning for Planning, Scheduling and Execution in Autonomous Agents
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Hunsberger, Luke; Tsamardinos, Ioannis
2005-01-01
This viewgraph slide tutorial reviews methods for planning and scheduling events. The presentation reviews several methods and gives several examples of scheduling events for the successful and timely completion of the overall plan. Using constraint-based models, it reviews planning with time, time representations in problem solving, and resource reasoning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, simulated annealing cannot be guaranteed to locate the global optima unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow that the required CPU time is prohibitive. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
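To make the cooling-schedule contrast concrete, here is a minimal simulated-annealing sketch in Python. It is our own illustration, not the paper's stochastic approximation annealing algorithm: the test objective, proposal width, and all parameter values are arbitrary choices.

    import math
    import random

    def objective(x):
        # Simple multimodal test function (illustrative choice, not from the paper).
        return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

    def anneal(cooling, steps=20000, t0=10.0, seed=0):
        rng = random.Random(seed)
        x = rng.uniform(-5.0, 5.0)
        fx = objective(x)
        best = fx
        for k in range(1, steps + 1):
            t = cooling(t0, k)
            y = x + rng.gauss(0.0, 0.5)          # random-walk proposal
            fy = objective(y)
            # Metropolis rule: always accept downhill, sometimes uphill.
            if fy <= fx or rng.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                best = min(best, fx)
        return best

    log_cooling = lambda t0, k: t0 / math.log(k + 1.0)     # classical guarantee, very slow
    sqrt_cooling = lambda t0, k: t0 / math.sqrt(k + 1.0)   # faster decay, as in the paper's example

    print("log cooling best: ", anneal(log_cooling))
    print("sqrt cooling best:", anneal(sqrt_cooling))

At step 20,000 the square-root schedule has cooled to roughly 0.07 while the logarithmic schedule is still near 1.0, which is exactly the gap in annealing speed the abstract addresses.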
A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new architectures of manufacturing systems have been proposed to realize flexible control structures that can cope with dynamic changes in the volume and variety of products and with unforeseen disruptions, such as failures of manufacturing resources and interruptions by high-priority jobs. These include the autonomous distributed manufacturing system, the biological manufacturing system, and the holonic manufacturing system. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in a previous report. However, problems remain from the viewpoint of optimizing the whole production schedule. New procedures are proposed in the present paper to select production schedules, aimed at generating effective schedules in real-time. The proposed methods enable the individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons, based on the effectiveness values of the individual holons, is also proposed.
Towards a dynamical scheduler for ALMA: a science - software collaboration
NASA Astrophysics Data System (ADS)
Avarias, Jorge; Toledo, Ignacio; Espada, Daniel; Hibbard, John; Nyman, Lars-Ake; Hiriart, Rafael
2016-07-01
State-of-the-art astronomical facilities are costly to build and operate, so it is essential that they be operated as efficiently as possible, maximizing scientific output while minimizing overhead times. Over recent decades the scheduling problem has drawn research attention because it has become clear that scheduling observations manually at new facilities is infeasible, owing to the complexity of satisfying the astronomical and instrumental constraints and to the number of scientific proposals to be reviewed and evaluated in near real-time. In addition, the dynamic nature of some constraints makes this problem even more difficult. The Atacama Large Millimeter/submillimeter Array (ALMA) is a major collaborative effort between European (ESO), North American (NRAO), and East Asian (NAOJ) partners, operating on the Chilean Chajnantor plateau at 5,000 meters of altitude. During normal operations at least two independent arrays are available, aiming to achieve different types of science. Since ALMA does not observe in the visible spectrum, observations are not limited to night time, so 24/7 operation with as little downtime as possible is expected once full operations are reached. However, during preliminary operations (early science) ALMA has been operated on tight schedules, using around half of the day-time to conduct scientific observations. The purpose of this paper is to explain how observation scheduling and its optimization are done within ALMA, giving details about the problem's complexity and its similarities to and differences from traditional scheduling problems found in the literature. The paper delves into the current recommendation system implementation and the difficulties found on the road to its deployment in production.
NASA Technical Reports Server (NTRS)
Gaspin, Christine
1989-01-01
How a neural network performs, compared with a hybrid system based on an operations research and artificial intelligence approach, is investigated through a mission scheduling problem. The characteristic features of each system are discussed.
Software For Integer Programming
NASA Technical Reports Server (NTRS)
Fogle, F. R.
1992-01-01
Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.
A System for Automatically Generating Scheduling Heuristics
NASA Technical Reports Server (NTRS)
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application chosen to demonstrate this method solves the problem of scheduling telescope observations and is called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
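The greedy heuristic search described here can be sketched generically. The toy scheduler below is our own illustration (the APA's actual constraint model, scoring function, and request format are not given in the abstract): it repeatedly places the highest-priority request that is feasible at the current time.

    from dataclasses import dataclass

    @dataclass
    class Request:
        name: str
        start: int       # earliest feasible start (hard constraint)
        duration: int
        priority: float  # soft preference; higher is better

    def greedy_schedule(requests, horizon):
        """Repeatedly place the highest-priority request that still fits."""
        schedule, t = [], 0
        pending = sorted(requests, key=lambda r: -r.priority)
        while pending and t < horizon:
            pick = next((r for r in pending
                         if r.start <= t and t + r.duration <= horizon), None)
            if pick is None:
                t += 1            # idle one slot; wait for a window to open
                continue
            schedule.append((t, pick.name))
            t += pick.duration
            pending.remove(pick)
        return schedule

    reqs = [Request("M31", 0, 3, 2.0), Request("M42", 2, 2, 5.0), Request("M13", 0, 1, 1.0)]
    print(greedy_schedule(reqs, horizon=8))

Greedy placement of this kind satisfies the hard window constraints by construction and uses the priority ordering as a stand-in for the soft-constraint objective.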
A Network Flow Approach to the Initial Skills Training Scheduling Problem
2007-12-01
Range and mission scheduling automation using combined AI and operations research techniques
NASA Technical Reports Server (NTRS)
Arbabi, Mansur; Pfeifer, Michael
1987-01-01
Ground-based systems for Satellite Command, Control, and Communications (C3) operations require a method for planning, scheduling, and assigning range resources such as antenna systems scattered around the world, communications systems, and personnel. The method must accommodate user priorities, last-minute changes, maintenance requirements, and exceptions from nominal requirements. Described are computer programs which solve 24-hour scheduling problems, using heuristic algorithms and a real-time interactive scheduling process.
Xiang, Wei; Yin, Jiao; Lim, Gino
2015-02-01
Operating room (OR) surgery scheduling determines each surgery's start time and assigns the required resources to each surgery over a schedule period, considering several constraints related to the complete surgery flow and the multiple resources involved. This task plays a decisive role in providing timely treatment for patients while balancing hospital resource utilization. The originality of the present study is to integrate the surgery scheduling problem with real-life nurse roster constraints such as role, specialty, qualification, and availability. This article proposes a mathematical model and an ant colony optimization (ACO) approach to efficiently solve such surgery scheduling problems. Because of the problem's computational complexity, a modified ACO algorithm with a two-level ant graph model is developed: the outer ant graph represents surgeries, while the inner graph is a dynamic resource graph. Three types of pheromone fitting the two-level model, i.e. sequence-related, surgery-related, and resource-related pheromone, are defined. The iteration-best and feasible update strategy and local pheromone update rules are adopted to emphasize information related to good solutions in makespan, as well as to the balanced utilization of resources. The performance of the proposed ACO algorithm is then evaluated using test cases from (1) published literature data with complete nurse roster constraints, and (2) real data collected from a hospital in China. The scheduling results using the proposed ACO approach are compared with test cases from both the literature and real-life hospital scheduling. Comparison with the literature shows that the proposed ACO approach achieves (1) a 1.5-h reduction in end time; (2) a reduction in the variation of resources' working time, i.e. 25% for ORs, 50% for nurses in shift 1, and 86% for nurses in shift 2; (3) a 0.25-h reduction in individual maximum overtime (OT); and (4) a 42% reduction in the total OT of nurses. Comparison with the real 10-workday hospital schedule further shows the advantage of the ACO approach in several measurements. Instead of assigning all of a surgeon's surgeries to only one OR and the same nurses, as in the traditional manual approach, ACO realizes a more balanced arrangement by assigning the surgeries to different ORs and nurses. This shortens the end time within a confidence interval of [7.4%, 24.6%] at a 95% confidence level. The ACO approach proposed in this paper efficiently solves the surgery scheduling problem with a daily nurse roster while providing a shortened end time and relatively balanced resource allocation. It also supports the advantage of integrating surgery scheduling with nurse scheduling and the efficiency of systematic optimization considering the complete three-stage surgery flow and the resources involved.
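The core ACO machinery the abstract builds on, probabilistic sequence construction biased by pheromone, evaporation, and iteration-best reinforcement, can be sketched on a toy sequencing problem. This is our own single-pheromone illustration; the paper's two-level ant graph and three pheromone types are substantially richer.

    import random

    def aco_sequence(durations, n_ants=20, n_iter=50, rho=0.1, seed=1):
        """Toy ant-colony sequencer minimizing a simple flow-time-style cost.
        Illustrative only: one sequence-related pheromone matrix, no resource graph."""
        rng = random.Random(seed)
        n = len(durations)
        tau = [[1.0] * n for _ in range(n)]      # pheromone on "i is followed by j"
        best_seq, best_cost = None, float("inf")
        for _ in range(n_iter):
            iter_best, iter_cost = None, float("inf")
            for _ in range(n_ants):
                seq, remaining, prev = [], set(range(n)), None
                while remaining:
                    cand = list(remaining)
                    w = [tau[prev][j] if prev is not None else 1.0 for j in cand]
                    j = rng.choices(cand, weights=w)[0]   # pheromone-biased choice
                    seq.append(j); remaining.remove(j); prev = j
                cost = sum((i + 1) * durations[j] for i, j in enumerate(seq))
                if cost < iter_cost:
                    iter_best, iter_cost = seq, cost
            # Iteration-best update: evaporate, then reinforce the best ant's arcs.
            tau = [[(1 - rho) * t for t in row] for row in tau]
            for a, b in zip(iter_best, iter_best[1:]):
                tau[a][b] += 1.0 / iter_cost
            if iter_cost < best_cost:
                best_seq, best_cost = iter_best, iter_cost
        return best_seq, best_cost

    print(aco_sequence([5, 2, 8, 1, 4]))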
Bulk Leisure--Problem or Blessing?
ERIC Educational Resources Information Center
Beland, Robert M.
1983-01-01
With an increasing number of the nation's work force experiencing "bulk leisure" time because of new work scheduling procedures, parks and recreation offices are encouraged to examine their program scheduling and content. (JM)
Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks
NASA Astrophysics Data System (ADS)
Gang, Lu
Wireless sensor networks are expected to be used in a wide range of applications, from environment monitoring to event detection. The key challenge is to provide energy-efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy-efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is an adaptive, low-latency, energy-efficient MAC for data gathering that reduces sleep latency. We propose a staggered schedule, duty-cycle adaptation, data prediction, and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contention. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst-case end-to-end latency (referred to as the delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for tree and ring topologies, and give heuristics for arbitrary topologies. The third study addresses the problem of minimum-latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be found by an M node-disjoint paths algorithm under the multiple-channel model. We further extend the algorithm to handle dynamic traffic changes and topology changes, and propose a heuristic solution for MLSR under single-channel interference. In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control, and present both exponential- and polynomial-complexity solutions. We then investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach that converges rapidly to the optimal parameter settings.
Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.
Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao
2017-12-20
Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, given that the energy harvested from the environment is limited and unstable. In this paper, to overcome the energy shortage wireless devices face when transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamic data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.
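The offline reduction described here, solving min-completion-time via repeated min-energy queries, can be illustrated under strong simplifying assumptions that are ours, not the paper's: a single transmission, all data present at time zero, a convex rate-power curve, and no energy arrivals.

    def min_energy(bits, deadline, power=lambda r: r * r):
        """Energy to send `bits` by `deadline` at constant rate
        (constant rate is optimal when power is convex in rate)."""
        rate = bits / deadline
        return power(rate) * deadline

    def min_completion_time(bits, budget, hi=1e6, tol=1e-9):
        """Binary-search the smallest deadline whose min-energy fits the budget:
        a toy version of using the min-energy problem as a building block."""
        lo = 0.0
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if min_energy(bits, mid) <= budget:
                hi = mid          # feasible: try to finish sooner
            else:
                lo = mid
        return hi

    # With power = r^2, min_energy(100, T) = 10000 / T, so budget 50 gives T = 200.
    print(min_completion_time(bits=100.0, budget=50.0))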
Protocols for distributive scheduling
NASA Technical Reports Server (NTRS)
Richards, Stephen F.; Fox, Barry
1993-01-01
The increasing complexity of space operations and the inclusion of interorganizational and international groups in the planning and control of space missions lead to requirements for greater communication, coordination, and cooperation among mission schedulers. These schedulers must jointly allocate scarce shared resources among the various operational and mission oriented activities while adhering to all constraints. This scheduling environment is complicated by such factors as the presence of varying perspectives and conflicting objectives among the schedulers, the need for different schedulers to work in parallel, and limited communication among schedulers. Smooth interaction among schedulers requires the use of protocols that govern such issues as resource sharing, authority to update the schedule, and communication of updates. This paper addresses the development and characteristics of such protocols and their use in a distributed scheduling environment that incorporates computer-aided scheduling tools. An example problem is drawn from the domain of space shuttle mission planning.
Distributed project scheduling at NASA: Requirements for manual protocols and computer-based support
NASA Technical Reports Server (NTRS)
Richards, Stephen F.
1992-01-01
The increasing complexity of space operations and the inclusion of interorganizational and international groups in the planning and control of space missions lead to requirements for greater communication, coordination, and cooperation among mission schedulers. These schedulers must jointly allocate scarce shared resources among the various operational and mission oriented activities while adhering to all constraints. This scheduling environment is complicated by such factors as the presence of varying perspectives and conflicting objectives among the schedulers, the need for different schedulers to work in parallel, and limited communication among schedulers. Smooth interaction among schedulers requires the use of protocols that govern such issues as resource sharing, authority to update the schedule, and communication of updates. This paper addresses the development and characteristics of such protocols and their use in a distributed scheduling environment that incorporates computer-aided scheduling tools. An example problem is drawn from the domain of Space Shuttle mission planning.
Production scheduling and rescheduling with genetic algorithms.
Bierwirth, C; Mattfeld, D C
1999-01-01
A general model for job shop scheduling is described which applies to static, dynamic, and non-deterministic production environments. Next, a genetic algorithm is presented which solves the job shop scheduling problem. This algorithm is tested in a dynamic environment under different workload situations. A highly efficient decoding procedure is also proposed which strongly improves the quality of schedules. Finally, the technique is tested for scheduling and rescheduling in a non-deterministic environment. It is shown by experiment that conventional methods of production control are clearly outperformed at reasonable run-time costs.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within a factor of n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
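As a flavor of the reachability formulation mentioned for precision arrival scheduling, the sketch below is our own simplified encoding: assuming a fixed first-come-first-served aircraft order, it decides whether each aircraft can take a distinct, increasing landing slot inside its time window by breadth-first search over a layered state graph.

    from collections import deque

    def fcfs_slots_feasible(windows, slots):
        """Decide slot feasibility via BFS reachability.
        State (i, j): first i aircraft assigned, next usable slot index is j."""
        n, m = len(windows), len(slots)
        start = (0, 0)
        queue, seen = deque([start]), {start}
        while queue:
            i, j = queue.popleft()
            if i == n:
                return True                        # all aircraft placed
            for k in range(j, m):                  # try giving slot k to aircraft i
                earliest, latest = windows[i]
                if earliest <= slots[k] <= latest:
                    nxt = (i + 1, k + 1)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return False

    print(fcfs_slots_feasible([(0, 5), (3, 6), (6, 9)], slots=[1, 4, 7]))   # True
    print(fcfs_slots_feasible([(0, 2), (0, 2), (0, 2)], slots=[1, 2, 9]))   # False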
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flathers, M.B.; Bache, G.E.; Rainsberger, R.
1996-04-01
The flow field of a complex three-dimensional radial inlet for an industrial pipeline centrifugal compressor has been experimentally determined on a half-scale model. Based on the experimental results, inlet guide vanes have been designed to correct deficiencies in the pressure and swirl angle distributions. The unvaned and vaned inlets are analyzed with a commercially available fully three-dimensional viscous Navier-Stokes code. Since experimental results were available prior to the numerical study, the unvaned analysis is considered a postdiction while the vaned analysis is considered a prediction. The computational results for the unvaned inlet have been compared to the previously obtained experimental results. The experimental method utilized for the unvaned inlet was repeated for the vaned inlet, and the data have been used to verify the computational results. The paper discusses experimental, design, and computational procedures, grid generation, boundary conditions, and experimental versus computational methods. Agreement between experimental and computational results is very good, in both prediction and postdiction modes. The results of this investigation indicate that CFD offers a measurable advantage in design, schedule, and cost and can be applied to complex, three-dimensional radial inlets.
Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines
Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing
2014-01-01
m unrelated parallel machine scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial-time algorithm to solve the problem. PMID:24982933
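Polynomial-time results of this type typically rest on reducing sequencing to a job-to-position assignment problem. The sketch below (requiring NumPy and SciPy) illustrates that standard device with SciPy's Hungarian-algorithm solver; the cost model, a learning-effect exponent of -0.3 combined with flow-time positional weights, is an illustrative stand-in, not the paper's exact cost function.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # When each job's contribution depends only on the job and its position,
    # sequencing reduces to a job-to-position assignment problem.
    p = np.array([4.0, 2.0, 7.0, 3.0])          # base processing times (toy data)
    n = len(p)
    # positional_cost[j, r] = cost of putting job j in position r (0-indexed):
    # actual time p_j * (r+1)^(-0.3) (learning effect), weighted by (n - r),
    # the number of jobs whose completion time it delays (flow-time trick).
    positional_cost = np.array([[(n - r) * p[j] * (r + 1) ** (-0.3)
                                 for r in range(n)] for j in range(n)])
    rows, cols = linear_sum_assignment(positional_cost)      # Hungarian algorithm
    order = [int(j) for _, j in sorted(zip(cols, rows))]     # jobs in position order
    print("sequence (job indices):", order)
    print("total flow time:", positional_cost[rows, cols].sum())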
Karakashian, A N; Lepeshkina, T R; Ratushnaia, A N; Glushchenko, S S; Zakharenko, M I; Lastovchenko, V B; Diordichuk, T I
1993-01-01
The weight, tension, and harmfulness of professional activity, the peculiarities of labour conditions, and the shift dynamics of operative personnel's working capacity were studied under the 8-hour working day currently used at hydroelectric power stations (HEPS) and under an experimental 12-hour schedule. Working conditions classified as "admissible", the positive dynamics of the operators' state, and their social and material contentment formed the basis for recommending the 12-hour two-shift schedule as more appropriate. At the same time, the problem of optimal shift schedules for the operative personnel of HEPS remains unsolved and needs further study.
NASA Astrophysics Data System (ADS)
Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.
2018-06-01
Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.
Assurance of reliability and safety in liquid hydrocarbons marine transportation and storing
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Polyakov, S. L.; Shunmin, Li
2017-10-01
The problems of assuring safety and reliability in the marine transportation and storage of liquid hydrocarbons are described. The requirements of standard IEC 61511 must be fulfilled for the tanker load/unload system under dynamic loads on the pipeline system. Safety zones for fires of the "fireball" type and for spillage must be determined when storing liquid hydrocarbons. An example of the achieved safety level of the duplicated load system, the conditions for reliable pipeline operation under dynamic loads, and the principles of the method for determining safety zones for liquid hydrocarbon storage under possible accident conditions are presented.
E.M.I Effects of Cathodic Protection on Electromagnetic Flowmeters
Gundogdu, Serdar; Sahin, Ozge
2007-01-01
Electromagnetic flowmeters are used to measure the speed of water flow in water distribution systems. The corrosion problem in metal pipelines can be solved by cathodic protection methods. This paper presents research on the interfering effects of the cathodic protection system on the electromagnetic flowmeter, which follow from its measuring principle. Experimental measurements were carried out on the water distribution pipelines of the Izmir Municipality, Department of Water and Drainage Administration (IZSU) in Turkey, and the measurement results are given. The experimental results proved that the values measured by the electromagnetic flowmeter (EMF) are affected by the cathodic protection system current. Comments on the measurement results are made and precautions to be taken are proposed.
Transportation Network Analysis and Decomposition Methods
DOT National Transportation Integrated Search
1978-03-01
The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...
A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Jolai, Fariborz; Assadipour, Ghazal
Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that every flight is covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
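The deliverable of such a method is the Pareto set itself. The dominance filter below is a generic sketch of how that set is extracted from candidate schedules; the objective vectors are toy data, and CellDE's actual variation and selection operators are not shown.

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Keep only non-dominated solutions: the set handed to the decision maker."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # (total cost, total delay, utilization imbalance) for candidate crew schedules.
    candidates = [(10, 4, 0.3), (8, 5, 0.4), (10, 4, 0.5), (12, 3, 0.2), (9, 6, 0.1)]
    print(pareto_front(candidates))   # (10, 4, 0.5) is dominated and dropped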
Integration of QUARK and I-TASSER for ab initio protein structure prediction in CASP11
Zhang, Wenxuan; Yang, Jianyi; He, Baoji; Walker, Sara Elizabeth; Zhang, Hongjiu; Govindarajoo, Brandon; Virtanen, Jouko; Xue, Zhidong; Shen, Hong-Bin; Zhang, Yang
2015-01-01
We tested two pipelines developed for template-free protein structure prediction in the CASP11 experiment. First, the QUARK pipeline constructs structure models by reassembling fragments of continuously distributed lengths excised from unrelated proteins. For five free-modeling (FM) targets, QUARK successfully constructed models with a TM-score above 0.4, including the first model of T0837-D1, which has a TM-score = 0.736 and RMSD = 2.9 Å to the native. Detailed analysis showed that the success is partly attributed to the high-resolution contact map prediction derived from fragment-based distance profiles, which are mainly located between regular secondary structure elements and loops/turns and help guide the orientation of secondary structure assembly. In the Zhang-Server pipeline, weakly scoring threading templates are re-ordered by structural similarity to the ab initio folding models, which are then reassembled by I-TASSER based structure assembly simulations; 60% more domains with length up to 204 residues, compared to the QUARK pipeline, were successfully modeled by the I-TASSER pipeline with a TM-score above 0.4. The robustness of the I-TASSER pipeline stems from the composite fragment-assembly simulations that combine structures from both ab initio folding and threading template refinements. Despite the promising cases, challenges still exist in long-range beta-strand folding, domain parsing, and the uncertainty of secondary structure prediction; the latter was found to affect nearly all aspects of FM structure predictions, from fragment identification, target classification, and structure assembly to final model selection. Significant efforts are needed to solve these problems before real progress on FM could be made. PMID:26370505
Microseismic response characteristics modeling and locating of underground water supply pipe leak
NASA Astrophysics Data System (ADS)
Wang, J.; Liu, J.
2015-12-01
In traditional methods of pipeline leak location, geophones must be placed on the pipe wall; if the exact location of the pipeline is unknown, leaks cannot be identified accurately. To solve this problem, taking into account the characteristics of pipeline leaks, we propose a continuous random seismic source model and construct geological models to investigate the proposed method for locating underground pipeline leaks. Based on two-dimensional (2D) viscoacoustic equations and a staggered-grid finite-difference (FD) algorithm, the microseismic wave field generated by a leaking pipe is modeled. Cross-correlation analysis and the simulated annealing (SA) algorithm are utilized to obtain the time differences and the leak location. We also analyze and discuss the effects of the number of recorded traces, the survey layout, and the offset and interval of the traces on the accuracy of the estimated location. The preliminary results of the simulation and a field data experiment indicate that (1) a continuous random source can realistically represent the leak microseismic wave field in a simulation using 2D viscoacoustic equations and a staggered-grid FD algorithm; (2) the cross-correlation method is effective for calculating the time difference of the direct wave relative to the reference trace, although outside the refraction blind zone the accuracy of the time difference is reduced by the effects of the refracted wave; and (3) the acquisition of time differences based on microseismic theory and the SA algorithm has great potential for locating leaks in underground pipelines from an array located on the ground surface. Keywords: viscoacoustic finite-difference simulation; continuous random source; simulated annealing algorithm; pipeline leak location.
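The cross-correlation step for the time difference is straightforward to sketch with NumPy on synthetic data; here a smoothed random signal stands in for the continuous leak source, and the sampling rate and delay are arbitrary choices of ours.

    import numpy as np

    def delay_by_xcorr(ref, trace, dt):
        """Estimate the arrival-time difference of `trace` relative to `ref`
        by locating the peak of their cross-correlation."""
        xc = np.correlate(trace, ref, mode="full")
        lag = np.argmax(xc) - (len(ref) - 1)     # lag in samples
        return lag * dt

    # Synthetic continuous "leak noise": band-limited random source, shifted copy.
    rng = np.random.default_rng(0)
    dt, n, true_shift = 1e-3, 2000, 37            # 1 ms sampling, 37-sample delay
    src = np.convolve(rng.standard_normal(n), np.ones(25) / 25.0, mode="same")
    ref = src
    trace = np.roll(src, true_shift)              # delayed arrival at a farther geophone
    print("estimated delay: %.3f s (true %.3f s)"
          % (delay_by_xcorr(ref, trace, dt), true_shift * dt))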
Due-Window Assignment Scheduling with Variable Job Processing Times
Wu, Yu-Bin
2015-01-01
We consider a common due-window assignment scheduling problem for jobs with variable processing times on a single machine, where the processing time of a job is a function of its position in a sequence (i.e., a learning effect) or its starting time (i.e., a deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, window location, window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745
Single machine scheduling with slack due dates assignment
NASA Astrophysics Data System (ADS)
Liu, Weiguo; Hu, Xiangpei; Wang, Xuyin
2017-04-01
This paper considers a single machine scheduling problem in which each job is assigned an individual due date based on a common flow allowance (i.e. all jobs have slack due date). The goal is to find a sequence for jobs, together with a due date assignment, that minimizes a non-regular criterion comprising the total weighted absolute lateness value and common flow allowance cost, where the weight is a position-dependent weight. In order to solve this problem, an ? time algorithm is proposed. Some extensions of the problem are also shown.
Yang, S; Wang, D
2000-01-01
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases according to the sequence and resource constraints of the job-shop scheduling problem during processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to solution quality and solving speed.
Future aircraft networks and schedules
NASA Astrophysics Data System (ADS)
Shu, Yan
2011-07-01
Because of the importance of air transportation scheduling, the emergence of small aircraft, and the vision of future fuel-efficient aircraft, this thesis has focused on the study of aircraft scheduling and network design involving multiple types of aircraft and flight services. It develops models and solution algorithms for the schedule design problem and analyzes the computational results. First, based on the current development of small aircraft and on-demand flight services, this thesis expands a business model for integrating on-demand flight services with the traditional scheduled flight services. This thesis proposes a three-step approach to the design of aircraft schedules and networks from scratch under the model. In the first step, both a frequency assignment model for scheduled flights that incorporates a passenger path choice model and a frequency assignment model for on-demand flights that incorporates a passenger mode choice model are created. In the second step, a rough fleet assignment model that determines a set of flight legs, each of which is assigned an aircraft type and a rough departure time, is constructed. In the third step, a timetable model that determines an exact departure time for each flight leg is developed. Based on the models proposed in the three steps, this thesis creates schedule design instances that involve almost all the major airports and markets in the United States. The instances of the frequency assignment model created in this thesis are large-scale non-convex mixed-integer programming problems, and this dissertation develops an overall network structure and proposes iterative algorithms for solving these instances. The instances of both the rough fleet assignment model and the timetable model created in this thesis are large-scale mixed-integer programming problems, and this dissertation develops subproblem schemes for solving these instances. Based on these solution algorithms, this dissertation also presents computational results for these large-scale instances. To validate the models and solution algorithms developed, this thesis also compares the daily flight schedules that it designs with the schedules of the existing airlines. Furthermore, it creates instances that represent different economic and fuel-price conditions and derives schedules under these different conditions. In addition, it discusses the implications of using new aircraft in future flight schedules. Finally, future research in three areas (model, computational method, and simulation for validation) is proposed.
Task Scheduling in Desktop Grids: Open Problems
NASA Astrophysics Data System (ADS)
Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny
2017-12-01
We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.
Optimization Models for Scheduling of Jobs
Indika, S. H. Sathish; Shier, Douglas R.
2006-01-01
This work is motivated by a particular scheduling problem faced by logistics centers that perform aircraft maintenance and modification. Here we concentrate on a single facility (hangar) equipped with several work stations (bays). Specifically, a number of jobs have already been scheduled for processing at the facility; the starting times, durations, and work station assignments for these jobs are assumed known. We are interested in how best to schedule a number of new jobs that the facility will process in the near future. We first develop a mixed integer quadratic programming (MIQP) model for this problem. Since the exact solution of this MIQP formulation is time-consuming, we develop a heuristic procedure based on existing bin packing techniques. This heuristic is further enhanced by the application of certain local optimality conditions. PMID:27274921
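A bin-packing-style heuristic for the new jobs can be sketched as follows. This earliest-start fit with a longest-job-first pass is our own illustration of the general idea, not the authors' procedure, and the data layout is likewise assumed.

    def fit_new_jobs(existing, new_jobs, n_bays):
        """Place new jobs into hangar bays around already-committed work.
        existing: {bay: [(start, end), ...]}; new_jobs: [(name, duration), ...].
        Longest jobs are packed first; each goes to its earliest feasible start."""
        bays = {b: sorted(existing.get(b, [])) for b in range(n_bays)}
        placed = {}
        for name, dur in sorted(new_jobs, key=lambda j: -j[1]):   # longest first
            best = None
            for b, busy in bays.items():
                t = 0
                for s, e in busy:
                    if t + dur <= s:                # fits in the gap before (s, e)
                        break
                    t = max(t, e)                   # otherwise start after this booking
                if best is None or (t, b) < best:
                    best = (t, b)
            t, b = best
            bays[b] = sorted(bays[b] + [(t, t + dur)])
            placed[name] = (b, t, t + dur)
        return placed

    existing = {0: [(0, 4), (6, 9)], 1: [(2, 5)]}
    print(fit_new_jobs(existing, [("J1", 2), ("J2", 3)], n_bays=2))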
Improved NSGA model for multi objective operation scheduling and its evaluation
NASA Astrophysics Data System (ADS)
Li, Weining; Wang, Fuyu
2017-09-01
Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method based on an improved NSGA algorithm is used to shorten operation time, reduce operation cost, and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling; the Pareto solution set is obtained through MATLAB simulation, and the data are standardized. The optimal scheduling scheme is then selected using the combined entropy weight-TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
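The entropy weight-TOPSIS selection step is generic enough to sketch directly: entropy derives criterion weights from the data spread, and TOPSIS ranks the Pareto alternatives by closeness to the ideal point. The schedule data below are toy values; the paper's actual criteria and weights are not given in the abstract.

    import numpy as np

    def entropy_topsis(X, benefit):
        """Rank alternatives by entropy-weighted TOPSIS.
        X: alternatives x criteria matrix; benefit[j] is True if larger is better."""
        X = np.asarray(X, dtype=float)
        P = X / X.sum(axis=0)                                # column-normalize
        k = 1.0 / np.log(len(X))
        entropy = -k * np.sum(P * np.log(np.where(P > 0, P, 1.0)), axis=0)
        w = (1.0 - entropy) / (1.0 - entropy).sum()          # entropy weights
        V = w * X / np.linalg.norm(X, axis=0)                # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - worst, axis=1)
        return d_minus / (d_plus + d_minus)                  # closeness: higher is better

    # Toy Pareto schedules: columns = (time, cost, risk), all "smaller is better".
    schedules = [[120, 30, 0.2], [100, 45, 0.3], [110, 35, 0.1]]
    scores = entropy_topsis(schedules, benefit=[False, False, False])
    print("closeness:", scores.round(3), "-> pick schedule", int(scores.argmax()))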
Artificial Immune Algorithm for Subtask Industrial Robot Scheduling in Cloud Manufacturing
NASA Astrophysics Data System (ADS)
Suma, T.; Murugesan, R.
2018-04-01
The current generation of the manufacturing industry requires an intelligent scheduling model to achieve effective utilization of distributed manufacturing resources, which motivated us to work on an Artificial Immune Algorithm for subtask robot scheduling in cloud manufacturing. This scheduling model enables collaborative work between industrial robots in different manufacturing centers. This paper discusses two optimization objectives: minimizing cost and balancing the load of industrial robots through scheduling. To solve these scheduling problems, we use an algorithm based on the Artificial Immune system. The parameters are simulated with MATLAB and the results are compared with existing algorithms, showing better performance.
Extended precedence preservative crossover for job shop scheduling problems
NASA Astrophysics Data System (ADS)
Ong, Chung Sin; Moin, Noor Hasnah; Omar, Mohd
2013-04-01
The job shop scheduling problem (JSSP) is one of the most difficult combinatorial scheduling problems. A wide range of genetic algorithms based on two-parent crossover have been applied to solve the problem, but multi-parent (more than two parents) crossover for the JSSP is still lacking. This paper proposes the extended precedence preservative crossover (EPPX), which uses multiple parents for recombination in genetic algorithms. EPPX is a variation of the precedence preservative crossover (PPX), one of the crossovers that perform well in finding solutions for the JSSP. EPPX uses a vector to determine which parent's gene is selected during recombination for the next generation. Repair (legalization) of offspring is unnecessary because the JSSP is encoded as a permutation with repetition, which guarantees the feasibility of chromosomes. Simulations are performed on a set of benchmarks from the literature, and the results are compared to confirm the viability of multi-parent recombination for solving the JSSP.
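A compact sketch of the crossover follows from the abstract's description: a vector of parent choices drives gene selection, and deleting each chosen gene from every parent keeps the multiset intact, so offspring are feasible by construction. The three-parent example data are our own.

    import random

    def eppx(parents, rng):
        """Extended precedence preservative crossover: at each position a vector
        entry picks which parent donates its leftmost unused gene; that gene is
        then deleted from every parent. Works on permutation-with-repetition
        chromosomes, so no offspring repair is needed."""
        pools = [list(p) for p in parents]
        length = len(parents[0])
        choice_vector = [rng.randrange(len(parents)) for _ in range(length)]
        child = []
        for who in choice_vector:
            gene = pools[who][0]          # leftmost remaining gene of chosen parent
            child.append(gene)
            for pool in pools:
                pool.remove(gene)         # delete first occurrence from each parent
        return child

    rng = random.Random(42)
    # Three parents for a 3-job x 2-operation JSSP (each job index appears twice).
    p1 = [0, 1, 0, 2, 1, 2]
    p2 = [1, 0, 2, 0, 2, 1]
    p3 = [2, 2, 1, 1, 0, 0]
    print(eppx([p1, p2, p3], rng))

Because every parent is a permutation of the same multiset and one occurrence is removed from each in lockstep, the precedence relations of the donating parents are preserved in the child.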